B. Racine

In this posting, we show that the mean of the simulations deviates from the expectation value of the input fiducial model. We then try two corrections, one multiplicative and one additive. This reduces the bias at the ML parameter level for some parameters, but a significant deviation remains.

In the DC4 ML search results, we still see residuals at the parameter level, especially for \(A_L\) and for the \(\alpha\)'s. Here we check whether the bias is already present at the bandpower level.

In this posting, we also see that when the input is an expectation value corresponding to a given set of parameters, the ML search recovers the proper values.

We compute the bandpower deviation, shown in figure 1 as follows:

- We load all the DC04, model 00, simulations' bandpowers.
- We mask the bad simulations from this array.
- We compute the mean of the bandpowers, separately for the cases \(r=0\) and \(r=0.003\).
- We compute the standard deviation from these same bandpowers: std(bp).
- We compute the expectation value \(bp_{in}\) corresponding to the model that was used for these simulations (for figure 1, the case of no decorrelation and \(A_L=0.1\); for figure 2, all cases are used).
- We compute the bandpower deviation as \((bp-bp_{in})/std(bp)\) for all bandpowers.
- In the plot, we show the mean of these bandpower deviations as well as the std divided by \(\sqrt{N_{sims}}\), so that we effectively see how significant the deviation of the mean is.
  **So a point that is 1 sigma away from 0 is a 1 sigma deviation of the mean from the input fiducial model, detected at a significance visible via the errorbar.**
- We report the same deviations after removing the bias, either by subtracting or by dividing out the correction; by definition, the corrected deviation then vanishes.
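The deviation computation in the steps above can be sketched in a few lines of numpy. The array names (`bp_sims`, `bp_in`, `good`) and the synthetic numbers are placeholders for illustration, not the actual DC04 data layout:

```python
# Sketch of the bandpower-deviation computation; all inputs here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_bands = 500, 9

# stand-in for the loaded DC04 model-00 simulation bandpowers
bp_in = np.linspace(0.1, 1.0, n_bands)                    # fiducial expectation value
bp_sims = bp_in + 0.05 * rng.standard_normal((n_sims, n_bands))

good = np.ones(n_sims, dtype=bool)                        # mask flagging bad simulations
good[[3, 42]] = False                                     # e.g. two bad sims
bp = bp_sims[good]

std_bp = bp.std(axis=0, ddof=1)                           # std(bp) over simulations
dev = (bp - bp_in) / std_bp                               # per-sim bandpower deviation

mean_dev = dev.mean(axis=0)                               # the quantity plotted in figure 1
err_mean = dev.std(axis=0, ddof=1) / np.sqrt(good.sum())  # std / sqrt(N_sims)
```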

Comments on the figure:

- The residuals show a significant deviation from 0, with what looks like a strong dependence on the beam (recall that the 20GHz channel has a tight beam, similar to the high-frequency ones). The \(\ell\) dependence might also point towards a beam mismatch. A simple plot of the bandpower window functions doesn't reveal obvious issues; this needs to be investigated further.

We then correct each simulation's bandpowers before running the multicomponent ML search, either by subtracting \(\mathrm{mean}(bp_{sims}) - bp_{in}\), or by dividing them by \(\mathrm{mean}(bp_{sims})/bp_{in}\). This forces the mean of the sims to match the input fiducial model. We also ran a case where we fixed the \(\alpha\)'s to their fiducial values in the ML search.
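A minimal sketch of the two corrections, assuming the bandpowers sit in a plain `(n_sims, n_bands)` array (the names and numbers are illustrative, not the pipeline's API):

```python
# Sketch of the additive and multiplicative bias corrections; inputs are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_bands = 500, 9
bp_in = np.linspace(0.1, 1.0, n_bands)
# inject a small bias so mean(bp_sims) != bp_in, mimicking what is seen in the sims
bp_sims = 1.02 * bp_in + 0.05 * rng.standard_normal((n_sims, n_bands))

mean_bp = bp_sims.mean(axis=0)

# additive correction: subtract (mean(bp_sims) - bp_in)
bp_add = bp_sims - (mean_bp - bp_in)
# multiplicative correction: divide by (mean(bp_sims) / bp_in)
bp_mul = bp_sims / (mean_bp / bp_in)

# either way, the mean of the corrected sims matches the input model by construction
assert np.allclose(bp_add.mean(axis=0), bp_in)
assert np.allclose(bp_mul.mean(axis=0), bp_in)
```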

Comments on the figure:

- When using an additive correction, we can see that the bias on \(\alpha\)'s and \(A_L\) is reduced and the histograms still look decent. We still have a weak but significantly detected bias on the foreground amplitudes.
- Note that for the bias plot, we report deviations in units of the \(\sigma\) of the mean, i.e. the standard deviation of the histogram divided by \(\sqrt{N_{sims}}\). A deviation at the 4 \(\sigma\) level can therefore correspond to a negligible bias that is nevertheless detected at 4 \(\sigma\) significance.
- For the multiplicative correction, the bias plot looks similar, but note that the histograms show a wider spread of the parameter recovery.
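A toy calculation makes the difference between the two corrections concrete: subtracting a constant offset leaves the scatter of the bandpowers untouched, while dividing by the ratio \(\mathrm{mean}(bp)/bp_{in}\) rescales it band by band. This rescaling is one plausible contributor to the wider parameter histograms seen in the multiplicative case; the numbers below are purely illustrative.

```python
# Toy check: the additive correction preserves the scatter, the multiplicative
# one rescales it by bp_in / mean(bp). All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
bp_in = 1.0
bp_sims = 1.1 + 0.05 * rng.standard_normal(10000)   # bandpowers biased high by ~10%

bp_add = bp_sims - (bp_sims.mean() - bp_in)         # additive correction
bp_mul = bp_sims / (bp_sims.mean() / bp_in)         # multiplicative correction

# additive: std unchanged; multiplicative: std scaled by bp_in / mean(bp)
print(bp_sims.std(), bp_add.std(), bp_mul.std())
```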

We also plot the \(-2\log\mathcal{L}\) value for all 1000 simulations (\(r=0\) and \(r=0.003\) combined) for the different values of \(A_L\). We see that the distribution shifts slightly to lower values when we apply the additive correction, but degrades when we use the multiplicative one.
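For reference, a minimal sketch of a Gaussian \(-2\log\mathcal{L}\) evaluated per simulation, assuming a fixed diagonal bandpower covariance; the actual multicomponent likelihood used in the ML search is more involved, and all names and numbers here are illustrative.

```python
# Sketch of a per-simulation Gaussian -2*log(Likelihood) over bandpowers.
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_bands = 1000, 9
model = np.linspace(0.1, 1.0, n_bands)              # model expectation value
cov = np.diag(np.full(n_bands, 0.05**2))            # assumed bandpower covariance
icov = np.linalg.inv(cov)

bp = model + 0.05 * rng.standard_normal((n_sims, n_bands))

resid = bp - model
m2lnL = np.einsum('si,ij,sj->s', resid, icov, resid)  # one value per simulation

# under the Gaussian assumption this follows a chi^2 with n_bands degrees of
# freedom, so the histogram of m2lnL should peak near n_bands
print(m2lnL.mean())
```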