(C. Umiltà)
This posting investigates the bias observed in ML searches on CMB-S4 simulations. In a recent posting we have seen that, even for a simple "Gaussian foreground" + ideal lensing template case, we observe a bias in the ML search results. As observed there, the mean of the bandpowers of the simulations does not correspond to the expectation value, i.e., our theoretical model. We first thought this could be the source of the observed bias. However, even when we adjust for it by subtracting the bandpower deviation from the simulations, a residual bias remains. So, where does this bias come from?

I prepare various sets of bandpowers generated as Gaussian realizations, with the expectation value as the mean and the scatter set by the bandpower covariance matrix. I scale the standard deviation by different fractions (from 10% to 100%) to see how the bias evolves as the distribution of the bandpowers widens around the mean.
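The Gaussian realization step can be sketched as follows. This is a minimal illustration, not the actual analysis code: the expectation values, covariance matrix, and number of bandpowers below are placeholders, and scaling the standard deviation by a fraction `frac` corresponds to scaling the covariance by `frac**2`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder inputs (the real analysis would use the CMB-S4 products):
n_bands = 9                             # hypothetical number of bandpowers
expv = np.linspace(0.01, 0.1, n_bands)  # placeholder expectation values
bpcm = 0.001 * np.eye(n_bands)          # placeholder bandpower covariance

def draw_bandpowers(expv, bpcm, frac, rng):
    """Draw one Gaussian bandpower realization with the std scaled by `frac`.

    Scaling the standard deviation by `frac` scales the covariance by frac**2.
    """
    return rng.multivariate_normal(expv, frac**2 * bpcm)

# One realization at 50% of the nominal scatter
bp = draw_bandpowers(expv, bpcm, 0.5, rng)
```

At `frac=0` the draw collapses onto the expectation value, so any bias that survives there would point at the model itself rather than the scatter.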
Fig. 1 shows the results of the searches. As in the standard analysis, I run 1000 realizations for each set. I consider two cases: either I generate a new set of bandpowers for each of the 1000 realizations and reuse it across the fractions, or I generate new bandpowers for each of the 1000 realizations AND for each of the 10 different fractions of the bandpower covariance matrix (bpcm). In the former case it is easier to visualize the bias evolution, since the same scatter is simply rescaled. For the theoretical model I choose \(A_L=1\). For the analysis I consider only the case of no decorrelation.
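The two sampling strategies can be sketched like this; all inputs are placeholders, and my reading that the first case reuses the same underlying draws across fractions is an assumption about the setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_real = 9, 1000
expv = np.linspace(0.01, 0.1, n_bands)  # placeholder expectation values
bpcm = 0.001 * np.eye(n_bands)          # placeholder bandpower covariance
fracs = np.linspace(0.1, 1.0, 10)       # 10% to 100% of the nominal std

# Case 1 (assumed): one set of zero-mean draws, rescaled for every fraction.
# The same scatter pattern is reused, so the bias evolution is easy to follow.
base = rng.multivariate_normal(np.zeros(n_bands), bpcm, size=n_real)
sets_case1 = {f: expv + f * base for f in fracs}

# Case 2: independent draws for every (realization, fraction) pair.
sets_case2 = {f: rng.multivariate_normal(expv, f**2 * bpcm, size=n_real)
              for f in fracs}
```

Each dictionary maps a std fraction to a `(1000, n_bands)` array of bandpower realizations that would then be fed to the ML search.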
The observed bias seems to increase smoothly with increasing covariance.