B. Racine

In this posting, we run the maximum likelihood search on DC4, removing the 85 GHz and 145 GHz channels.

The goal is to assess some benefits of the split-band strategy. Since we did not create a new dataset redistributing the effort into the joint bands,
we will have a decrease in sensitivity. This posting focuses mostly on the possible increase in bias when dropping the split bands.

This posting is a modification of the analysis of CMB-S4 Data Challenge 04 using a BICEP/Keck-style parametric multicomponent analysis, reported in this posting, where we also described the parameterization, studied the dependence on priors, and presented the simulations.

In the DC4 challenge, we used 9 bands (20, 30, 40, 85, 95, 145, 155, 220 and 270 GHz), specified here and plotted here.
Splitting the "90" and "150" bands should help to check for possible foreground residuals, but its importance will strongly depend on the complexity
of the foreground model.

The band splitting was studied with the performance-based Fisher forecasting code in this posting.
Switching between the "optimal" and "force-split" paths, one can see that the forecast prefers split bands once the sensitivity reaches \(\sigma(r) = 5\times 10^{-4}\).

In the current posting, we study the effect of the band splitting on the bias in the recovered r for 10 different sky models.

In Figure 1, we summarize the r results for the full frequency coverage and for the case where we drop the 85 and 145 GHz channels.
We show results for different values of \(A_L\), which is a crude approximation to the delensing performance.
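As an illustration of how \(A_L\) enters as a crude delensing approximation, the model BB power can be written as r times a tensor template plus \(A_L\) times the lensing template. The template values below are placeholders for illustration only, not the actual DC4 spectra:

```python
import numpy as np

# Illustrative sketch only: the bandpower values below are hypothetical
# placeholders, not DC4 data. The model BB power is r times a tensor
# template plus A_L times a lensing template, so A_L crudely mimics the
# residual lensing power left after delensing.

cl_tensor_r1 = np.array([0.5, 0.3, 0.2])  # hypothetical BB template for r = 1
cl_lensing = np.array([1.0, 2.0, 3.0])    # hypothetical lensing BB template

def model_bb(r, A_L):
    """Total model BB bandpowers for given r and residual lensing A_L."""
    return r * cl_tensor_r1 + A_L * cl_lensing

# A_L = 0.1 corresponds to 90% effective delensing in this approximation
print(model_bb(0.003, 0.1))
```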

We also show L = -2 log(Likelihood)/dof, where the Likelihood is the one returned by the
Minuit minimizer and dof is the number of degrees of freedom, i.e. \(N_{\rm bandpowers} \times N_{\rm channels} (N_{\rm channels}+1)/2 - N_{\rm params}\).
Here \(N_{\rm bandpowers}=9\), \(N_{\rm params}=10\), and \(N_{\rm channels}\) is either 9 or 7.
This is not exactly a reduced \(\chi^2\), since we are not using a Gaussian likelihood (we use Hamimeche-Lewis), but it is a good approximation.
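The degrees-of-freedom count above can be sketched as follows, using the values quoted in this posting (the function name is just for illustration):

```python
# Degrees-of-freedom count for L = -2 log(Likelihood)/dof as defined above:
# the number of auto- and cross-spectrum bandpowers minus the number of
# fitted parameters. N_bandpowers = 9 and N_params = 10 as in the posting;
# N_channels is 9 for the full coverage, 7 with the 85 and 145 GHz
# channels dropped.

def dof(n_channels, n_bandpowers=9, n_params=10):
    """N_bandpowers * N_channels * (N_channels + 1) / 2 - N_params."""
    return n_bandpowers * n_channels * (n_channels + 1) // 2 - n_params

print(dof(9))  # full frequency coverage -> 395
print(dof(7))  # without 85 and 145 GHz  -> 242
```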

For a description of the sky models used, see this posting.

For a table with the numbers plotted here, see this table.

Comments on the figure:

- Some of these models (4, 8, 9) have biases that do not disappear when we take decorrelation into account. Note that these biases depend on the observed region and would be larger for the Chile mask, for instance (see here).
- Switching from the full frequency coverage to the case with no 85 GHz and no 145 GHz, we see an increase in \(\sigma(r)\), as expected, but no significant shifts in the biases. Note nevertheless that the bias for model 9 gets worse, whereas it is reduced for model 4.
- Model 5, which has strong decorrelation, does not show any bias, since we are using the same model in the fit. It is hard to tell whether we do better on \(\sigma(r)\), since we are not comparing similar efforts here.
- Looking at L = -2 log(Likelihood)/dof, we have a wider spread in the case with no 85 GHz and no 145 GHz bands, due to the reduced number of degrees of freedom. This probably explains why we seem to see more evidence of failure than with the full frequency coverage. The significance of the failure stays roughly constant, though, except maybe for model 9. This is probably because the departure from our model is not dominated by the 85-155 GHz region.
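The wider spread with fewer channels follows directly from counting statistics: if -2 log(Likelihood) behaved exactly as a \(\chi^2\) with dof degrees of freedom (only approximately true here, since the likelihood is Hamimeche-Lewis rather than Gaussian), the reduced statistic would have mean 1 and standard deviation \(\sqrt{2/{\rm dof}}\):

```python
import math

# If -2 log(Likelihood) were chi^2-distributed with `dof` degrees of
# freedom, chi^2/dof would have mean 1 and standard deviation
# sqrt(2/dof), so dropping channels (smaller dof) widens the spread.

def reduced_chi2_sigma(dof):
    return math.sqrt(2.0 / dof)

sigma_full = reduced_chi2_sigma(395)  # 9 channels: 9*9*10/2 - 10
sigma_drop = reduced_chi2_sigma(242)  # 7 channels: 9*7*8/2 - 10
print(f"{sigma_full:.3f}  {sigma_drop:.3f}  ratio {sigma_drop / sigma_full:.2f}")
```

Dropping the 85 and 145 GHz channels thus widens the expected scatter of the reduced statistic by roughly 30% under this approximation.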

For the 10 foreground models considered in our data challenge, we do not see a significant bias increase when removing the 85 and 145 GHz channels. Note that, except for model 5, most models do not have strong decorrelation, as can be seen here. It is hard to assess how helpful the band splitting would be in the case of strong decorrelation, since we are not comparing cases with a similar effort.