Maximum likelihood search results for Data Challenge 04, models 0 to 9

  B. Racine

Now includes models 7, 8 and 9 (only 300 simulations).


Introduction

This posting summarizes results from analysis of CMB-S4 Data Challenge 04 using a BICEP/Keck-style parametrized foreground model.
There are a few additional changes compared to the DC02 analysis:
we fixed the lensing spectrum to the input one (the previous posting used a lensing spectrum that differed from the input);
we zero out the theory spectrum below \(\ell = 30\), since that is what was done in the simulations.

The current posting is an update of this previous DC04 posting: we introduce a cut that removes outliers in the likelihood values caused by flawed simulations.
We add three sky models, described below.
As in the previous posting, we use power spectra that have been re-generated to fix a bug, as shown in this posting.

In Section 1, we show the main results in the form of figures and histograms, including the foreground parameters.
In Section 2, we report tables of \(r\) constraints for the different sky models and lensing residuals, with and without decorrelation in the ML search.

To-do: in the next run, we will probably relax the priors on the foreground parameters.

Note about the model:
In this analysis, for each realization, we find the set of model parameters that maximizes the likelihood multiplied by priors on the dust and synchrotron spectral index parameters (\(\beta_d\) and \(\beta_s\)). These priors are based on Planck data, so they are quite weak compared with CMB-S4 sensitivity. However, in principle foreground models may violate them, potentially leading to biases (e.g. model 03, where the preferred value of \(\beta_d\) lies outside the prior range; see Figure 2 below).
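
For concreteness, here is a minimal sketch of the prior-weighted objective that is maximized for each realization. The prior centers and widths are the values quoted in this posting (\(\beta_d\): 1.6 ± 0.11, \(\beta_s\): -3.1 ± 0.3); the parameter ordering and the `log_likelihood` callable are placeholders, not the actual pipeline code.

```python
from scipy.optimize import minimize  # used in the hypothetical call below

def neg_log_posterior(params, log_likelihood):
    """Negative of log(likelihood x priors): this is what the ML search minimizes."""
    # params = (r, A_d, beta_d, A_s, beta_s, alpha_d, alpha_s, epsilon, Delta_d)
    beta_d, beta_s = params[2], params[4]
    lnp = log_likelihood(params)                 # placeholder bandpower log-likelihood
    lnp += -0.5 * ((beta_d - 1.6) / 0.11) ** 2   # Planck-based Gaussian prior on beta_d
    lnp += -0.5 * ((beta_s + 3.1) / 0.3) ** 2    # Planck-based Gaussian prior on beta_s
    return -lnp

# Hypothetical usage: one minimization per simulation realization.
# result = minimize(neg_log_posterior, x0, args=(log_likelihood,), method="Nelder-Mead")
```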

The model includes the following parameters: the tensor-to-scalar ratio \(r\), the dust and synchrotron amplitudes \(A_d\) and \(A_s\), their spectral indices \(\beta_d\) and \(\beta_s\), the spatial power spectrum slopes \(\alpha_d\) and \(\alpha_s\), the dust/sync spatial correlation \(\epsilon\), and the dust decorrelation \(\Delta_d\) (see Table 0ter for the fiducial values).

For the decorrelation model, we assume that the cross-spectrum of dust between frequencies \(\nu_1\) and \(\nu_2\) is reduced by the factor \(\exp\{\log(\Delta_d) \times [\log^2(\nu_1 / \nu_2) / \log^2(217 / 353)] \times f(\ell)\}\). For the \(\ell\) dependence we fix the scaling to a linear form, \(f(\ell) = \ell/80\) (pivot scale \(\ell = 80\)).
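
For illustration, a minimal sketch of this suppression factor in code, assuming the linear scaling \(f(\ell) = \ell/80\); the function name is ours and this is not the pipeline implementation.

```python
import numpy as np

def dust_decorrelation_factor(delta_d, nu1, nu2, ell, ell_pivot=80.0):
    """Suppression of the dust cross-spectrum between nu1 and nu2 [GHz], following
    exp{ log(Delta_d) * [log^2(nu1/nu2) / log^2(217/353)] * f(ell) },
    with a linear ell scaling f(ell) = ell / ell_pivot (assumed here)."""
    freq_scaling = np.log(nu1 / nu2) ** 2 / np.log(217.0 / 353.0) ** 2
    f_ell = ell / ell_pivot  # linear scaling, pivot ell = 80
    return np.exp(np.log(delta_d) * freq_scaling * f_ell)

# Example: Delta_d = 0.85 corresponds to a 15% suppression of the 217x353 GHz
# cross-spectrum at ell = 80 (the case of sky model 05 below).
# dust_decorrelation_factor(0.85, 217.0, 353.0, 80.0)  -> 0.85
```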

Note about the simulations: see Section 2 for more detail.

1: Summary Plots

In Figure 1, we summarize the r results.
Some of these models produce strong biases, especially model 8, which still shows a very significant bias even when decorrelation is included in the model. See Section 2 for discussion.

Figure 1: Summary of the \(r\) results for different values of \(A_L\): in red, the analysis of the simulations with \(r = 0.003\); in green, the \(r = 0\) case. On the left, the case without decorrelation in the parametrization; on the right, with linear-\(\ell\) decorrelation. The outer error bars show the standard deviation \(\sigma\) of the \(N_{sims}\) simulations' ML results (\(N_{sims} \simeq 500\)), and the inner error bars show \(\sigma/\sqrt{N_{sims}}\).

Note that for model 4, \(\beta_d\) is in slight tension with the imposed prior (Gaussian centered at 1.6 with width 0.11). Similarly for \(\beta_s\) in model 6 (compared to the Gaussian centered at -3.1 with width 0.3). Model 8 also seems to prefer a higher value of \(\beta_d\).

Figure 2: Histograms of the ML parameters for models 04.00 to 04.09.
In the histograms, the red line marks the mean of the distribution and the black line the fiducial input value (when defined; see this table for 04.00). We also report the mean ± standard deviation of the distribution.

2: DC04 results tables

00: Gaussian foregrounds

The mean values and standard deviations of \(r\) for simulations with simple Gaussian foregrounds are summarized in Table 00. With a 10% lensing residual, we don't quite achieve \(\sigma(r) = 5 \times 10^{-4}\) for sims with \(r = 0\).

Turning on dust decorrelation in the model doesn't cause any bias in \(r\), and the recovered \(\Delta_d\) values are centered around 1 (i.e. the analysis recovers zero decorrelation). Adding this parameter does increase \(\sigma(r)\) somewhat.

Table 00:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 500 realizations with simple Gaussian foregrounds.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | -0.804±2.489 | -0.262±0.897 | -0.098±0.439 | -0.036±0.275
linear | -0.787±2.554 | -0.211±1.003 | -0.047±0.565 | -0.009±0.404
Input \(r\) = 0.003
none   |  2.102±2.618 |  2.675±1.083 |  2.868±0.618 |  2.960±0.445
linear |  2.180±2.812 |  2.768±1.296 |  2.954±0.811 |  3.010±0.609

When comparing these results to the old posting, we find an improvement in \(\sigma(r)\) due to the outlier rejection. We report this decrease in percent in Table 00bis.

Table 00bis:
Decrease in \(\sigma(r)\) due to the new loglike cut (the new bandpass convention has a negligible effect). We report \(100 \times (\sigma(r)_{new} - \sigma(r)_{old}) / \sigma(r)_{old}\).
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | -15.7% | -22.5% | -18.4% | -10.1%
linear |  -8.8% | -11.6% |  -7.3% |  -3%
Input \(r\) = 0.003
none   |  -9.2% | -16.5% | -16.4% | -12.7%
linear |  -5.6% | -12%   | -13%   | -10.6%
Table 0ter:
Fiducial model parameter values.
\(r\) | \(A_d\) | \(\beta_d\) | \(A_s\) | \(\beta_s\) | \(\alpha_d\) | \(\alpha_s\) | \(\epsilon\) | \(\Delta_d\)
0/0.003 | 4.25 \(\mu K^2\) | 1.6 | 3.8 \(\mu K^2\) | -3.1 | -0.4 | -0.6 | 0 | 0

01: PySM a1d1f1s1

As has been previously noted, dust power is much higher in this model (\(A_d \sim 12.5 \mu K^2\)) than for the Gaussian foreground sims (\(A_d = 4.25 \mu K^2\)). The PySM d1 dust model does feature a spatially varying spectral index, but we don't find any detectable decorrelation in this analysis. The PySM s1 synchrotron model yields \(A_s \sim 0.5 \mu K^2\) and there is \(\sim 6\)% correlation between dust and sync.

Note here that the value of \(\sigma(r)\) doesn't change much compared to the 04.00 case despite the higher dust level. This is probably because we only have one realization of the foreground sky (see the note in the introduction), and thus no impact from cosmic variance.

Table 01:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 500 realizations with PySM a1d1f1s1 foregrounds.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | 0.225±2.522 | 0.593±0.931 | 0.590±0.471 | 0.526±0.306
linear | 0.089±2.638 | 0.494±1.100 | 0.479±0.638 | 0.387±0.445
Input \(r\) = 0.003
none   | 3.139±2.609 | 3.524±1.094 | 3.569±0.648 | 3.563±0.483
linear | 3.040±2.768 | 3.469±1.297 | 3.507±0.845 | 3.462±0.654

02: PySM a2d4f1s3

The d4 version of PySM dust adds a second dust component (with a different blackbody temperature and emissivity power law) based on Meisner & Finkbeiner (2014). We are not sure what type of \(\beta_d\) spatial variations are included in this model, but Colin thinks it is more or less the same as for d1. The s3 synchrotron model adds curvature to the synchrotron spectral index: \(\beta_s \rightarrow \beta_s + C \ln (\nu / \nu_C)\). The a2 AME model uses a 2% polarization fraction for AME, which seems very high, but we make no attempt to model AME in this analysis.
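
As an illustration of the curvature term, a minimal sketch of a curved synchrotron scaling; the values of \(C\) and \(\nu_C\) below are arbitrary placeholders, not the PySM s3 defaults.

```python
import numpy as np

def sync_scaling_curved(nu, nu_ref, beta_s, C, nu_C):
    """Synchrotron amplitude scaling from nu_ref to nu (GHz) with a curved
    spectral index, beta_s -> beta_s + C * ln(nu / nu_C)."""
    return (nu / nu_ref) ** (beta_s + C * np.log(nu / nu_C))

# Illustrative call: with C < 0 the spectrum steepens with frequency relative to
# a pure power law, which is one way a fitted beta_s can shift away from the
# input power-law value.
print(sync_scaling_curved(95.0, 23.0, beta_s=-3.1, C=-0.05, nu_C=23.0))
```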

Results for this model show that \(A_d\) is even larger (\(\sim 32.5 \mu K^2\)) than for the d1 dust model. The mean value of \(\beta_d\) decreases from 1.59 (for PySM d1 model) to 1.55, which is probably a sign of the two component dust. The mean value of \(\beta_s\) decreases from -3.05 (for PySM s1 model) to -3.13, which is probably due to synchrotron spectral curvature (and perhaps polarized AME?). Dust–sync correlation is higher, at \(\sim 10\)%, which could be from polarized AME.

Note here that the value of \(\sigma(r)\) doesn't change much compared to the 04.00 case despite the much higher dust level. This is probably because we only have one realization of the foreground sky (see the note in the introduction), and thus no impact from cosmic variance.

Table 02:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 500 realizations with PySM a2d4f1s3 foregrounds.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | -0.230±2.548 |  0.210±0.954 | 0.301±0.478 | 0.316±0.300
linear | -0.542±2.716 | -0.001±1.156 | 0.154±0.659 | 0.177±0.445
Input \(r\) = 0.003
none   |  2.603±2.538 |  3.142±1.078 | 3.281±0.645 | 3.335±0.480
linear |  2.348±2.660 |  2.981±1.235 | 3.162±0.795 | 3.196±0.613

03: PySM a2d7f1s3

The next PySM version uses the Hensley/Draine dust model, which has additional complexity in the dust SED (perhaps described in arXiv:1709.07897?). The level of dust power is similar to sky model 01 (PySM d1 model), but we find that the emissivity power law is even flatter than in the previous case, with \(\beta_d \sim 1.44\).

The recovered mean values of \(r\) are quite erratic and depend on \(A_L\).

Note here that the value of \(\sigma(r)\) doesn't change much compared to the 04.00 case despite the higher dust level. This is probably because we only have one realization of the foreground sky (see the note in the introduction), and thus no impact from cosmic variance.

Table 03:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 500 realizations with PySM a2d7f1s3 foregrounds.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | -0.073±2.510 |  0.431±0.921 | 0.512±0.472 | 0.493±0.313
linear | -0.972±2.643 | -0.242±1.134 | 0.028±0.675 | 0.103±0.476
Input \(r\) = 0.003
none   |  3.102±2.630 |  3.484±1.111 | 3.544±0.663 | 3.542±0.498
linear |  2.232±2.748 |  2.826±1.316 | 3.061±0.872 | 3.135±0.680

04: Ghosh dust model

The Ghosh dust model (described here) is based on GASS HI data with a model for the Galactic magnetic field. For these sims, it is combined with the PySM a2, f1, and s3 components (same as the two previous models).

The analysis of this model yields still smaller values of \(\beta_d \sim 1.3-1.4\). Dust-sync correlation is still present but smaller (2–3%), probably because the Ghosh dust sims know nothing about the PySM synchrotron or AME components. That they are correlated at all is probably because both models are based on data at larger scales.

Dust decorrelation is small in absolute terms, but detected at high significance. Using a model without dust decorrelation leads to a large positive bias on \(r\) in the range \(4-5 \times 10^{-3}\). Dust decorrelation with linear \(\ell\) scaling produces the smallest biases, but still quite large compared to other sky models.

Table 04:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 500 realizations with Ghosh dust model plus PySM a2f1s3 foregrounds.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   |  2.177±2.644 |  3.196±1.189 |  4.105±0.778 |  5.251±0.660
linear | -1.080±2.723 | -0.380±1.280 | -0.623±0.813 | -0.898±0.617
Input \(r\) = 0.003
none   |  5.596±3.038 |  6.657±1.432 |  7.837±0.994 |  9.371±0.903
linear |  2.088±3.036 |  2.770±1.504 |  2.552±1.032 |  2.261±0.841

05: Gaussian decorrelated dust model

This model has extremely large dust decorrelation (15% between 217 and 353 GHz at \(\ell\) = 80), but it exactly follows the assumed functional form of decorrelation with linear \(\ell\) scaling, so we can still draw some useful conclusions.

When we choose decorrelation with linear \(\ell\) scaling to match the sims, we find no bias on \(r\) and recover \(\Delta_d\) = 0.85.
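
As a consistency check of the parametrization (assuming the linear scaling is normalized so that \(f(80) = 1\)), at the pivot frequencies and pivot multipole the suppression factor reduces to \(\Delta_d\) itself:
\[
\exp\left\{ \log(0.85) \times \frac{\log^2(217/353)}{\log^2(217/353)} \times \frac{80}{80} \right\} = 0.85 ,
\]
i.e. a 15% suppression of the 217x353 GHz dust cross-spectrum at \(\ell = 80\), matching the input decorrelation of this model.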

An important point to note from this model is that, even in the unbiased case where the decorrelation is correctly modeled in both \(\nu\) and \(\ell\), we find \(\sigma(r) \sim 1.4 \times 10^{-3}\), much larger than the target sensitivity of CMB-S4. This shows that, for extreme levels of foreground decorrelation, we lose the ability to clean foregrounds from the maps, because the foreground modes are significantly independent between the various CMB-S4 frequencies. Regardless of whether one does map-based cleaning or fits the power spectra as we do here, the only way to improve sensitivity would be to use more, and more closely spaced, observing bands. It also makes the point that our Fisher forecasts should assume some non-zero level of decorrelation: adding decorrelation as a free parameter to a forecast that assumes \(\Delta_d = 1\) only captures part of the statistical penalty.

Table 05:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 50 realizations with Gaussian decorrelated dust foreground.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | 34.442±6.456 | 31.533±3.859 | 34.083±3.359 | 42.910±3.300
linear | -0.825±3.233 | -0.272±1.722 | -0.078±1.284 | -0.003±1.118
Input \(r\) = 0.003
none   | 38.010±7.055 | 35.224±4.257 | 38.041±3.613 | 47.093±3.464
linear |  1.926±3.346 |  2.632±1.826 |  2.908±1.412 |  3.027±1.284

06: Flauger MHD foregrounds

Our understanding is that this model uses MHD simulations to consistently model polarized dust and synchrotron in the Galactic magnetic field. It is therefore quite interesting that this analysis finds a negative dust-sync correlation, with \(\epsilon \sim -0.36\). The dust power is similar to the Gaussian sims, and \(\beta_d\) matches the Planck value of 1.59. The analysis finds a synchrotron SED power law that is much flatter than usual, \(\beta_s \sim -2.6\), in tension with the prior at about \(1.5\sigma\).

This model does not show any significant dust decorrelation. In general, the results for this model look nearly as good as the simple Gaussian foregrounds (sky model 00).

Table 06:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 500 realizations with Flauger MHD foreground model.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | -0.748±2.398 | -0.191±0.916 | -0.035±0.484 | 0.025±0.321
linear | -0.546±2.492 |  0.018±1.071 |  0.137±0.654 | 0.134±0.476
Input \(r\) = 0.003
none   |  2.406±2.938 |  2.883±1.239 |  3.023±0.708 | 3.081±0.500
linear |  2.621±3.010 |  3.111±1.361 |  3.218±0.855 | 3.209±0.655

07: Amplitude modulated Gaussian foregrounds.

This model is described here. It is a modified version of model 00, in which the brightness of the dust varies across the sky. It does not include any decorrelation. It was mostly developed to study the effect of mask variations, shown in this separate posting.

Table 07:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 150 realizations with amplitude modulated Gaussian foregrounds.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | -0.949±2.509 | -0.316±0.900 | -0.128±0.427 | -0.056±0.254
linear | -0.950±2.620 | -0.249±1.062 | -0.054±0.599 | -0.014±0.417
Input \(r\) = 0.003
none   |  2.438±2.557 |  2.787±1.078 |  2.904±0.635 |  2.959±0.466
linear |  2.627±2.723 |  2.967±1.322 |  3.051±0.885 |  3.049±0.693

08: MKD model (3D multi-layer).

Model 8 was developed by Martinez-Solaeche, Karakci & Delabrouille (see this paper). As described in their abstract, it is a "three-dimensional model of polarised galactic dust emission that takes into account the variation of the dust density, spectral index and temperature along the line of sight, and contains randomly generated small scale polarisation fluctuations. The model is constrained to match observed dust emission on large scales, and match on smaller scales extrapolations of observed intensity and polarisation power spectra." It is based on a multi-layer model in which \(T_d\), \(\beta_d\) and the optical depth \(\tau\) are defined in each layer, constrained by Planck, IRAS and 3D dust extinction maps. A simple model of the Galactic magnetic field is used to generate the large-scale polarization. For the small scales, Gaussian random I, E and B fields based on Planck observed power spectra are generated in each layer. The maps are then extrapolated to different frequencies, based on random realizations of \(\beta_d\) and \(T_d\) in the different layers, which define the dust SED.

This model naturally produces dust decorrelation, due to the spatially varying SED. It is also expected to produce a flattening of the SED at low frequency, as briefly reported in figure 19 of the paper. This might explain the large bias we observe on \(r\), which is reduced, but still present at high significance, when a decorrelation parameter is included.
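
To illustrate why summing layers with different SEDs produces decorrelation, here is a toy sketch (not the MKD code): two modified-blackbody layers with different \(\beta_d\) add with different relative weights at different frequencies, so the cross-frequency correlation drops below unity. All numbers below are arbitrary illustrative values.

```python
import numpy as np

# Toy line-of-sight sum of two dust "layers" with different spectral indices.
rng = np.random.default_rng(0)
npix = 10000
layer1 = rng.standard_normal(npix)   # polarization pattern of layer 1
layer2 = rng.standard_normal(npix)   # independent pattern of layer 2

def mbb_weight(nu, beta, T=20.0, nu0=353.0):
    """Modified-blackbody scaling from nu0 to nu (GHz), in RJ temperature units."""
    h_over_k = 0.048  # K per GHz
    x, x0 = h_over_k * nu / T, h_over_k * nu0 / T
    return (nu / nu0) ** (beta + 1.0) * np.expm1(x0) / np.expm1(x)

def sky(nu):
    # Each layer has its own beta_d, so the relative layer weights change with frequency.
    return mbb_weight(nu, beta=1.4) * layer1 + mbb_weight(nu, beta=1.8) * layer2

m217, m353 = sky(217.0), sky(353.0)
rho = np.corrcoef(m217, m353)[0, 1]
print(f"217x353 GHz correlation of the toy two-layer sky: {rho:.3f}")  # slightly below 1
```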

Table 08:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 150 realizations with the multi-layer MKD foreground model.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | 5.054±2.789 | 4.741±1.128 | 4.106±0.629 | 3.535±0.432
linear | 2.948±2.873 | 2.792±1.292 | 2.408±0.844 | 2.071±0.661
Input \(r\) = 0.003
none   | 8.045±2.709 | 7.584±1.274 | 7.004±0.826 | 6.540±0.634
linear | 6.094±2.799 | 5.803±1.496 | 5.501±1.092 | 5.260±0.896

09: Vansyngel model.

Model 9 was developed by Vansyngel et al. (see this paper, and this posting). In this model, each layer has the same intensity (constrained by the Planck intensity map) but a different magnetic field realization. Q and U maps are produced by integrating along the LOS over these multiple layers of magnetic field. Contrary to the previous model, the magnetic field is simulated down to small turbulent scales, which produces more physically motivated non-Gaussian fluctuations in the maps (down to small scales). These maps are then linearly rescaled to match the TE correlation and E-B asymmetry from Planck. (Note that in the maps we study here, there is no TE correlation; see here.)

This model naturally produces non-Gaussian dust patterns, but the decorrelation is ad hoc, introduced via an extrapolation to different frequencies using a pixel-dependent modified blackbody emission law. It is much stronger than in, say, model 01, which also has such an extrapolation. This might be due to the fact that PySM uses \(\beta_d\) and \(T_d\) maps from \(\texttt{Commander}\), whereas Flavien uses the same recipe as the FFP sims described here, which use the GNILC maps. Flavien made this plot, using only 1% of the pixels of the maps (note that the reduction of the spread might be due to the resolution of the Commander map, which might have been smoothed).

Note that in the current results, the bandwidth at 20 GHz used in the ML search was the usual one used for the other models (5 GHz), whereas these simulations were generated with a width of 6 GHz.
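
The size of such a bandwidth mismatch can be illustrated by band-averaging a synchrotron-like power law over top-hat bands of the two widths; this is a toy sketch with assumed top-hat bandpasses, not the actual bandpass treatment in the pipeline.

```python
import numpy as np

def band_average_powerlaw(nu_center, width, beta, nu_ref=23.0, n=201):
    """Average a power-law SED (nu/nu_ref)**beta over a top-hat band of the given width (GHz)."""
    nus = np.linspace(nu_center - width / 2.0, nu_center + width / 2.0, n)
    return np.mean((nus / nu_ref) ** beta)

# Toy comparison at 20 GHz for beta_s = -3.1: a 5 GHz top-hat (as assumed in the
# ML search) versus a 6 GHz top-hat (as used to generate these simulations).
s5 = band_average_powerlaw(20.0, 5.0, -3.1)
s6 = band_average_powerlaw(20.0, 6.0, -3.1)
print(f"relative difference in the band-averaged amplitude: {100.0 * (s6 / s5 - 1.0):.2f}%")
```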

Table 09:
Mean \(r \times 10^3\) and \(\sigma(r) \times 10^3\) from sets of 150 realizations with Vansyngel foreground model.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03
Input \(r\) = 0
none   | 2.856±2.587 |  2.827±1.016 |  2.396±0.552 |  1.966±0.370
linear | 0.100±2.885 | -0.070±1.327 | -0.494±0.819 | -0.736±0.599
Input \(r\) = 0.003
none   | 5.897±2.807 |  5.687±1.269 |  5.304±0.775 |  4.973±0.570
linear | 3.493±3.009 |  3.145±1.515 |  2.772±1.025 |  2.554±0.828

Appendix A: Table of results for the CDT report/forecasting paper.

As can be seen in figure 4 of the last posting, we still have biases even in the case of the Gaussian foreground simulations, mostly on the foreground parameters. This has been improved in the current posting.

Just as for the CDT report, we remove this "algorithmic bias" to focus on the bias produced by the different dust simulations. We also choose to report results using the linear \(\ell\) dependence for the decorrelation model; see the caption of Table 10. As we have seen, the "algorithmic bias" on \(r\) is now basically irrelevant in this case. Note that the biases on the foreground parameters have also improved.
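
As a worked example of this subtraction (using the means from Tables 00 and 01): for sky model 04.01 with \(r = 0\), \(A_L = 0.1\) and linear decorrelation,
\[
\left[ 0.479 - (-0.047) \right] \times 10^{-3} = 0.526 \times 10^{-3} \simeq 5.3 \times 10^{-4} ,
\]
which is the value reported for 04.01 in Table 10.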

Table 10:
Bias on \(r\), obtained by subtracting the "algorithmic bias" from the mean of \(r\) for each of the complex foreground models, for the case \(A_L\) = 0.1, assuming linear decorrelation for the Gaussian model. The "algorithmic bias" is estimated from the Gaussian simulations: \((-0.047 \pm 0.565) \times 10^{-3}\) in the \(r = 0\) case and \((2.954 \pm 0.811) \times 10^{-3}\) in the \(r = 0.003\) case (see Table 00). For this Gaussian foreground case, we report a bias based on the absolute value of the sample variance on the mean over \(\simeq 500\) sims, which acknowledges that statistical limitations exist even for closed-loop tests calibrated with MC sims.
Model | \(r\) bias \(\times 10^4\) | \(\sigma(r) \times 10^4\)
Input \(r\) = 0
04.00 |  0.3 |  5.6
04.01 |  5.3 |  6.4
04.02 |  2.0 |  6.6
04.03 |  0.7 |  6.7
04.04 | -5.8 |  8.1
04.05 | -0.3 | 12.8
04.06 |  1.8 |  6.5
04.07 | -0.1 |  6.0
04.08 | 24.5 |  8.4
04.09 | -4.5 |  8.2
Input \(r\) = 0.003
04.00 |  0.4 |  8.1
04.01 |  5.5 |  8.5
04.02 |  2.1 |  8.0
04.03 |  1.1 |  8.7
04.04 | -4.0 | 10.3
04.05 | -0.5 | 14.1
04.06 |  2.6 |  8.6
04.07 |  1.0 |  8.8
04.08 | 25.5 | 10.9
04.09 | -1.8 | 10.2