B. Racine

New round of selection and changes in priors. We now look at the effect of progressively releasing the priors on parameters, and we also impose a new cut on the bad simulations (outliers in map domain).
**Tables and default plots now show a case with only generous flat priors on all parameters.** Other tables with different priors are linked.

Results including the Chile (04b) and Pole (04c) masks are shown in this separate posting.

This posting summarizes results from analysis of CMB-S4 Data Challenge 04
using a BICEP/Keck-style parametrized foreground model.

There are a few additional changes compared to the DC02 analysis:

• we fixed the lensing spectrum to the input one. The previous posting used a lensing spectrum that differs from the input,

• we zero out the theory spectrum below \(\ell=30\), since that's what was done in the sims.

The current posting is a slight update over this previous DC04 posting, where we introduced models 7, 8 and 9.

In section 1, we show the main results in the form of figures and histograms including foreground parameters.

In section 2, we isolate the effect of cutting extra simulations based on an outlier detection at the map level.

In section 3, we report tables of r constraints for the different sky models, for different lensing residuals, with and without decorrelation in the ML search.

__Note about the model:__

In the former analyses, for each realization, we found the set of model parameters that maximizes the likelihood multiplied by priors on the dust and sync spectral index parameters (\(\beta_d\) and \(\beta_s\)).
These priors are based on Planck data, so they are quite weak in comparison with CMB-S4 sensitivity. However, in principle, foreground models may violate them, potentially leading to biases (e.g. model 03, where the preferred value of \(\beta_d\) is outside the prior range; see Figure 2 below).
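The maximization of "likelihood times priors" is equivalent to minimizing \(-\log\mathcal{L}\) plus Gaussian penalty terms on the spectral indices. A minimal sketch, in which `neg_log_like` is a hypothetical stand-in for the actual multi-frequency band-power likelihood (the prior centers and widths are the ones quoted below):

```python
import numpy as np

def neg_log_like(r, beta_d, beta_s):
    # Hypothetical stand-in for the real multi-frequency band-power likelihood.
    return 0.5 * (r / 0.005) ** 2

def neg_log_posterior(r, beta_d, beta_s):
    """-log(likelihood x priors): Gaussian priors on the spectral indices."""
    penalty = 0.5 * ((beta_d - 1.6) / 0.11) ** 2   # Planck-based dust index prior
    penalty += 0.5 * ((beta_s + 3.1) / 0.3) ** 2   # Planck-based sync index prior
    return neg_log_like(r, beta_d, beta_s) + penalty

# Coarse 1D scan in beta_d (r and beta_s held fixed): when the likelihood is
# uninformative about beta_d, the prior pulls the preferred value to 1.6.
betas_d = np.linspace(0.8, 2.4, 161)
best_bd = betas_d[np.argmin([neg_log_posterior(0.0, bd, -3.1) for bd in betas_d])]
```

When the data prefer a \(\beta_d\) outside the prior range (as for model 03), this penalty term is what drags the fit, and potentially biases the other parameters.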

**In the current analysis, we remove all the remaining parameter priors step-by-step:**

- 1. Gaussian prior on \(\beta\)'s, see below, fixed \(A_L\)
- 2. flat priors on \(\beta\)'s: \(\beta_s\in[-4.5,-1.5]\), \(\beta_d\in[0.8,2.4]\), fixed \(A_L\)
- 3. flat priors on \(\beta\)'s, Gaussian prior on \(A_L\), centered on the fiducial value, with \(\sigma\) equal to 5% of that value.
- 4. flat priors on \(\beta\)'s, flat priors on \(A_L\): \(A_L\in[0,2]\).
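The four configurations above can be encoded as a small list of settings (the dictionary layout and key names are purely illustrative, not taken from the analysis code):

```python
# Illustrative encoding of the four prior configurations listed above;
# the structure is hypothetical, not the actual analysis configuration.
flat_betas = {"beta_d": (0.8, 2.4), "beta_s": (-4.5, -1.5)}

prior_stages = [
    {"beta": ("gauss", {"beta_d": (1.6, 0.11), "beta_s": (-3.1, 0.3)}), "A_L": "fixed"},
    {"beta": ("flat", flat_betas), "A_L": "fixed"},
    # Gaussian prior on A_L centered on the fiducial value, sigma = 5% of it:
    {"beta": ("flat", flat_betas), "A_L": ("gauss", 0.05)},
    {"beta": ("flat", flat_betas), "A_L": ("flat", (0.0, 2.0))},
]
```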

The model includes the following parameters:

- \(r\): tensor-to-scalar ratio
- \(A_d\): \(BB\) power spectrum amplitude of dust, in \(\mu K_{CMB}^2\) units at \(\nu\)=353 GHz and \(\ell\)=80
- \(\beta_d\): dust emissivity spectral index; Gaussian prior on \(\beta_d\) is centered at 1.6 and has width 0.11
- \(A_s\): \(BB\) power spectrum amplitude of synchrotron, in \(\mu K_{CMB}^2\) units at \(\nu\)=23 GHz and \(\ell\)=80
- \(\beta_s\): synchrotron emissivity spectral index; Gaussian prior on \(\beta_s\) is centered at -3.1 and has width 0.3
- \(\alpha_d\): power law index for dust \(\mathcal{D}_\ell\) scaling in \(\ell\); limited to range [-2,2]
- \(\alpha_s\): power law index for synchrotron \(\mathcal{D}_\ell\) scaling in \(\ell\); limited to range [-2,2]
- \(\epsilon\): frequency-independent spatial correlation of dust and synchrotron; limited to range [-1,1]
- \(\Delta_d\): dust correlation between 217 and 353 GHz; not included when “Decorrelation model” is set to “none”
- \(A_L\): amplitude of the residual lensing \(BB\) power spectrum; a simple way to study how delensing would help the constraints on \(r\).
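The foreground part of the model spectrum built from these parameters can be sketched as follows. This is a simplified version of the BICEP/Keck-style parametrization: the fixed dust temperature \(T_d = 19.6\) K, the unit-conversion factor, and all function names are my assumptions here, and the \(r\) and \(A_L\) CMB templates as well as the decorrelation factor are omitted:

```python
import numpy as np

T_CMB = 2.725          # K
H_OVER_K = 0.0479924   # h/k_B in K/GHz

def g(nu):
    """Antenna (RJ) to thermodynamic (CMB) unit conversion, up to a constant."""
    x = H_OVER_K * nu / T_CMB
    return (np.exp(x) - 1.0) ** 2 / (x ** 2 * np.exp(x))

def dust_sed(nu, beta_d, T_d=19.6):
    """Modified blackbody in antenna temperature (T_d fixed; 19.6 K assumed)."""
    x = H_OVER_K * nu / T_d
    return nu ** (1.0 + beta_d) / (np.exp(x) - 1.0)

def sync_sed(nu, beta_s):
    """Synchrotron power law in antenna temperature."""
    return nu ** beta_s

def model_dl(ell, nu1, nu2, p):
    """Foreground BB D_ell at the nu1 x nu2 cross-spectrum (sketch only)."""
    # SEDs in CMB units, normalized at the pivot frequencies 353 and 23 GHz.
    fd = lambda nu: dust_sed(nu, p["beta_d"]) * g(nu) / (dust_sed(353.0, p["beta_d"]) * g(353.0))
    fs = lambda nu: sync_sed(nu, p["beta_s"]) * g(nu) / (sync_sed(23.0, p["beta_s"]) * g(23.0))
    dust = p["A_d"] * (ell / 80.0) ** p["alpha_d"] * fd(nu1) * fd(nu2)
    sync = p["A_s"] * (ell / 80.0) ** p["alpha_s"] * fs(nu1) * fs(nu2)
    cross = p["eps"] * np.sqrt(p["A_d"] * p["A_s"]) \
        * (ell / 80.0) ** (0.5 * (p["alpha_d"] + p["alpha_s"])) \
        * (fd(nu1) * fs(nu2) + fs(nu1) * fd(nu2))
    return dust + sync + cross

fiducial = {"A_d": 4.25, "beta_d": 1.6, "A_s": 3.8, "beta_s": -3.1,
            "alpha_d": -0.4, "alpha_s": -0.6, "eps": 0.0}
dl_353 = model_dl(80.0, 353.0, 353.0, fiducial)  # ~ A_d at the pivot
```

At the dust pivot (\(\ell = 80\), 353×353 GHz), the model reduces to essentially \(A_d\), which is the sense in which the amplitudes are defined above.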

For the decorrelation model, we assume that the cross-spectrum of dust between frequencies \(\nu_1\) and \(\nu_2\) is reduced by a factor \(\exp\{\log(\Delta_d) \times [\log^2(\nu_1 / \nu_2) / \log^2(217 / 353)] \times f(\ell)\}\). For the \(\ell\) dependence we fix the scaling to take a linear form (pivot scale \(\ell = 80\)).
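The suppression factor above can be written as a short sketch (the linear \(\ell\) scaling \(f(\ell) = \ell/80\) is the choice described in the text; the function name is mine):

```python
import numpy as np

def decorr_factor(delta_d, nu1, nu2, ell, ell_pivot=80.0):
    """Suppression of the dust cross-spectrum between nu1 and nu2 (GHz).

    delta_d is the 217x353 GHz dust correlation at ell = 80, so delta_d = 1
    means no decorrelation. The linear ell scaling f(ell) = ell / ell_pivot
    is the form adopted in this posting.
    """
    scaling = np.log(nu1 / nu2) ** 2 / np.log(217.0 / 353.0) ** 2
    return np.exp(np.log(delta_d) * scaling * (ell / ell_pivot))

# Sanity checks: delta_d = 1 gives no suppression, and delta_d itself is
# recovered at the defining frequency pair and pivot scale.
no_decorr = decorr_factor(1.0, 217.0, 353.0, 80.0)
at_pivot = decorr_factor(0.85, 217.0, 353.0, 80.0)
```

By construction the factor tends to 1 as \(\nu_1 \rightarrow \nu_2\), and the suppression grows with frequency separation and (for the linear form) with \(\ell\).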

__Note about the simulations:__ (see section 3 for more detail.)

- Models 0 and 5 provide Gaussian realizations of dust and synchrotron with power-law spectra. These are "uniform" over the sky.
- Models 1, 2, 3, and 6 have more realistic variation over the sky, but we use only a single realization of each.
- The same is true for model 4 - we use only a single realization.
- Model 7 incorporates a data driven variation of the dust amplitude over the sky (see here).
- Model 8 and 9 have a variation on the sky constrained by data, but including some 3D information (at the level of \(T_d\) or \(\beta_d\)-- model 8, or at the level of the magnetic field -- model 9).

In Figure 1, we summarize the \(r\) results, as well as the L=-log(Likelihood) values, for the different priors imposed on the \(\beta\) and \(A_L\) parameters.

Some of these models produce strong biases, especially model 8, which still has a very significant bias even when we include decorrelation in the model; see the discussion above Table 8 in section 3.
It seems that using L=-log(Likelihood), we could barely detect that this model is a bad fit. Here are temporary plots showing the likelihood distribution. Model 9 has a weaker bias, but its maximum-likelihood distribution is offset from that of model 00.

Comments on the figure:

- Opening the \(\beta\) prior alone doesn't make a significant difference for any of these models, including the ones whose \(\beta\)'s were in tension with the imposed prior (models 4, 6 and 8, see next plot).
- Releasing the prior on \(A_L\) seems to reduce the small algorithmic bias on \(r\) for models 00 and 07. We will see in the next figure that it is at the cost of deviating from the input \(A_L\), which will be clearly visible in the bias plots.
- Opening the \(A_L\) prior shifts the recovered \(r\) up slightly; this is easier to see in the \(A_L=1\) case.
- Opening the priors seems to improve the fit for model 4, as can be seen when plotting the L=-log(Likelihood), whereas models 8 and 9 remain quite stable (and off).
- Related to the previous point, we see that model 4 had a bias level that is inversely proportional to the residual lensing (\(A_L\)), and this dependence is alleviated when we open the priors on \(A_L\). On the contrary, models 8 and 9 have a bias proportional to the lensing level, which doesn't seem to disappear when we open the priors. This will be more obvious in the plots of this posting.

Comments on the figure:

- For model 4, \(\beta_d\) is in slight tension with the imposed prior (Gaussian centered at 1.6 with width 0.11). Without the \(\beta\) prior, the \(\beta\)'s are stable; this doesn't change \(r\) much, but it does affect mostly \(A_d\) and \(\alpha_d\), and to a lesser extent the synchrotron parameters.
- Still for model 4, while the \(\beta\) prior had a big effect on some foreground parameters but none on \(r\), removing the \(A_L\) prior changes \(r\) significantly without changing the foreground parameters. We reduce the bias on \(r\), but acquire a strong bias on \(A_L\) (it appears to be an additional \(A_L \simeq 0.02\), except for the \(A_L=1\) case).

- We also had tension on \(\beta_s\) in model 6 (compared to the Gaussian prior centered at -3.1 with width 0.3), but here opening the \(\beta\) prior doesn't change any parameter significantly. Similarly, opening the \(A_L\) prior doesn't change much.
- Model 8 also seems to prefer a higher value of \(\beta_d\), but opening the priors doesn't help much.
- So overall, except for model 4, all models recover a slightly low \(A_L\), which in turn corresponds to a stronger bias on \(r\) when we free \(A_L\) than when we fix it to the fiducial value. Model 4 goes the other way, with a large additional lensing signal recovered.
- Looking at the biases in units of \(\sigma\) for models 00 and 07, we see that even though opening the priors reduces most parameter biases, a really strong one remains for the \(\alpha\)'s and for the \(A_L\) parameter. (Note that for model 7, the foreground amplitudes are boosted compared to the fiducial values we used here.)

In this previous posting, we introduced a likelihood cut to reject badly flawed simulations in models 00 and 07. Here we go a step further: instead of rejecting the realizations with a bad fit, we reject the ones that are outliers at the map level. Clem just made a posting about this, here.

Similarly, here, I computed the standard deviation (and the mean) of the input maps of model 00 and used the same outlier detection algorithm I used for the likelihood outliers, based on the modified Z-score method (with a threshold of 5, see here). We find 5 more outliers with this method; the reason for the difference is unclear.
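The cut described above can be sketched with the modified Z-score. The 0.6745 factor and median/MAD construction follow the standard Iglewicz & Hoaglin definition; the example numbers are made up, not real map statistics:

```python
import numpy as np

def modified_zscore_outliers(values, threshold=5.0):
    """Flag outliers using the modified Z-score, M_i = 0.6745 (x_i - med) / MAD.

    Median and MAD replace mean and standard deviation so that the score
    is itself robust to the outliers being searched for; a threshold of 5
    is used here, as in the posting.
    """
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    m = 0.6745 * (values - med) / mad
    return np.where(np.abs(m) > threshold)[0]

# Example: per-realization Q/U map standard deviations (made-up numbers);
# only the clearly discrepant realization is flagged.
stds = np.array([1.00, 1.02, 0.99, 1.01, 1.60, 1.00, 0.98])
bad = modified_zscore_outliers(stds)
```

Running this on the Q and U standard deviations of each input map, and taking the union of the flagged indices, gives a list like the one below.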

The list is here (some of these were already detected by the likelihood cut, and some were not picked up by Clem's method):

- total Q or U std outliers (40): [56 111 130 131 135 143 152 165 206 273 275 309 315 323 347 357 367 372 387 388 410 443 452 468 477 480 497 512 577 579 596 620 632 704 743 782 907 925 935 957]
- gdust has Q or U std outliers (18): [56 111 165 273 309 315 323 372 443 452 468 477 577 596 620 907 935 957]
- gsync has Q or U std outliers (22): [130 131 135 143 152 206 275 347 357 367 387 388 410 480 497 512 579 632 704 743 782 925]

Same comments as in Clem's posting: The complete list of bad realizations with zero-based indexing (same as the filenames) is above. These need to be screened out of all .00 re-analysis results so far, and also .07, since it is built from the same maps. Model .05 is built from the alms and is not affected, as we can see here. Before any future round of sims these maps will be regenerated to properly fix the problem.

Comments on the figures:

- Note that here we compare to the previous posting's result, where we had no prior on \(\beta\) and a Gaussian prior on \(A_L\).
- The new cut on bad simulations has no significant effect on our results, though it tends to slightly reduce the biases for all parameters.

In the following tables, we report the \(r\) results for the case where all the parameters have generous flat priors.

For the other cases, see the tables in the following links:

Gaussian priors on \(\beta\)'s, fixed \(A_L\)

free \(\beta\)'s, fixed \(A_L\)

free \(\beta\)'s, Gaussian 5% prior on \(A_L\)

free \(\beta\)'s, free \(A_L\) (as below)

The mean values and standard deviations of \(r\) for simulations with simple Gaussian foregrounds are summarized in Table 00. With a 10% lensing residual, we don't quite achieve \(\sigma(r) = 5 \times 10^{-3}\) for sims with \(r = 0\).

Turning on dust decorrelation in the model doesn't cause any bias in \(r\) and the recovered \(\Delta_d\) values are centered around 1 (i.e. analysis recovers zero decorrelation). Adding this parameter does increase \(\sigma(r)\) somewhat.

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 0.104±2.687 | 0.033±0.977 | 0.013±0.480 | 0.003±0.300 |
linear | 0.062±2.717 | 0.001±1.045 | -0.013±0.573 | -0.017±0.406 |
Input \(r\) = 0.003 | | | | |
none | 3.097±2.767 | 3.019±1.140 | 3.009±0.654 | 3.013±0.475 |
linear | 3.111±2.922 | 3.022±1.311 | 3.008±0.808 | 3.013±0.609 |

The fiducial model parameters used for these sims are given in the following table.

\(r\) | \(A_d\) | \(\beta_d\) | \(A_s\) | \(\beta_s\) | \(\alpha_d\) | \(\alpha_s\) | \(\epsilon\) | \(\Delta_d\) |
---|---|---|---|---|---|---|---|---|
0/0.003 | 4.25 \(\mu K^2\) | 1.6 | 3.8 \(\mu K^2\) | -3.1 | -0.4 | -0.6 | 0 | 0 |

As has been previously noted, dust power is much higher in this model (\(A_d \sim 12.5 \mu K^2\)) than for the Gaussian foreground sims (\(A_d = 4.25 \mu K^2\)). The PySM d1 dust model does feature a spatially varying spectral index, but we don't find any detectable decorrelation in this analysis. The PySM s1 synchrotron model yields \(A_s \sim 0.5 \mu K^2\) and there is \(\sim 6\)% correlation between dust and sync.

Note here that the value of \(\sigma(r)\) doesn't change much compared to the 04.00 case despite having a higher dust level. This is probably due to the fact that we only have one realization of the foreground sky (see note in the introduction), thus no impact from cosmic variance.

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 1.176±2.745 | 0.926±1.009 | 0.729±0.505 | 0.579±0.324 |
linear | 0.990±2.826 | 0.725±1.130 | 0.501±0.637 | 0.343±0.451 |
Input \(r\) = 0.003 | | | | |
none | 4.150±2.827 | 3.879±1.189 | 3.715±0.706 | 3.613±0.526 |
linear | 3.993±2.949 | 3.717±1.338 | 3.541±0.847 | 3.438±0.657 |

The d4 version of PySM dust adds a second dust component (with different blackbody temperature and emissivity power law) based on Meisner & Finkbeiner (2014). Not sure what type of \(\beta_d\) spatial variations are included in this model, but Colin thinks it is more or less the same as for d1. The s3 synchrotron model adds curvature to the synchrotron spectral index: \(\beta_s \rightarrow \beta_s + C \ln (\nu / \nu_C)\). The a2 AME model uses a 2% polarization fraction for AME, which seems very high, but there is no attempt to model AME in this analysis.
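The s3 curvature term quoted above can be illustrated with a short sketch; the curvature amplitude below is illustrative only (not necessarily the value used in the sims), and the pivot is taken at 23 GHz:

```python
import numpy as np

def sync_sed_curved(nu, beta_s=-3.1, curv=-0.052, nu_c=23.0):
    """Synchrotron SED with spectral curvature, in antenna temperature.

    The effective index beta_s + C * ln(nu / nu_c) flattens or steepens
    the power law away from the pivot nu_c; curv = -0.052 is an assumed
    illustrative value, not taken from the s3 configuration.
    """
    return (nu / nu_c) ** (beta_s + curv * np.log(nu / nu_c))

# At the pivot the curvature has no effect; away from it the curved SED
# departs from a pure power law with the same index.
ratio_at_pivot = sync_sed_curved(23.0) / (23.0 / 23.0) ** (-3.1)
departure_93 = sync_sed_curved(93.0) - (93.0 / 23.0) ** (-3.1)
```

A parametric model with a single, fixed \(\beta_s\) (as used in this analysis) absorbs such curvature into an effective index, which is one plausible reason the recovered \(\beta_s\) shifts between the s1 and s3 sims.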

Results for this model show that \(A_d\) is even larger (\(\sim 32.5 \mu K^2\)) than for the d1 dust model. The mean value of \(\beta_d\) decreases from 1.59 (for PySM d1 model) to 1.55, which is probably a sign of the two component dust. The mean value of \(\beta_s\) decreases from -3.05 (for PySM s1 model) to -3.13, which is probably due to synchrotron spectral curvature (and perhaps polarized AME?). Dust–sync correlation is higher, at \(\sim 10\)%, which could be from polarized AME.

Note here that the value of \(\sigma(r)\) doesn't change much compared to the 04.00 case despite having a much higher dust level. This is probably due to the fact that we only have one realization of the foreground sky (see note in the introduction), thus no impact from cosmic variance.

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 0.608±2.762 | 0.468±1.032 | 0.390±0.516 | 0.336±0.323 |
linear | 0.273±2.877 | 0.181±1.174 | 0.157±0.653 | 0.141±0.448 |
Input \(r\) = 0.003 | | | | |
none | 3.608±2.725 | 3.470±1.148 | 3.401±0.690 | 3.363±0.518 |
linear | 3.307±2.783 | 3.206±1.243 | 3.176±0.788 | 3.165±0.611 |

The next PySM version uses the Hensley/Draine dust model, which has additional complexity in the dust SED (perhaps described in arXiv:1709.07897?). The level of dust power is similar to sky model 01 (PySM d1 model), but we find that the emissivity power law is even flatter than the last case, with \(\beta_d \sim 1.44\).

The recovered means seem quite erratic, and \(A_L\)-dependent.

Note here that the value of \(\sigma(r)\) doesn't change much compared to the 04.00 case despite having a higher dust level. This is probably due to the fact that we only have one realization of the foreground sky (see note in the introduction), thus no impact from cosmic variance.

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 0.872±2.726 | 0.754±1.004 | 0.639±0.515 | 0.534±0.340 |
linear | -0.019±2.847 | -0.015±1.162 | -0.001±0.671 | -0.011±0.482 |
Input \(r\) = 0.003 | | | | |
none | 4.180±2.817 | 3.871±1.181 | 3.702±0.703 | 3.593±0.527 |
linear | 3.309±2.890 | 3.104±1.327 | 3.049±0.873 | 3.025±0.696 |

The Ghosh dust model (described here) is based on GASS HI data with a model for the Galactic magnetic field. For these sims, it is combined with the PySM a2, f1, and s3 components (same as the two previous models).

The analysis of this model yields still smaller values of \(\beta_d \sim 1.3-1.4\). Dust-sync correlation is still present, but smaller (2–3%), probably because the Ghosh dust sims don't know anything about the PySM synchrotron or AME components. The fact that they are correlated at all is probably because both models are based on data at larger scales.

Dust decorrelation is small in absolute terms, but detected at high significance. Using a model without dust decorrelation leads to a large positive bias on \(r\) in the range \(4-5 \times 10^{-3}\). Dust decorrelation with linear \(\ell\) scaling produces the smallest biases, but still quite large compared to other sky models.

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 1.410±2.870 | 1.491±1.220 | 1.527±0.721 | 1.536±0.521 |
linear | -1.460±2.951 | -1.110±1.324 | -0.961±0.835 | -0.861±0.651 |
Input \(r\) = 0.003 | | | | |
none | 4.814±3.160 | 4.757±1.455 | 4.783±0.942 | 4.798±0.755 |
linear | 1.744±3.181 | 1.976±1.532 | 2.118±1.036 | 2.221±0.851 |

This model has extremely large dust decorrelation (15% between 217 and 353 GHz at \(\ell\) = 80) and it exactly follows the assumed functional form of decorrelation with linear \(\ell\) scaling, so we can still draw some useful conclusions.

When we choose decorrelation with linear \(\ell\) scaling to match the sims, then we find no bias on \(r\) and recover \(\Delta_d\) = 0.85.

An important point to note from this model is that, even for the unbiased case where the decorrelation is correctly modeled in both \(\nu\) and \(\ell\), we find \(\sigma(r) \sim 1.4\), much larger than the target sensitivity of CMB-S4. This shows that, for extreme levels of foreground decorrelation, we lose the ability to clean foregrounds from the maps because the foreground modes are significantly independent between the various CMB-S4 frequencies. Regardless of whether you are doing map-based cleaning or fitting the power spectra as we do here, the only way to improve sensitivity would be to use more observing bands that are more closely spaced. It also makes the point that our Fisher forecasts should assume some non-zero level of decorrelation. Adding decorrelation as a free parameter to a forecast that assumes \(\Delta_d = 1\) only captures part of the statistical penalty.

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 34.336±6.863 | 26.576±4.369 | 23.450±2.698 | 23.144±2.513 |
linear | 0.182±3.665 | 0.036±2.013 | 0.006±1.511 | -0.013±1.318 |
Input \(r\) = 0.003 | | | | |
none | 37.838±7.308 | 30.288±4.521 | 26.711±2.790 | 26.610±2.574 |
linear | 3.020±3.835 | 3.029±2.186 | 3.054±1.689 | 3.047±1.524 |

Our understanding is that this model uses MHD simulations to consistently model polarized dust and synchrotron in the Galactic magnetic field. This makes it quite interesting that this analysis finds negative dust-sync correlation with \(\epsilon \sim -0.36\). The dust power is similar to the Gaussian sims, and \(\beta_d\) matches the Planck value of 1.59. This analysis finds a synchrotron SED power law that is much flatter than usual, \(\beta_s \sim -2.6\), which is inconsistent with the prior at about \(1.5 \sigma\).

This model does not show any significant dust decorrelation. In general, the results for this model look nearly as good as the simple Gaussian foregrounds (sky model 00).

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 0.166±2.655 | 0.091±1.016 | 0.059±0.537 | 0.045±0.351 |
linear | 0.300±2.704 | 0.211±1.120 | 0.159±0.661 | 0.129±0.477 |
Input \(r\) = 0.003 | | | | |
none | 3.311±3.064 | 3.178±1.290 | 3.126±0.741 | 3.098±0.527 |
linear | 3.456±3.110 | 3.313±1.386 | 3.244±0.859 | 3.203±0.655 |

This model is described here. It is a modified version of model 00, where the brightness of the dust varies across the sky. It does not include any decorrelation. It was mostly developed to study the effect of mask variations, shown in this separate posting.

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | -0.005±2.653 | 0.005±0.961 | 0.003±0.466 | -0.003±0.287 |
linear | -0.093±2.725 | -0.046±1.081 | -0.028±0.598 | -0.024±0.417 |
Input \(r\) = 0.003 | | | | |
none | 3.318±2.718 | 3.103±1.169 | 3.040±0.707 | 3.016±0.525 |
linear | 3.407±2.863 | 3.156±1.353 | 3.073±0.888 | 3.040±0.698 |

Model 8 has been developed by Martinez-Solaeche, Karakci, Delabrouille (see this paper). As described in their abstract, this is a "*three-dimensional model of polarised galactic dust emission that takes into account the variation of the dust density, spectral index and temperature along the line of sight, and contains randomly generated small scale polarisation fluctuations. The model is constrained to match observed dust emission on large scales, and match on smaller scales extrapolations of observed intensity and polarisation power spectra.*" It is based on a multi-layer model where \(T_d\), \(\beta_d\) and the optical depth \(\tau\) are defined in each layer, constrained by Planck, IRAS, and some 3D dust extinction maps. A simple model of the galactic magnetic field is used to generate the large-scale polarization. For the small scales, Gaussian random I, E and B modes based on Planck observed power spectra are generated in each layer. The result is then extrapolated to different frequencies, based on random realizations of \(\beta_d\) and \(T_d\) in the different layers, which define the dust SED.

This model naturally produces dust decorrelation, due to an SED varying over the sky. It is also expected to produce a flattening at low frequency, as briefly reported in figure 19 of the paper. This might explain the large bias we observe on \(r\), which is reduced, but still present at high significance, when including a decorrelation parameter.

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 6.391±2.942 | 5.310±1.180 | 4.399±0.661 | 3.687±0.457 |
linear | 4.446±3.033 | 3.428±1.333 | 2.612±0.844 | 2.015±0.648 |
Input \(r\) = 0.003 | | | | |
none | 9.271±2.880 | 8.130±1.366 | 7.288±0.893 | 6.675±0.688 |
linear | 7.486±2.981 | 6.441±1.547 | 5.753±1.097 | 5.286±0.894 |

Model 9 has been developed by Vansyngel et al. (see this paper, and this posting). In this model, each layer has the same intensity (constrained by the Planck intensity map) but a different magnetic field realization. It produces Q and U maps by integrating along the LOS over these multiple layers of magnetic fields. This magnetic field, contrary to the previous model, is simulated down to small turbulent scales, which produces more physically motivated non-Gaussian fluctuations in the maps (down to small scales). These maps are then linearly rescaled to match the TE correlation and E-B asymmetry from Planck. (Note that in the map we study here, there is no TE correlation, see here.)

This model naturally produces non-Gaussian dust patterns, but the decorrelation is ad hoc, introduced via an extrapolation to different frequencies using a pixel-dependent modified blackbody emission law. It is much stronger than in, say, model 01, which also has such an extrapolation. I think this might be due to the fact that PySM uses \(\beta_d\) and \(T_d\) maps from \(\texttt{Commander}\), whereas Flavien uses the same recipe as the FFP sims described here, which use the GNILC maps. Flavien made this plot, using only 1% of the pixels of the maps (note that the reduction of the spread might be due to the resolution of the Commander map, which might have been smoothed).

**Note that in the current results, the bandwidth at 20 GHz used in the ML search was the usual one used for other models (5 GHz), whereas the simulations were generated with a width of 6 GHz.**

Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | | | | |
none | 14.917±2.238 | 13.291±1.191 | 12.503±0.857 | 12.165±0.728 |
linear | 8.009±2.432 | 6.542±1.394 | 5.999±1.078 | 5.816±0.969 |
Input \(r\) = 0.003 | | | | |
none | 18.046±2.138 | 16.414±1.273 | 15.601±0.969 | 15.241±0.847 |
linear | 11.119±2.493 | 9.648±1.588 | 9.080±1.244 | 8.874±1.098 |

As can be seen in figure 4 of the last posting, we still have biases even in the case of the Gaussian foreground simulations, mostly for the foreground parameters.

Just as for the CDT report, we remove this "algorithmic bias" to focus on the bias produced by the different dust simulations. We also chose to report results using the linear \(\ell\) dependence for the decorrelation model. See the caption of Table 10.

The other cases, with different biases on the parameters, are available in the links at the beginning of section 3.

Sky model (input \(r\) = 0) | No decorr: \(r\) bias \(\times 10^4\) | No decorr: \(\sigma(r) \times 10^4\) | Linear decorr: \(r\) bias \(\times 10^4\) | Linear decorr: \(\sigma(r) \times 10^4\) |
---|---|---|---|---|
04.00 | 0.2 | 6.5 | 0.7 | 10.5 |
04.01 | 7.2 | 5.0 | 5.1 | 6.4 |
04.02 | 3.8 | 5.2 | 1.7 | 6.5 |
04.03 | 6.3 | 5.2 | 0.1 | 6.7 |
04.04 | 15.1 | 7.2 | -9.5 | 8.4 |
04.05 | 234.4 | 27.0 | 0.2 | 15.1 |
04.06 | 0.5 | 5.4 | 1.7 | 6.6 |
04.07 | -0.1 | 4.7 | -0.2 | 6.0 |
04.08 | 43.9 | 6.6 | 26.2 | 8.4 |
04.09 | 25.6 | 5.7 | -4.3 | 8.2 |

Sky model (input \(r\) = 0.003) | No decorr: \(r\) bias \(\times 10^4\) | No decorr: \(\sigma(r) \times 10^4\) | Linear decorr: \(r\) bias \(\times 10^4\) | Linear decorr: \(\sigma(r) \times 10^4\) |
---|---|---|---|---|
04.00 | 0.7 | 8.2 | 0.9 | 10.5 |
04.01 | 7.1 | 7.1 | 5.3 | 8.5 |
04.02 | 3.9 | 6.9 | 1.7 | 7.9 |
04.03 | 6.9 | 7.0 | 0.4 | 8.7 |
04.04 | 17.7 | 9.4 | -8.9 | 10.4 |
04.05 | 237.0 | 27.9 | 0.5 | 16.9 |
04.06 | 1.2 | 7.4 | 2.4 | 8.6 |
04.07 | 0.3 | 7.1 | 0.6 | 8.9 |
04.08 | 42.8 | 8.9 | 27.4 | 11.0 |
04.09 | 24.6 | 8.2 | -2.3 | 10.5 |