B. Racine
This posting reports the ML search results on DC04, DC04b and DC04c, for all available models, with varying levels of priors on the \(\beta\) and \(A_L\) parameters.
More detailed results focusing on mask 04 are shown in this separate posting.
This posting summarizes results from the analysis of CMB-S4 Data Challenges 04b and 04c using a BICEP/Keck-style parametrized foreground model. It updates this posting by removing the priors on the \(\beta\) and \(A_L\) parameters.
The method is analogous to that of this posting, which reports DC4 results for models 00 to 09; there we introduced a new cut on bad simulations based on map standard-deviation outliers, which did not have a significant impact.
In the current posting, we analyze simulations with the nominal Chile (DC04b) and Pole (DC04c) masks, as described here, and compare them to the previous idealized circular \(f_{sky}\) = 3% mask from the CDT report (DC04). More information about DC4 can be found on the Experiment Definition page.
We use models 0, 1, 2, 3, 7, 8 and 9, which are described in more detail here.
In Section 1, we show the main \(r\) results and their dependence on \(A_L\).
In Section 2, we show histograms of the ML parameter distributions, including the foreground parameters.
In Section 3, we report tables of \(r\) constraints for different sky models, masks, and lensing residuals, with and without decorrelation in the ML search.
Note about the Model:
In the previous analyses, for each realization we found the set of model parameters that maximizes the likelihood multiplied by priors on the dust and synchrotron spectral-index parameters (\(\beta_d\) and \(\beta_s\)).
These priors are based on Planck data, so they are quite weak compared to CMB-S4 sensitivity. In principle, however, foreground models may violate them, potentially leading to biases (e.g. DC4 model 03, where the preferred value of \(\beta_d\) falls outside the prior range; see this posting, Figure 2).
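To make the role of these priors concrete, here is a minimal sketch (our illustration, not the pipeline code) of how a Gaussian prior enters an ML search: maximizing likelihood times prior is equivalent to minimizing \(-2\ln L\) plus a quadratic penalty, and for a Gaussian likelihood in a spectral index the penalized maximum is the inverse-variance-weighted mean of the likelihood peak and the prior center. All numbers below are placeholders, not the actual prior values used here.

```python
# Sketch: effect of a Gaussian prior on the ML value of a spectral index.
# For a Gaussian likelihood (peak b_like, width s_like) and a Gaussian
# prior (center b_prior, width s_prior), minimizing
#   -2 ln L + ((beta - b_prior) / s_prior)**2
# gives the inverse-variance-weighted mean:

def penalized_ml(b_like, s_like, b_prior, s_prior):
    """Penalized ML value of a spectral index under a Gaussian prior.

    b_like, s_like   : likelihood peak and width from the data
    b_prior, s_prior : prior center and width (e.g. Planck-based)
    """
    w_like = 1.0 / s_like**2
    w_prior = 1.0 / s_prior**2
    return (w_like * b_like + w_prior * b_prior) / (w_like + w_prior)

# If the data constrain beta_d much better than the prior (the CMB-S4
# regime described above), the prior barely moves the fit:
print(penalized_ml(1.7, 0.01, 1.6, 0.11))  # ~1.699: prior pull is tiny
```

This is why the priors are "weak" relative to CMB-S4 sensitivity: the pull toward the prior center scales with the ratio of likelihood to prior variance. The bias case arises when the foreground model drives the likelihood peak outside the prior range, so the penalty no longer vanishes at the preferred value.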
In the current analysis, we remove the remaining parameter priors step by step:
The model includes the following parameters:
For the decorrelation model, we assume that the dust cross-spectrum between frequencies \(\nu_1\) and \(\nu_2\) is suppressed by the factor \(\exp\{\log(\Delta_d) \times [\log^2(\nu_1 / \nu_2) / \log^2(217 / 353)] \times f(\ell)\}\). For the \(\ell\) dependence we fix the scaling to a linear form (pivot scale \(\ell = 80\)).
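The suppression factor above can be written out directly; this is a minimal sketch assuming the linear \(\ell\) scaling takes the form \(f(\ell) = \ell / 80\) (the function and variable names are ours, not from the analysis code):

```python
import numpy as np

# Dust decorrelation suppression factor, as described above:
#   exp{ log(Delta_d) * [log^2(nu1/nu2) / log^2(217/353)] * f(ell) }
# with a linear ell scaling f(ell) = ell / ell_pivot (assumed form).

def dust_decorrelation(ell, nu1, nu2, delta_d, ell_pivot=80.0):
    """Suppression factor applied to the nu1 x nu2 dust cross-spectrum."""
    freq_scaling = np.log(nu1 / nu2) ** 2 / np.log(217.0 / 353.0) ** 2
    f_ell = ell / ell_pivot  # assumed linear ell dependence
    return np.exp(np.log(delta_d) * freq_scaling * f_ell)

# Example: Delta_d = 0.97 is, by construction, the 217x353 correlation
# at the pivot scale ell = 80; other frequency pairs and multipoles
# decorrelate according to the scalings above.
factor = dust_decorrelation(np.arange(30, 330), 95.0, 150.0, 0.97)
```

Since \(\Delta_d < 1\) makes \(\log(\Delta_d)\) negative, the factor is below unity and decreases with both frequency separation and (for the linear scaling) multipole.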
Figure 1 shows how the constraints on \(r\) evolve across the different observation masks and sky models. Figure 2 shows the evolution of \(\sigma(r)\) as a function of the residual lensing amplitude \(A_L\) for the three masks.
Comments on Figure 1:
In Figure 2, we show the evolution of \(\sigma(r)\) as a function of \(A_L\). We also perform a linear fit, reported in the legend and shown as a faded dashed line, which can be used to estimate \(\sigma(r)\) at other values of \(A_L\) under the linearity assumption.
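Such a fit can be sketched as follows, using as example points the model 00 / circular-mask values from Table 00 (no decorrelation, input \(r = 0\), \(\sigma(r)\) in units of \(10^{-3}\)); whether the posting's fit uses exactly these points is our assumption:

```python
import numpy as np

# Linear fit of sigma(r) versus A_L, in the spirit of Figure 2.
# Example points taken from Table 00 (none / input r = 0 row):
A_L = np.array([1.0, 0.3, 0.1, 0.03])
sigma_r = np.array([2.687, 0.977, 0.480, 0.300])  # units of 1e-3

slope, intercept = np.polyfit(A_L, sigma_r, 1)

# Estimate sigma(r) at an intermediate A_L under the linear assumption:
sigma_r_at_half = slope * 0.5 + intercept
```

The nonzero intercept reflects the noise and foreground contributions to \(\sigma(r)\) that remain when the lensing residual is extrapolated to zero, under the (approximate) linear model.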
Comments on Figure 2:
In Figure 3, we report the full distributions of the ML parameters as histograms.
Comments on Figure 3 (more comments on the DC04 case can be found in this posting):
In the following tables, we report the \(r\) results (quoted in units of \(10^{-3}\)) for the case where all the parameters have generous flat priors.
For the other cases, see the tables in the following links:
Gaussian priors on \(\beta\)'s, fixed \(A_L\)
free \(\beta\)'s, fixed \(A_L\)
free \(\beta\)'s, Gaussian 5% prior on \(A_L\)
free \(\beta\)'s, free \(A_L\) (as below)
The mean values and standard deviations of \(r\) for simulations with simple Gaussian foregrounds are summarized in Table 00, Table 00b and Table 00c, for the circular 3% mask, the nominal Chile mask and the nominal Pole mask, respectively.
These results are plotted in Figure 1.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.104±2.687 | 0.033±0.977 | 0.013±0.480 | 0.003±0.300 |
linear | 0.062±2.717 | 0.001±1.045 | -0.013±0.573 | -0.017±0.406 |
Input \(r\) = 0.003 | ||||
none | 3.097±2.767 | 3.019±1.140 | 3.009±0.654 | 3.013±0.475 |
linear | 3.111±2.922 | 3.022±1.311 | 3.008±0.808 | 3.013±0.609 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.527±1.830 | 0.293±0.913 | 0.193±0.640 | 0.157±0.538 |
linear | 0.470±2.020 | 0.245±1.094 | 0.154±0.824 | 0.124±0.733 |
Input \(r\) = 0.003 | ||||
none | 3.647±1.866 | 3.382±1.034 | 3.265±0.739 | 3.222±0.614 |
linear | 3.551±2.138 | 3.278±1.254 | 3.173±0.919 | 3.140±0.774 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.023±3.645 | 0.007±1.317 | 0.022±0.609 | 0.022±0.331 |
linear | -0.062±3.757 | -0.044±1.440 | -0.006±0.728 | 0.007±0.441 |
Input \(r\) = 0.003 | ||||
none | 3.502±3.589 | 3.116±1.442 | 3.038±0.771 | 3.013±0.530 |
linear | 3.470±3.726 | 3.101±1.645 | 3.036±0.979 | 3.021±0.705 |
For more details about this model, see this previous posting.
These results are plotted in Figure 1.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 1.176±2.745 | 0.926±1.009 | 0.729±0.505 | 0.579±0.324 |
linear | 0.990±2.826 | 0.725±1.130 | 0.501±0.637 | 0.343±0.451 |
Input \(r\) = 0.003 | ||||
none | 4.150±2.827 | 3.879±1.189 | 3.715±0.706 | 3.613±0.526 |
linear | 3.993±2.949 | 3.717±1.338 | 3.541±0.847 | 3.438±0.657 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 2.450±2.041 | 1.970±1.041 | 1.783±0.745 | 1.717±0.634 |
linear | 1.824±2.176 | 1.303±1.206 | 1.118±0.918 | 1.059±0.812 |
Input \(r\) = 0.003 | ||||
none | 5.637±2.054 | 5.084±1.162 | 4.857±0.847 | 4.768±0.716 |
linear | 5.004±2.344 | 4.414±1.421 | 4.202±1.070 | 4.130±0.921 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 1.808±3.740 | 1.334±1.356 | 0.860±0.664 | 0.564±0.404 |
linear | 1.501±3.824 | 1.056±1.443 | 0.576±0.747 | 0.276±0.474 |
Input \(r\) = 0.003 | ||||
none | 5.072±3.678 | 4.402±1.526 | 3.957±0.854 | 3.695±0.600 |
linear | 4.786±3.727 | 4.148±1.647 | 3.714±0.999 | 3.459±0.739 |
For more details about this model, see this previous posting.
These results are plotted in Figure 1.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.608±2.762 | 0.468±1.032 | 0.390±0.516 | 0.336±0.323 |
linear | 0.273±2.877 | 0.181±1.174 | 0.157±0.653 | 0.141±0.448 |
Input \(r\) = 0.003 | ||||
none | 3.608±2.725 | 3.470±1.148 | 3.401±0.690 | 3.363±0.518 |
linear | 3.307±2.783 | 3.206±1.243 | 3.176±0.788 | 3.165±0.611 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.583±2.090 | 0.565±1.018 | 0.529±0.704 | 0.516±0.590 |
linear | -0.338±2.215 | -0.112±1.180 | -0.009±0.874 | 0.041±0.768 |
Input \(r\) = 0.003 | ||||
none | 3.754±2.127 | 3.666±1.125 | 3.598±0.794 | 3.568±0.665 |
linear | 2.781±2.411 | 2.920±1.384 | 2.999±1.019 | 3.042±0.868 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.789±3.780 | 0.504±1.358 | 0.327±0.629 | 0.225±0.350 |
linear | 0.512±3.863 | 0.257±1.442 | 0.131±0.708 | 0.066±0.421 |
Input \(r\) = 0.003 | ||||
none | 4.025±3.728 | 3.547±1.505 | 3.352±0.814 | 3.259±0.566 |
linear | 3.749±3.766 | 3.288±1.607 | 3.132±0.941 | 3.073±0.683 |
For more details about this model, see this previous posting.
These results are plotted in Figure 1.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.872±2.726 | 0.754±1.004 | 0.639±0.515 | 0.534±0.340 |
linear | -0.019±2.847 | -0.015±1.162 | -0.001±0.671 | -0.011±0.482 |
Input \(r\) = 0.003 | ||||
none | 4.180±2.817 | 3.871±1.181 | 3.702±0.703 | 3.593±0.527 |
linear | 3.309±2.890 | 3.104±1.327 | 3.049±0.873 | 3.025±0.696 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 1.311±2.015 | 1.206±1.018 | 1.110±0.728 | 1.067±0.627 |
linear | -1.363±2.191 | -0.837±1.237 | -0.594±0.964 | -0.480±0.874 |
Input \(r\) = 0.003 | ||||
none | 4.480±2.098 | 4.306±1.189 | 4.176±0.871 | 4.113±0.739 |
linear | 1.744±2.459 | 2.171±1.508 | 2.387±1.155 | 2.491±1.004 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.960±3.867 | 0.716±1.410 | 0.516±0.670 | 0.365±0.395 |
linear | 0.204±4.002 | 0.044±1.532 | -0.035±0.776 | -0.092±0.487 |
Input \(r\) = 0.003 | ||||
none | 4.264±3.709 | 3.784±1.521 | 3.560±0.846 | 3.423±0.595 |
linear | 3.523±3.808 | 3.100±1.685 | 2.974±1.024 | 2.918±0.760 |
The mean values and standard deviations of \(r\) for simulations with amplitude-modulated Gaussian foregrounds are summarized in Table 07, Table 07b and Table 07c, for the circular 3% mask, the nominal Chile mask and the nominal Pole mask, respectively.
These results are plotted in Figure 1.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | -0.005±2.653 | 0.005±0.961 | 0.003±0.466 | -0.003±0.287 |
linear | -0.093±2.725 | -0.046±1.081 | -0.028±0.598 | -0.024±0.417 |
Input \(r\) = 0.003 | ||||
none | 3.318±2.718 | 3.103±1.169 | 3.040±0.707 | 3.016±0.525 |
linear | 3.407±2.863 | 3.156±1.353 | 3.073±0.888 | 3.040±0.698 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.497±1.842 | 0.289±0.927 | 0.200±0.654 | 0.167±0.551 |
linear | 0.351±2.024 | 0.177±1.117 | 0.111±0.856 | 0.090±0.769 |
Input \(r\) = 0.003 | ||||
none | 3.658±1.895 | 3.387±1.051 | 3.269±0.753 | 3.223±0.629 |
linear | 3.447±2.249 | 3.213±1.364 | 3.131±1.024 | 3.109±0.874 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 0.021±3.621 | 0.006±1.309 | 0.022±0.607 | 0.022±0.332 |
linear | -0.078±3.761 | -0.053±1.457 | -0.009±0.742 | 0.006±0.451 |
Input \(r\) = 0.003 | ||||
none | 3.505±3.598 | 3.125±1.451 | 3.044±0.780 | 3.015±0.535 |
linear | 3.462±3.737 | 3.103±1.660 | 3.042±0.999 | 3.029±0.727 |
For more details about this model, see this previous posting.
These results are plotted in Figure 1.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 6.391±2.942 | 5.310±1.180 | 4.399±0.661 | 3.687±0.457 |
linear | 4.446±3.033 | 3.428±1.333 | 2.612±0.844 | 2.015±0.648 |
Input \(r\) = 0.003 | ||||
none | 9.271±2.880 | 8.130±1.366 | 7.288±0.893 | 6.675±0.688 |
linear | 7.486±2.981 | 6.441±1.547 | 5.753±1.097 | 5.286±0.894 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 14.917±2.238 | 13.291±1.191 | 12.503±0.857 | 12.165±0.728 |
linear | 8.009±2.432 | 6.542±1.394 | 5.999±1.078 | 5.816±0.969 |
Input \(r\) = 0.003 | ||||
none | 18.046±2.138 | 16.414±1.273 | 15.601±0.969 | 15.241±0.847 |
linear | 11.119±2.493 | 9.648±1.588 | 9.080±1.244 | 8.874±1.098 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 9.937±4.218 | 7.602±1.691 | 5.479±0.886 | 3.941±0.549 |
linear | 6.008±4.434 | 4.136±1.952 | 2.320±1.119 | 1.181±0.728 |
Input \(r\) = 0.003 | ||||
none | 12.752±4.114 | 10.388±1.807 | 8.421±1.030 | 7.036±0.717 |
linear | 8.920±4.193 | 7.102±2.053 | 5.551±1.309 | 4.532±0.957 |
For more details about this model, see this previous posting.
These results are plotted in Figure 1.
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 3.831±2.732 | 3.192±1.055 | 2.568±0.573 | 2.051±0.390 |
linear | 0.698±3.010 | 0.068±1.342 | -0.439±0.824 | -0.715±0.608 |
Input \(r\) = 0.003 | ||||
none | 6.760±2.890 | 6.025±1.319 | 5.464±0.821 | 5.044±0.613 |
linear | 3.997±3.139 | 3.237±1.567 | 2.776±1.050 | 2.524±0.840 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 19.884±2.431 | 17.656±1.449 | 16.630±1.114 | 16.189±0.968 |
linear | 10.066±2.542 | 8.206±1.568 | 7.465±1.258 | 7.174±1.145 |
Input \(r\) = 0.003 | ||||
none | 22.840±2.217 | 20.737±1.381 | 19.777±1.102 | 19.372±0.993 |
linear | 12.916±2.703 | 11.174±1.830 | 10.499±1.490 | 10.243±1.335 |
Decorrelation model | \(A_L\) = 1 | \(A_L\) = 0.3 | \(A_L\) = 0.1 | \(A_L\) = 0.03 |
---|---|---|---|---|
Input \(r\) = 0 | ||||
none | 6.648±4.070 | 4.759±1.572 | 3.111±0.799 | 1.950±0.480 |
linear | 4.070±4.392 | 2.103±1.877 | 0.482±1.007 | -0.357±0.599 |
Input \(r\) = 0.003 | ||||
none | 9.608±4.042 | 7.636±1.718 | 6.168±0.943 | 5.189±0.639 |
linear | 7.136±4.277 | 5.158±2.007 | 3.763±1.216 | 2.976±0.861 |