In this posting, as in this previous analysis (which used only 70 sims), I apply our spectral-based analysis framework to derive parameter constraints from the DC 01.xx sim-sets. I compare the results to the CMB-S4 Science Book constraints (which were based on scaling the BICEP/Keck covariance matrix), and to a new Fisher calculation that uses a BPCM derived from the DC 01.xx sims.

Notes on the method:

- We use the DC 01.xx sim-sets (cmb + dust + sync + noise)
- Use the BICEP/Keck multi-component spectral-based likelihood framework.
- Use a pure-B estimator (Grain et al., Phys. Rev. D 79, 123515) to calculate all the auto- and cross-spectra.
- Use a global ML peak search (of the same dimensionality as the Science Book Fisher forecasts, minus a dust decorrelation parameter) to obtain histograms of recovered ML values. The standard deviations of these histograms measure the constraining power of our dataset; their means measure bias.
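The last step above can be sketched as follows: given the recovered ML values of \(r\) over the sim ensemble, the histogram mean measures the bias and its standard deviation measures \(\sigma(r)\). This is an illustrative Python sketch with a toy Gaussian stand-in for the real ML values, not the actual pipeline code:

```python
import numpy as np

# Toy stand-in for the recovered maximum-likelihood r values, one per
# simulation realization (the real values come from the global ML search).
rng = np.random.default_rng(0)
r_ml = rng.normal(loc=0.0, scale=2.6e-3, size=1000)

bias = r_ml.mean()           # mean of the histogram -> bias on r
sigma_r = r_ml.std(ddof=1)   # width of the histogram -> sigma(r)
```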

Differences from the previous analysis:

- In the previous analysis, the bandpower window functions (bpwf) included response at \(l<30\), but our data challenge maps were explicitly constructed with no signal or noise power below \(l=30\). Naively using those window functions would therefore have made the model expectation values predict significantly more power in the first bin than we saw in the sims. To fix this, Colin suggested zeroing the bandpower window functions below \(l=30\), renormalizing them, and folding the normalization factor into the suppression factors; the same normalization factor was then also applied to all of the sim bandpowers.
- Clem did not like this solution, arguing that the proper place to account for this truncation is in the theory model spectra. The current analysis does just that, zeroing the theory spectra at \(l<30\). The functions changed are `like_getexpvals.m` and `model_rms.m`.
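The effect of the adopted fix can be illustrated as follows: zeroing the theory spectrum below \(l=30\) before applying the bandpower window functions removes the spurious low-\(l\) contribution to the first-bin expectation value. This is a toy Python sketch (the actual changes live in the MATLAB functions named above; the function and variable names here are hypothetical):

```python
import numpy as np

def truncate_theory_spectrum(cl, lmin=30):
    """Zero a theory C_l spectrum below lmin (cl[0] corresponds to l=0),
    mirroring the fix applied to the theory model spectra."""
    cl = np.array(cl, dtype=float)
    cl[:lmin] = 0.0
    return cl

def expectation_values(bpwf, cl):
    """Bandpower expectation values: sum over l of window times theory."""
    return bpwf @ cl

# Toy example: flat spectrum, one top-hat window covering l = 20..49.
lmax = 100
cl = np.ones(lmax + 1)
bpwf = np.zeros((1, lmax + 1))
bpwf[0, 20:50] = 1.0 / 30.0

expv_full = expectation_values(bpwf, cl)                             # includes l < 30
expv_trunc = expectation_values(bpwf, truncate_theory_spectrum(cl))  # l < 30 zeroed
```

With the truncation, the first-bin expectation value drops because the part of the window below \(l=30\) no longer picks up theory power, matching the sims' construction.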

**Note on the mean of the \(r_{ML}\) distributions:**
Compared to the previous analysis (over just 70 sims), the biases appear to have grown for \(A_L \geq 0.1\). This is somewhat puzzling, since the two analyses should differ only in technique. Comparing the recovered ML peaks for \(r\) over the first 70 realizations (for the \(A_L=1.0\) case) between the current and previous analyses yields the following histogram.
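That realization-by-realization comparison can be sketched as follows; the arrays standing in for the two sets of recovered ML peaks are hypothetical stand-ins, not the actual sim results:

```python
import numpy as np

# Hypothetical stand-ins for the recovered ML r values of the first 70
# realizations from the previous and current analyses.
rng = np.random.default_rng(1)
r_ml_prev = rng.normal(0.0, 2.6e-3, size=70)
r_ml_curr = r_ml_prev + rng.normal(0.0, 5e-4, size=70)

# Per-realization shifts between the two analyses; the mean quantifies any
# systematic offset introduced by the change in technique, and the RMS the
# realization-to-realization scatter of that shift.
delta = r_ml_curr - r_ml_prev
mean_shift = delta.mean()
rms_shift = delta.std(ddof=1)
```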

**Note on the width of the \(r_{ML}\) distributions:**
The results of the pager above are summarized in the table below under the DC 01.00
column. As mentioned, I compare the results to the CMB-S4 Science Book constraints,
which were based on scaling the BICEP/Keck covariance matrix (the "Fisher (BK scaled)"
column), and to a new Fisher calculation that uses a BPCM derived from the
DC 01.00 sims ("Fisher (DC 01.00)"). I also include the values
obtained from the previous analysis (over just 70 sims).
We expect the DC 01.00 constraints to be more
optimistic than the Science Book results due to the idealized nature of the
simulations, but we expect good agreement between the DC 01.00 constraints and
Fisher (DC 01.00) up to sample variance (we have 1000 sims, thus
\(\sqrt{2/1000}\,\sigma \approx 0.045\,\sigma\)).
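The sample-variance figure quoted above is just the standard scatter of a standard deviation estimated from a finite ensemble, which a quick check confirms:

```python
import numpy as np

# Fractional scatter expected on a sigma(r) estimated from n_sims
# realizations: sigma(sigma_r) / sigma_r ~ sqrt(2 / n_sims).
n_sims = 1000
frac_scatter = np.sqrt(2.0 / n_sims)
print(f"{frac_scatter:.3f}")  # prints 0.045
```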

| \(f_{sky}=0.03\) | Fisher (BK scaled) | DC 01.00 (#70) | Fisher (DC 01.00, #70) | DC 01.00 | Fisher (DC 01.00) | DC 01.01 |
|---|---|---|---|---|---|---|
| \(\sigma_r(r=0, A_L=1.00), \times 10^{-3}\) | 3.82 | 2.61 | 2.63 | 2.75 | 2.41 | 2.73 |
| \(\sigma_r(r=0, A_L=0.30), \times 10^{-3}\) | --- | 1.13 | 1.03 | 1.12 | 1.01 | 1.12 |
| \(\sigma_r(r=0, A_L=0.10), \times 10^{-3}\) | 0.91 | 0.67 | 0.56 | 0.62 | 0.59 | 0.62 |
| \(\sigma_r(r=0, A_L=0.03), \times 10^{-3}\) | --- | 0.46 | 0.38 | 0.44 | 0.44 | 0.47 |