There have been many postings describing the optimization that went into the Science Book and, subsequently, into the Data Challenges. For a comprehensive list of such write-ups, take a look here. In this posting, I first describe in more detail how the HighRes side of the optimization is handled and derive delensing factors for some specific cases; I then look at the HighRes setups for DC2.0 and DC3.0, as well as a particular HighRes setup from Colin Hill (for SO), and compute delensing factors for those cases as well.
We have now gone through a few iterations of the optimization. For each case, I produce a posting that describes the optimization and one that offers analytic fitting parameters for constructing \(N_{l}\) curves fitted to the real \(N_l\)'s (scaled for the appropriate effort levels) and lists the fraction of effort spent on solving the foreground-separation problem ("degree-scale effort") versus reducing the lensing contribution ("arcmin-scale effort").
For the arcminute-scale effort, as described here, the effective level of residual lensing is calculated assuming an experiment with \(1\) arcmin resolution and mapping speed equivalent to the 145 LowRes channel. This procedure takes into account achieved on-sky detector performance (yield, mapping speed, effective observing time, etc.), but unlike the LowRes part it does not account for mode filtering, which affects sky coverage and S/N per mode, or for noise contributions arising from the non-uniformity of our surveys. Neglecting these effects results in a uniform survey weight and no reduction in the effective number of degrees of freedom due to filtering -- equivalently, it results in deeper maps. One can argue whether this is the right thing to do when using LowRes information to inform a HighRes experiment. To quantify this effect, in addition to what was done for the Science Book, below I also consider taking filtering and survey non-uniformity into account for the HighRes experiment -- this case gets the moniker "HighRes (+terms)."
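The effort-to-depth rescaling implied above can be sketched as follows. This is a minimal illustration of the standard uniform-coverage scaling (map variance \(\propto\) 1/effort), not the actual machinery used in the optimization; the anchor values in the example are placeholders.

```python
import math

def scale_map_depth(det_yrs, ref_det_yrs, ref_depth_uk_arcmin):
    """Scale a white-noise map depth with survey effort, assuming uniform
    coverage: map variance ~ 1/effort, so depth ~ 1/sqrt(effort).
    The reference values are whatever anchor case is being rescaled."""
    return ref_depth_uk_arcmin * math.sqrt(ref_det_yrs / det_yrs)

# e.g. rescaling a hypothetical 1.0 uK-arcmin, 290k det-yr anchor to 350k det-yrs
depth_350k = scale_map_depth(350e3, ref_det_yrs=290e3, ref_depth_uk_arcmin=1.0)
```

Accounting for filtering and survey non-uniformity (the "+terms" case) effectively weakens this scaling, since some modes and some sky area carry less weight than the uniform assumption implies.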
Using an iterative estimator (code developed by Kimmy Wu) following the formalism of Smith et al. (arXiv:1010.0048), the ratio \(C_{\ell, res}/C_{\ell, lens}\) is calculated; the results are presented in this plot. PR stands for the experiment used for phi/lensing reconstruction, and EM for the experiment (or combination of experiments) used to obtain the E-modes. The combined map depth of EM is assumed to be \(1\,\mu K\)-arcmin, though we have seen before (from Kimmy) that the ratio \(C_{\ell, res}/C_{\ell, lens}\) depends only weakly on this noise level, as seen here. The \(l_{min}\) in the plot refers to the E/B inputs to \(\Phi\). For all optimizations I assume \(l_{min,PR}=200\) and \(l_{max}=4000\), and all cases assume complete E-mode coverage (i.e. \(l_{min,EM}=30\)) for the formation of the B-mode template. In practice, this is a scenario in which the arcmin-scale experiment may be noisy at low \(l\), but we can nonetheless measure all of the E-modes through this range to the required precision with either the arcmin-scale or degree-scale experiment. This complete E-mode map is then used to form a B-template by lensing these E-modes with the reconstructed \(\Phi\).
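For reference, the delensing factors \(A_L\) quoted in the tables below summarize this ratio as a residual lensing fraction. One common convention (the exact \(\ell\) range averaged over may differ from what is used here) is

```latex
C_\ell^{BB,\mathrm{res}} \;=\; \frac{C_{\ell,\mathrm{res}}}{C_{\ell,\mathrm{lens}}}\, C_\ell^{BB,\mathrm{lens}},
\qquad
A_L \;\equiv\; \Big\langle \frac{C_{\ell,\mathrm{res}}}{C_{\ell,\mathrm{lens}}} \Big\rangle_\ell
```

with the average taken over the degree scales relevant for constraining \(r\), where the ratio is approximately flat.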
Below is a worked example for the case presented here.
Left: \(N_{l}\)'s for the Degree-Scale effort that went into Data Challenges 2.0 and 3.0, amounting to ~575k actual det-yrs on \(f_{sky}=0.03\), as recorded here.

Middle: \(N_{l}\)'s for the Arcminute-Scale effort that went into the optimization for DC2.0 and DC3.0, amounting to ~290k actual det-yrs (at 145 GHz) on \(f_{sky}=0.03\), as recorded here. For the data challenges, this effort was ultimately split into three bands -- see the next section for details. The high-res details are recorded above. In addition, I also add a 350k det-yr case for comparison with the plot in the last section.

Right: \(N_{l}\)'s for the Arcminute-Scale effort that went into the optimization for DC2.0 and DC3.0, taking into account filtering and survey non-uniformity (described above). These curves are effectively directly scaled-down versions of the yellow or black curves in the left-most panel (after accounting for beam factors and the presence of 1/f noise in the LowRes panel).
| | HighRes | HighRes (+terms) |
|---|---|---|
| 290k det-yrs | \(A_L=0.09\) | \(A_L=0.13\) |
| 350k det-yrs | \(A_L=0.08\) | \(A_L=0.12\) |
For DC2.0 and DC3.0 the HighRes effort was split into three bands. The split was motivated by the LowRes optimization -- i.e., the \(f_{sky}=0.03\) entry in the table in this posting. This results in putting 60.0% of the effort at 95, 21.6% at 155, and 18.4% at 220 (after normalizing to 100%). That means 310,000 det-yrs (150-equiv) are divided into (186,000; 66,960; 57,040) det-yrs 150-equiv, giving (1.40, 2.19, 5.61) \(\mu K\)-arcmin at (95, 155, 220) for the HighRes part.
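The band-split arithmetic above can be checked directly (band labels as in the text; converting the per-band det-yrs to \(\mu K\)-arcmin additionally requires the per-band mapping speeds, which are not reproduced here):

```python
# Reproduce the band split: 310k det-yrs (150-equiv) divided
# 60.0% / 21.6% / 18.4% among the three HighRes bands.
total_det_yrs = 310_000
fractions = {95: 0.600, 155: 0.216, 220: 0.184}

split = {band: total_det_yrs * frac for band, frac in fractions.items()}
# -> roughly {95: 186000, 155: 66960, 220: 57040} det-yrs (150-equiv)
```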
In addition to the white-noise levels, as shown in the formula in this posting, we have \(l_{knee}=200\) with slope \(\gamma=-2.0\) at 95 and \(\gamma=-3.0\) at 155 and 220. For this data challenge the beams were chosen by Clem to be {6.1, 4.0, 2.7} arcmin for {95, 155, 220} respectively. I believe this is for a roughly 3 meter aperture scaled from BICEP3. Below, I also consider a 5 meter dish yielding resolutions of {2.8, 1.8, 1.2} arcmin (equivalent to the SO resolutions at these frequencies). These details are recorded in the S4 Experiment Definitions section.
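Putting the pieces together, a noise curve with these ingredients can be sketched as below. This assumes the common parameterization \(N_l = \Delta^2 \left[1 + (l/l_{knee})^{\gamma}\right] e^{l(l+1)\sigma_b^2}\) for a beam-deconvolved map; the exact convention in the posting linked above may differ.

```python
import numpy as np

def nl(ell, depth_uk_arcmin, fwhm_arcmin, l_knee=200, gamma=-2.0):
    """Beam-deconvolved noise spectrum with a 1/f component:
       N_l = Delta^2 [1 + (l/l_knee)^gamma] * exp(l(l+1) sigma_b^2),
    where Delta is the white-noise depth in uK-rad and
    sigma_b = FWHM / sqrt(8 ln 2). Negative gamma raises the low-l noise."""
    arcmin = np.pi / 180.0 / 60.0                 # arcmin -> radians
    delta = depth_uk_arcmin * arcmin              # uK-rad
    sigma_b = fwhm_arcmin * arcmin / np.sqrt(8.0 * np.log(2.0))
    one_over_f = (ell / l_knee) ** gamma
    beam = np.exp(ell * (ell + 1) * sigma_b ** 2)
    return delta ** 2 * (1.0 + one_over_f) * beam

# e.g. the 95 GHz small-dish case from the text
ell = np.arange(30, 4001)
nl_95 = nl(ell, depth_uk_arcmin=1.40, fwhm_arcmin=6.1, gamma=-2.0)
```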
| | HighRes, Small Dish | HighRes, Large Dish |
|---|---|---|
| 290k det-yrs | \(A_L=0.26\) | \(A_L=0.20\) |
Another set of HighRes inputs comes from the SO working group, as presented here. That posting assumes an optimized 5 meter dish given a particular choice of foreground model, frequency selection, and detector performance. The closest case presented has 70k det-yrs \(\times\) 5 yrs = 350k det-yrs on \(f_{sky}=0.03\). The individual \(N_l\) curves are formed from the white-noise levels and beam widths given there; the blue solid curve is the combined raw noise level, and the black solid curve is the resulting foreground-cleaned \(N_l\).
| | \(N_{l,\mathrm{raw}}\) | \(N_{l,\mathrm{fg\ cleaned}}\) |
|---|---|---|
| 350k det-yrs | \(A_L=0.17\) | \(A_L=0.23\) |
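The combined raw-noise curve mentioned above is, in the simplest treatment, the inverse-variance combination of the per-band \(N_l\)'s. A minimal sketch (this ignores foregrounds and band-band correlations, which is why the foreground-cleaned \(N_l\) ends up noisier than the raw combination; the example curves are placeholders, not the SO values):

```python
import numpy as np

def combine_nl(nl_per_band):
    """Inverse-variance combination of per-band noise spectra:
       1 / N_comb = sum_i 1 / N_i,
    evaluated independently at each multipole."""
    nl_arr = np.asarray(nl_per_band, dtype=float)
    return 1.0 / np.sum(1.0 / nl_arr, axis=0)

# two hypothetical bands with equal noise combine to half the noise
nl_comb = combine_nl([[4.0, 2.0], [4.0, 2.0]])
```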