There have been many postings describing the optimization that went into the Science Book and subsequently into the Data Challenges. For a comprehensive list of such write-ups, please take a look here. In this posting, I first describe in more detail how the HighRes side of the optimization is handled and derive delensing factors for some specific cases; I then take a look at the HighRes setups for DC2.0 and DC3.0, as well as a particular HighRes setup coming from Colin Hill (for SO), and compute delensing factors for these cases as well.

We have now gone through a few iterations of the optimization. For each case, I produce one posting that describes the optimization itself, and another that provides analytic fitting parameters for constructing \(N_{l}\) curves fitted to the real \(N_l\)'s (scaled for the appropriate effort levels) and lists the fraction of effort spent on solving the foreground-separation problem ("degree-scale effort") and on reducing the lensing contribution ("arcmin-scale effort").
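As a concrete illustration of how such fitted \(N_l\) curves are typically built and rescaled, here is a minimal sketch using the standard Knox white-noise-plus-beam form. The function names, the \(1\,\mu K\)-arcmin depth, and the \(1'\) beam are my own illustrative choices, not the actual fitting parameters from those postings (which include additional terms):

```python
import numpy as np

def knox_nl(ell, depth_uk_arcmin, fwhm_arcmin):
    """Knox-form noise curve: N_l = Delta^2 exp[l(l+1) sigma^2 / (8 ln 2)],
    with Delta the map depth in uK-radians and sigma the beam FWHM in radians."""
    delta = depth_uk_arcmin * np.pi / (180.0 * 60.0)   # uK-arcmin -> uK-rad
    sigma = fwhm_arcmin * np.pi / (180.0 * 60.0)       # beam FWHM in radians
    return delta**2 * np.exp(ell * (ell + 1.0) * sigma**2 / (8.0 * np.log(2.0)))

def scale_for_effort(nl, effort_fraction):
    """Map depth improves as 1/sqrt(effort), so N_l scales as 1/effort."""
    return nl / effort_fraction

# Illustrative values only, not the actual optimization's parameters.
ell = np.arange(2, 5000)
nl = knox_nl(ell, depth_uk_arcmin=1.0, fwhm_arcmin=1.0)
```

The effort rescaling encodes the assumption that detector-years buy integration time, so noise power scales inversely with the fraction of effort assigned to a channel.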

For the arcminute-scale effort, as described here, an experiment with \(1\) arcmin resolution and mapping speed equivalent to the 145 LowRes channel was assumed in order to calculate an effective level of residual lensing. In particular, this procedure takes into account achieved on-sky detector performance (yield, mapping speed, effective observing time, etc.), but unlike the LowRes part, it does not account for mode filtering, which affects sky coverage and S/N per mode, or for noise contributions arising from the non-uniformity of our surveys. Neglecting these effects amounts to assuming a uniform survey weight and no reduction in the effective number of degrees of freedom due to filtering -- equivalently, it results in deeper maps. One can argue whether this is the right thing to do when using LowRes information to inform a HighRes experiment. To quantify this effect, in addition to what was done for the Science Book, below I also look at taking filtering and survey non-uniformity into account for the HighRes experiment -- this gets the moniker "HighRes (+terms)."
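To make the two neglected effects concrete, here is a sketch of how they are commonly folded into an effective noise level: non-uniform coverage is summarized by an inverse-variance-weighted effective depth, and deconvolving a mode-filtering transfer function \(F_l \le 1\) inflates the noise per mode. Both functions are illustrative; they are not the actual "+terms" implementation:

```python
import numpy as np

def effective_depth(depth_map_uk_arcmin):
    """Inverse-variance-weighted effective map depth (uK-arcmin) for a
    non-uniform survey.  Uniform coverage recovers the input depth."""
    w = 1.0 / np.asarray(depth_map_uk_arcmin, dtype=float) ** 2
    return np.sqrt(np.sum(w) / np.sum(w**2))

def filtered_nl(nl, transfer_fl):
    """Dividing out a filtering transfer function F_l <= 1 raises the
    effective noise per mode: N_l -> N_l / F_l."""
    return np.asarray(nl, dtype=float) / np.clip(transfer_fl, 1e-6, 1.0)
```

Ignoring both effects (uniform weight, \(F_l = 1\)) reproduces the Science Book treatment; including them is what distinguishes "HighRes (+terms)."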

Using an iterative estimator (code developed by Kimmy Wu) following the formalism in Smith et al. (arXiv:1010.0048), the ratio \(C_{\ell, res}/C_{\ell, lens}\) is calculated; the results are presented in this plot. PR stands for the experiment used for the "phi/lensing reconstruction," and EM stands for the experiment (or combination of experiments) used to obtain the E-modes. The combined map depth of EM is assumed to be \(1 \mu K\)-arcmin, though we have seen before (from Kimmy) that the ratio \(C_{\ell, res}/C_{\ell, lens}\) depends only weakly on this noise level, as seen here. The \(l_{min}\) in the plot refers to the E/B inputs to \(\Phi\). For all optimizations I assume \(l_{min,PR}=200\), \(l_{max}=4000\), and all cases assume complete E-mode coverage (i.e. \(l_{min,EM}=30\)) for the formation of the B-mode template. Practically, this is a scenario in which the arcmin-scale experiment may be noisy at low \(l\), but we can nonetheless measure all of the E-modes over this range, to the required precision, with either the arcmin-scale or the degree-scale experiment. This complete E-mode map is then used to form a B-mode template by lensing these E-modes with the reconstructed \(\Phi\).
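For intuition about what the iterative estimator converges toward, a crude non-iterative approximation is often quoted: with noiseless E-modes, the residual lensing B power per \(\Phi\) mode is suppressed by the Wiener-filter weight of the reconstruction, i.e. \(C_{\ell,res}/C_{\ell,lens} \approx N_\ell^{\phi\phi}/(C_\ell^{\phi\phi} + N_\ell^{\phi\phi})\). This sketch is *not* the Smith et al. iterative calculation used above (which iterates the reconstruction on partially delensed maps), just the single-pass limit:

```python
import numpy as np

def residual_fraction(clpp, nlpp):
    """Single-pass delensing approximation: the squared cross-correlation
    coefficient rho^2 = C^pp / (C^pp + N^pp) sets the per-mode suppression,
    leaving a residual fraction 1 - rho^2 = N^pp / (C^pp + N^pp)."""
    clpp = np.asarray(clpp, dtype=float)
    nlpp = np.asarray(nlpp, dtype=float)
    rho2 = clpp / (clpp + nlpp)
    return 1.0 - rho2
```

A perfect reconstruction (\(N^{\phi\phi} \to 0\)) gives complete delensing, while a very noisy one leaves the lensing B power untouched; the iterative estimator interpolates below this single-pass curve.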

Below is a worked example for the case presented here.