This posting is in the style of the post-CDT optimization posting.
Prior to that, there was also a similar optimization, which led to the
definition of the noise spectra that were in turn used to define Data
Challenge 2.0.
In this posting I contrast the post-CDT optimization, which was performed under the assumption of 52 cm apertures, with a version that assumes a 44 cm
aperture, as requested by the technical council. This change of aperture
results in larger beams (by a factor of 52/44 = 1.18), while all else is held the same; a short sketch of this scaling is given below.
In addition I also compare moving the 20 GHz channel to a 6-m class telescope
against the case where the 20 GHz channel is on the small-aperture telescopes. The
specifications for the high-res 20 GHz channel are presented here.
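To make this concrete, below is a minimal Python sketch of the diffraction-limited scaling \(\mathrm{FWHM}\propto\lambda/D\). The prefactor (and therefore the absolute FWHM values) is an assumption, since the real beams depend on the aperture illumination, but the 52/44 = 1.18 ratio between the two apertures does not depend on it.

```python
import math

# Rough sketch of how the beam FWHM scales with aperture diameter D:
# FWHM ~ k * lambda / D for a diffraction-limited telescope.  The prefactor k
# depends on the aperture illumination, so the absolute numbers are only
# indicative, but the 52/44 ~= 1.18 ratio between apertures is exact.

C = 2.998e8        # speed of light [m/s]
K_ILLUM = 1.2      # assumed illumination-dependent prefactor (placeholder)

def fwhm_arcmin(freq_ghz: float, aperture_m: float) -> float:
    """Approximate diffraction-limited FWHM in arcmin."""
    wavelength_m = C / (freq_ghz * 1e9)
    return math.degrees(K_ILLUM * wavelength_m / aperture_m) * 60.0

for nu in [30, 40, 85, 95, 145, 155, 215, 270]:
    f52, f44 = fwhm_arcmin(nu, 0.52), fwhm_arcmin(nu, 0.44)
    print(f"{nu:3d} GHz: {f52:5.1f}' (52 cm) -> {f44:5.1f}' (44 cm), ratio {f44/f52:.2f}")
```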
1. Worked-out Example; Experiment Specification
Below, similar to Sections 2 and 3 of the posting linked above, I present an
application of this framework to an optimization grounded in achieved
performance.
- For this particular example I assume nine S4 channels: {20, 30, 40, 85, 95, 145, 155, 215, 270} GHz, two WMAP channels: {23, 33} GHz, and seven Planck channels: {30, 44, 70, 100, 143, 217, 353} GHz.
- This example assumes 0.44 m/0.52 m apertures (with the 20 GHz channel on either a high-res
or a low-res instrument), and scales the beams accordingly. Note: for the cases where the 20 GHz channel is on a small-aperture telescope this is not strictly true; there I assume a beam equal to the
one at 30 GHz (i.e. I assume that the aperture scales by the required amount to
keep the 20 GHz beam fixed to the 30 GHz one).
- The assumed unit of effort is equivalent to 500 det-yrs at 150 GHz.
For other channels, the number of detectors is calculated as
\(n_{det,150}\times \left(\frac{\nu}{150}\right)^2\), i.e. assuming comparable
focal-plane area (see the first sketch after this list). The projections run out to a total of 3,000,000 det-yrs;
if all at 150 GHz, this would be equivalent to 500,000 detectors operating for
6 yrs, which seems like a comfortable upper bound for what might be conceivable
for S4 (S4-scale surveys seem likely to be in the range of \(10^6\) to
\(2.5\times10^6\) det-yrs).
- First, I want to emphasize that the NET numbers that follow are only used to determine the scalings between different channels, and NOT to calculate
sensitivities; all sensitivities are based on achieved performance. The ideal NETs per
detector are assumed to be {214, 177, 224, 270, 238, 309, 331, 747, 1281} \(\mu\mathrm{K}_\mathrm{CMB}\sqrt{s}\). This is the last column of the
table in the Band Definition posting. Note: these updated NETs are calculated
for a 100 mK bath, as opposed to 250 mK before, and are therefore lower than
before.
- The BPWFs, \(\ell\)-binning, and \(\ell\)-range are assumed to cover \(\ell=[30,330]\), yielding 9 bins with nominal centers at \(\ell\) = {37.5, 72.5, 107.5, 142.5, 177.5, 212.5, 247.5, 282.5, 317.5}.
- The fiducial model for the Fisher forecasting is centered at \(r=0\),
with \(A_{dust} = 4.25\) (best-fit value from BK14) and
\(A_{sync}=3.8\) (95% upper limit from BK14). In addition I also test \(A_{sync}=1.0\) and \(A_{sync}=20.0\). The frequency and spatial spectral indices are centered at \(\beta_{dust}=1.59\), \(\beta_{sync}=-3.10\), \(\alpha_{dust}=-0.42\), \(\alpha_{sync}=-0.6\), and the dust/sync correlation is centered at
\(\epsilon=0\). I also introduce \(\delta_{dust}\), a dust decorrelation
parameter (always ON), and \(\delta_{sync}\), a sync decorrelation
parameter (always ON). The dust
decorrelation parametrization is exactly as described in Section 5 of the
initial optimization posting; the synchrotron decorrelation parameter has the
same frequency and spatial
form as the dust one, but is normalized at (23 GHz, 33 GHz, \(\ell=80\)). A sketch of this general type of parametrization is given after this list. While this parameter is allowed to vary freely in the Fisher optimization,
I center it at zero (or 1% decorrelation), given that we have no good information on its value.
- The Fisher matrix is 10-dimensional. The 10 parameters we are constraining are {\(r, A_{dust}, \beta_{dust}, \alpha_{dust}, A_{sync}, \beta_{sync}, \alpha_{sync}, \epsilon, \delta_{dust}, \delta_{sync}\)}, where \(\beta_{dust}\) and \(\beta_{sync}\) have Gaussian priors of width 0.11 and 0.30 respectively, and the rest have flat priors (see the last sketch after this list for how the priors enter).
- As before, I implement delensing as an extra band in the optimization.
See the description underneath Table 1 in this posting for a more in-depth
description of how this is done.
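To make the effort bookkeeping above concrete, here is a minimal sketch (in Python) of the stated \(n_{det,150}\times\left(\frac{\nu}{150}\right)^2\) scaling applied to one unit of effort (500 det-yrs at 150 GHz); the channel list and numbers are the ones given above, and nothing else is assumed.

```python
# Sketch of the effort-to-detector bookkeeping described above: one unit of
# effort corresponds to 500 det-yrs at 150 GHz, and a unit placed in another
# channel carries n_det,150 * (nu/150)**2 det-yrs, i.e. comparable
# focal-plane area across channels.

S4_FREQS_GHZ = [20, 30, 40, 85, 95, 145, 155, 215, 270]
DETYRS_PER_UNIT_150 = 500.0

def det_yrs_per_unit(nu_ghz: float) -> float:
    """Det-yrs represented by one unit of effort placed at frequency nu_ghz."""
    return DETYRS_PER_UNIT_150 * (nu_ghz / 150.0) ** 2

for nu in S4_FREQS_GHZ:
    print(f"{nu:3d} GHz: {det_yrs_per_unit(nu):8.1f} det-yrs per unit of effort")
```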
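The decorrelation parametrization itself is defined in the posting referenced above; purely as an illustration of its general structure, the sketch below implements a commonly used form: the correlation ratio is fixed to a chosen value at a pivot frequency pair and \(\ell=80\), scaled to other frequency pairs as a Gaussian in the log-frequency ratio and to other multipoles with a power law. The sync pivot pair (23 GHz, 33 GHz) is taken from the text, while the dust pivot pair (217 GHz, 353 GHz), the \(\ell\) scaling option, and the example \(\delta\) values are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch of a decorrelation parametrization of the general type
# described above: the correlation ratio between two frequencies is set to
# delta_pivot at a pivot frequency pair and ell = 80, scaled to other
# frequency pairs as a Gaussian in the log-frequency ratio and to other
# multipoles with a power law.  The exact functional form of the referenced
# posting may differ; the dust pivot pair below is a conventional choice,
# while the sync pivot (23, 33 GHz) is the one quoted in the text.

def decorr_factor(nu1_ghz, nu2_ghz, ell, delta_pivot,
                  nu_pivot=(217.0, 353.0), ell_pivot=80.0, ell_power=0.0):
    """Correlation suppression factor between maps at nu1 and nu2 (GHz)."""
    freq_scaling = np.log(nu1_ghz / nu2_ghz) ** 2 / np.log(nu_pivot[0] / nu_pivot[1]) ** 2
    ell_scaling = (ell / ell_pivot) ** ell_power
    return delta_pivot ** (freq_scaling * ell_scaling)

# Dust-like example (pivot 217/353 GHz) and sync-like example (pivot 23/33 GHz):
print(decorr_factor(95.0, 150.0, 80.0, delta_pivot=0.97))
print(decorr_factor(20.0, 30.0, 80.0, delta_pivot=0.99, nu_pivot=(23.0, 33.0)))
```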
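Finally, a minimal sketch of how the priors enter the Fisher step: a Gaussian prior of width \(\sigma_p\) on a parameter adds \(1/\sigma_p^2\) to the corresponding diagonal element of the Fisher matrix, flat priors add nothing, and the marginalized \(\sigma_r\) is read off the inverse. The Fisher matrix below is a random positive-definite stand-in, not an actual forecast.

```python
import numpy as np

# Sketch of the prior handling in the 10-parameter Fisher analysis: a Gaussian
# prior of width sigma_p adds 1/sigma_p**2 to the corresponding diagonal
# element, flat priors add nothing, and the marginalized sigma_r is the sqrt
# of the (r, r) element of the inverse.  The Fisher matrix here is a random
# positive-definite stand-in, not an actual forecast.

params = ["r", "A_dust", "beta_dust", "alpha_dust", "A_sync",
          "beta_sync", "alpha_sync", "epsilon", "delta_dust", "delta_sync"]

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
fisher = A @ A.T + 10.0 * np.eye(10)              # placeholder Fisher matrix

priors = {"beta_dust": 0.11, "beta_sync": 0.30}   # Gaussian prior widths
for name, sigma_p in priors.items():
    i = params.index(name)
    fisher[i, i] += 1.0 / sigma_p**2

cov = np.linalg.inv(fisher)
sigma_r = np.sqrt(cov[params.index("r"), params.index("r")])
print(f"marginalized sigma_r = {sigma_r:.3e}  (placeholder inputs)")
```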
2. Parameter Constraints; \(\sigma_r\) performance
3. Conclusions
For the CDT optimization, the end point was chosen to be 1.160M det-yrs,
yielding a \(\sigma_r\) of \(6.5 \times 10^{-4}\) (for the optimal solution).
For the same foreground assumptions (\(A_{sync}=3.8\) and \(\delta_{sync}=1\)),
achieving the same constraint with a 44 cm aperture instrument requires 1.215M
det-yrs -- a 5% increase in effort. For the same level of effort as the CDT
report, the 44 cm aperture instrument achieves a \(\sigma_r\) of
\(6.7 \times 10^{-4}\) -- larger by ~3% than the 52 cm one.
Switching the 20 GHz channel to a high-res channel yields a \(\sigma_r\)
that is 2% better than the low-res version (for the CDT configuration).
Changing the aperture from 52 cm to 44 cm for the rest of the channels worsens the
constraint by 2%, and requires ~3% more effort to recover it. We see here
that the difference between 52 cm and 44 cm is smaller than in the fully
low-res instrument case.
A similar type of effect is seen in the other fiducial models with more or less
sync, but similar decorrelation levels.
When switching to higher levels of decorrelation we observe a reversal: the
high-res version of the instrument now yields a \(\sigma_r\) that is ~3% worse
than the low-res version (with 52 cm for all channels except 20 GHz). Changing
the aperture from 52 cm to 44 cm worsens the constraint by another 2%. Making up
this cumulative 5% difference requires roughly 7.5% more effort.