This posting is in the style of the post-CDT optimization posting.
Prior to that, there was also a similar optimization, which
led to the definition of the noise spectra that were in turn used to
define Data Challenge 2.0.
In this posting I contrast the post-CDT optimization, which was performed under the assumption of 52cm apertures, with a version that assumes a 44cm
aperture, as requested by the technical council. This change of aperture
results in larger beams (by a factor of 52/44 = 1.18), with all else held the same.
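As a quick illustration of how this enters the forecast inputs: a diffraction-limited beam scales as FWHM \(\propto \lambda/D\), so shrinking the aperture from 52cm to 44cm inflates every FWHM by 52/44 \(\approx\) 1.18. The short sketch below applies that factor (and the 20 GHz exception described in Section 1 below); the 52cm FWHM values are illustrative placeholders, not the numbers used in the actual optimization.

```python
# Minimal sketch: diffraction-limited beams scale as FWHM ~ lambda / D, so each
# FWHM grows by D_old / D_new when the aperture shrinks from 52 cm to 44 cm.
# The 52 cm FWHM values below are illustrative placeholders only.

D_OLD_CM, D_NEW_CM = 52.0, 44.0
SCALE = D_OLD_CM / D_NEW_CM  # ~1.18

fwhm_52cm_arcmin = {30: 77.0, 40: 58.0, 85: 27.0, 95: 24.0,
                    145: 16.0, 155: 15.0, 215: 11.0, 270: 9.0}  # placeholders

fwhm_44cm_arcmin = {nu: f * SCALE for nu, f in fwhm_52cm_arcmin.items()}

# Exception (see Section 1): the 20 GHz channel keeps the 30 GHz beam, i.e. its
# aperture is assumed to grow by whatever factor holds the beam fixed.
fwhm_44cm_arcmin[20] = fwhm_44cm_arcmin[30]

for nu in sorted(fwhm_44cm_arcmin):
    print(f"{nu:3d} GHz: {fwhm_44cm_arcmin[nu]:5.1f} arcmin")
```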
1. Worked-out Example; Experiment Specification
Below, similar to Sections 2 and 3 of the posting linked above, I present an
application of this framework to an optimization grounded in achieved
performance.
- For this particular example I assume nine S4 channels: {20, 30, 40, 85, 95, 145, 155, 215, 270} GHz, two WMAP channels: {23, 33} GHz, and seven Planck channels: {30, 44, 70, 100, 143, 217, 353} GHz.
- This example assumes 0.44m apertures, and scales the beams accordingly. Note: this is not true for the 20 GHz channel, which assumes a
beam equivalent to the 30 GHz beam (i.e. we assume that the aperture scales
by the required amount to keep the beam fixed to the 30 GHz one).
- The assumed unit of effort is equivalent to 500 det-yrs at 150 GHz.
For other channels, the number of detectors is calculated as
\(n_{det,150}\times \left(\frac{\nu}{150}\right)^2\), i.e. assuming comparable
focal-plane area. The projections run out to a total of 3,000,000 det-yrs
(3,000,000 det-yrs, if all at 150 GHz, would be equivalent to 500,000
detectors operating for 6 yrs -- this seems like a comfortable upper bound for
what might be conceivable for S4; S4-scale surveys seem likely to be in the
range of \(10^6\) to \(2.5\times10^6\) det-yrs). A short sketch of this
effort-to-det-yr bookkeeping is given after this list.
- I first want to emphasize that the NET numbers that follow are only used to determine the scalings between different channels, and NOT to calculate
sensitivities.
All sensitivities are based on achieved performance. The ideal NETs per
detector are assumed to be {214, 177, 224, 270, 238, 309, 331, 747, 1281} \(\mu\mathrm{K}_\mathrm{CMB} \sqrt{s}\). This is the last column of the
table in the Band Definition posting. Note: these updated NETs are calculated
for a 100mK bath, as opposed to 250mK before, and are therefore lower than
before.
- The BPWFs and \(\ell\) binning assume an \(\ell\) range of [30, 330], yielding 9 bins with nominal centers at \(\ell\) of {37.5, 72.5, 107.5, 142.5, 177.5, 212.5, 247.5, 282.5, 317.5}.
- The fiducial model for the Fisher forecasting is centered at \(r = 0\),
with \(A_{dust} = 4.25\) (best-fit value from BK14) and
\(A_{sync} = 3.8\) (95% upper limit from BK14). The frequency and spatial spectral indices are centered at \(\beta_{dust}=1.59, \beta_{sync}=-3.10, \alpha_{dust}=-0.42, \alpha_{sync}=-0.6\), and the dust/sync correlation is centered at
\(\epsilon=0\). I also introduce \(\delta_{dust}\), a dust decorrelation
parameter, and \(\delta_{sync}\), a sync decorrelation parameter (both are
always ON). The dust decorrelation parametrization is exactly as described in
Section 5 of the initial optimization posting. The synchrotron decorrelation
parameter is introduced here for the first time; it has the same frequency and
spatial form as the dust decorrelation parameter, but is normalized at
(23 GHz, 33 GHz, \(\ell=80\)). While this parameter is allowed to vary freely
in the Fisher optimization, I center it at zero, given that we have no good
information on its value.
- The Fisher matrix is 10-dimensional. The 10 parameters we are constraining are {\(r, A_{dust}, \beta_{dust}, \alpha_{dust}, A_{sync}, \beta_{sync}, \alpha_{sync}, \epsilon, \delta_{dust}, \delta_{sync}\)}, where \(\beta_{dust}\) and \(\beta_{sync}\) have Gaussian priors of width 0.11 and 0.30, respectively, and the rest have flat priors (a sketch of how these priors enter the Fisher matrix appears after this list).
- As before, I implement delensing as an extra band in the optimization.
See the text underneath Table 1 in this posting for a more in-depth
description of how this is done.
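To make the effort bookkeeping above concrete, here is a minimal sketch of how one unit of effort (500 det-yrs at 150 GHz) translates into per-channel det-yrs, and how the ideal NETs enter only as relative weights between channels. The absolute sensitivity normalization comes from achieved performance and is not reproduced here; the function names are my own.

```python
import numpy as np

# S4 channels (GHz) and ideal per-detector NETs (uK_CMB sqrt(s)) quoted above.
# The NETs are used ONLY to set relative weights between channels.
FREQS = np.array([20, 30, 40, 85, 95, 145, 155, 215, 270])
NET = np.array([214, 177, 224, 270, 238, 309, 331, 747, 1281], dtype=float)

UNIT_DETYRS_150 = 500.0  # one unit of effort = 500 det-yrs at 150 GHz


def detyrs_per_channel(units):
    """Det-yrs bought by `units` units of effort in each channel, assuming
    comparable focal-plane area, i.e. n_det scaling as (nu / 150)^2."""
    return units * UNIT_DETYRS_150 * (FREQS / 150.0) ** 2


def relative_map_noise(units):
    """Relative map-noise level per channel, proportional to NET / sqrt(det-yrs).
    Only ratios between channels are meaningful; the absolute level is anchored
    to achieved performance in the real optimization."""
    return NET / np.sqrt(detyrs_per_channel(units))


# Example: one unit of effort in every channel.
units = np.ones(len(FREQS))
print(np.round(detyrs_per_channel(units), 1))
print(np.round(relative_map_noise(units), 3))
```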
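And here is a minimal sketch of the final Fisher step referenced in the parameter-list bullet: the Gaussian priors on \(\beta_{dust}\) and \(\beta_{sync}\) enter as additive diagonal terms, and \(\sigma_r\) is read off from the inverse. The derivative and bandpower-covariance inputs below are random stand-ins, not the actual forecast machinery.

```python
import numpy as np

# Parameter ordering of the 10-dimensional Fisher matrix described above.
PARAMS = ["r", "A_dust", "beta_dust", "alpha_dust",
          "A_sync", "beta_sync", "alpha_sync",
          "epsilon", "delta_dust", "delta_sync"]


def sigma_r(dC_dtheta, bpcov_inv, priors=None):
    """Fisher forecast of sigma(r).

    dC_dtheta : (n_params, n_bandpowers) derivatives of the bandpower
                expectation values w.r.t. each parameter (stand-ins here).
    bpcov_inv : (n_bandpowers, n_bandpowers) inverse bandpower covariance.
    priors    : dict of parameter name -> Gaussian prior width; parameters
                not listed effectively get flat priors.
    """
    F = dC_dtheta @ bpcov_inv @ dC_dtheta.T
    if priors:
        for name, width in priors.items():
            i = PARAMS.index(name)
            F[i, i] += 1.0 / width ** 2  # a Gaussian prior adds diagonal information
    return np.sqrt(np.linalg.inv(F)[PARAMS.index("r"), PARAMS.index("r")])


# Usage with random stand-in inputs (illustration only).
rng = np.random.default_rng(0)
derivs = rng.normal(size=(len(PARAMS), 9 * 6))  # 9 ell bins x a few spectra
bpcov_inv = np.eye(9 * 6)                       # identity stand-in
print(sigma_r(derivs, bpcov_inv,
              priors={"beta_dust": 0.11, "beta_sync": 0.30}))
```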
2. Parameter Constraints; \(\sigma_r\) Performance
3. Conclusions
For the CDT optimization, the end point was chosen to be 1.160M det-yrs,
yielding a \(\sigma_r\) of \(6.5 \times 10^{-4}\) (for the optimal solution).
To achieve the same constraint with a 44cm aperture instrument we need 1.395M
det-yrs -- a 20% increase in effort. For the same level of effort as the CDT
report, we achieve a \(\sigma_r\) of \(7.4 \times 10^{-4}\) with the 44cm
aperture instrument -- about 14% worse than with the 52cm one.
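Both percentages follow directly from the numbers quoted above:
\[
\frac{1.395\,\mathrm{M}}{1.160\,\mathrm{M}} \approx 1.20,
\qquad
\frac{7.4 \times 10^{-4}}{6.5 \times 10^{-4}} \approx 1.14 .
\]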
Possible follow-ups:
- Synchrotron assumptions -- the increase in beamwidth for the low-frequency
channels due to the decrease in aperture might also be hindering our ability to
minimize the synchrotron residuals.
- 20 GHz channel assumptions -- currently I am assuming that it resides on the
small-aperture telescope and has the same beam as the 30 GHz channel. For the
CDT report, Raphael did some studies that considered moving the 20 GHz
channel to the large-aperture dish. Perhaps this is worth revisiting here as
well, though of course the larger \(\ell_{knee}\) for the large vs. small
aperture needs to be accounted for, and the move might not yield a better
result anyway.