Maximum likelihood search with expectation values as input

 (C. Umiltà)

Summary

I run a maximum likelihood (ML) search with expectation values as the input data. Information on dc02 maps can be found here. I try three different configurations (a rough sketch of the fitting setup follows the list):

  1. all parameters (\(r, A_d, A_s, A_L\)) are zeroed except one, for which I test whether the ML search can recover different expectation values. I vary one parameter and plot that parameter only, while all other parameters are zeroed and kept fixed in the ML search;
  2. same as before, except that the parameters which are not varied are kept fixed at their nominal expectation values;
  3. all parameters are set free, and for one of them I test different expectation values. I vary the expectation value of one parameter and plot that parameter only, but the ML search fits all of them.
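As a point of reference, a minimal sketch of configuration 3 is given below: the "data" are set equal to the model expectation and the negative log-likelihood is minimized over all parameters. The templates, noise level, expectation values, and starting point are illustrative assumptions only, not the actual dc02 setup.

```python
# Toy sketch (not the actual pipeline): recover parameters from noiseless
# "data" set equal to the model expectation values.
import numpy as np
from scipy.optimize import minimize

ell = np.arange(30, 300)

# Placeholder spectral templates standing in for the real r, A_d, A_s, A_L shapes
tmpl = {
    "r":   1e-3 * (ell / 80.0) ** -0.4,
    "A_d": 5e-3 * (ell / 80.0) ** -0.6,
    "A_s": 2e-3 * (ell / 80.0) ** -0.8,
    "A_L": 1e-3 * np.ones_like(ell, dtype=float),
}
sigma = 1e-4  # toy per-ell uncertainty

def model(params):
    r, A_d, A_s, A_L = params
    return r * tmpl["r"] + A_d * tmpl["A_d"] + A_s * tmpl["A_s"] + A_L * tmpl["A_L"]

def neg_log_like(params, data):
    resid = data - model(params)
    return 0.5 * np.sum((resid / sigma) ** 2)

# Configuration 3: all parameters free; input data = expectation of the model
truth = np.array([0.01, 1.0, 1.0, 1.0])   # illustrative expectation values
data = model(truth)                        # noiseless expectation-value "data"

res = minimize(neg_log_like, x0=np.array([0.0, 0.5, 0.5, 0.5]),
               args=(data,), method="Nelder-Mead")
print("recovered:", res.x)
print("relative difference:", (res.x - truth) / truth)
```

The printed relative difference is the quantity shown in the right column of Fig. 1.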
Results are shown in Fig. 1, where I plot the recovered ML value against the expectation value. The horizontal axis displays the incremental change in the varied parameter. Different panels show results for different varied parameters. We can observe that:
  1. overall precision is at worst ~0.1%;
  2. the precision on \(r\) degrades more than that of the other parameters as parameters are set free;
  3. priors don't seem to affect the precision of the recovery of \(A_d\) and \(A_s\);
  4. when all parameters are zeroed, negative \(r\) is not recovered, but this case is strongly unphysical anyway. When foregrounds and lensing are non-zero, even negative \(r\) values are recovered correctly.

Figure 1:
ML search results where the input data correspond to the expectation values of the model. The left column shows the expectation value (black dashed) and the recovered value (blue dot) of the varied parameter. The right column shows the relative difference of the ML search result with respect to the expectation value. Blue/orange points are for results with Gaussian priors \(\beta_{dust}\)=(1.6, 0.11) and \(\beta_{sync}\)=(-3.1, 0.3), or top-hat priors \(\beta_{dust}\)=[0.8, 2.4] and \(\beta_{sync}\)=[-4.5, -1.5], respectively.
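For reference, the two prior choices quoted in the caption could enter the fit as additive penalty terms on the negative log-likelihood, as in the sketch below. The function names and the way they attach to the pipeline are assumptions; only the prior means, widths, and ranges come from the caption.

```python
# Sketch of the two prior options on the spectral indices (values from the caption).
import numpy as np

def gaussian_neg_log_prior(beta_dust, beta_sync):
    # Gaussian priors: beta_dust ~ N(1.6, 0.11), beta_sync ~ N(-3.1, 0.3)
    return (0.5 * ((beta_dust - 1.6) / 0.11) ** 2
            + 0.5 * ((beta_sync + 3.1) / 0.3) ** 2)

def tophat_neg_log_prior(beta_dust, beta_sync):
    # Top-hat priors: beta_dust in [0.8, 2.4], beta_sync in [-4.5, -1.5]
    # (flat inside the range, infinite penalty outside)
    if 0.8 <= beta_dust <= 2.4 and -4.5 <= beta_sync <= -1.5:
        return 0.0
    return np.inf
```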