2013-10-22

How to lose fewer samples with NOI.BOX_SIZE

At the end of the first iteration, when the common-mode signal (COM model), low-frequency drift (FLT model), and the astronomical signal (AST model) have all been removed from the data, the remaining residuals are used to estimate the variance of the high-frequency noise in each bolometer (the NOI model). The reciprocals of these variances are used as weights on the second and subsequent iterations, and are applied to the time-series values that are binned to form the map.

By default, the NOI model contains a single variance value for each bolometer, and so all samples from a given bolometer get the same weight in the map. However, the NOI.BOX_SIZE parameter can be used to force each bolometer to have multiple variance estimates, with one value for each box of samples. See this post.
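For instance, a configuration might include a line like the following (the box size of 2000 samples is purely illustrative, not a recommended value):

       noi.box_size = 2000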

Whilst this is often an improvement over the default situation, in that it allows for varying noise levels, it has two potential issues:
  1. All samples in a box are given the same variance value, thus introducing the potential for steps in the weights and corresponding artifacts in the map.
  2. The noise in each box is found by taking the FFT of the data and finding the average power in the 2 to 10 Hz band. Using an FFT is problematic on short time streams, as any missing values (such as those caused by steps, jumps, common-mode flagging, etc.) have a proportionally larger effect, causing the FFT to be less representative of the data. A requirement that the time stream should have a majority of unflagged values in order to be usable means that a lot of data is often rejected because a reliable FFT cannot be obtained.
An alternative scheme that avoids these two issues has been added to makemap, and is enabled by setting "NOI.BOX_TYPE = 1" in your configuration. This option causes the noise estimate at each sample to be set equal to the variance of the neighbouring sample values, i.e. those that are less than half a box size (NOI.BOX_SIZE/2) away from the sample. This scheme does not involve an FFT, and it produces a continuously varying estimate of the variance, so it avoids steps in the weights used to make the map. It is also minimally affected by flagged samples, and so results in fewer data samples being flagged as unusable (as shown by the "NOISE" entry in the sample statistics at the end of each iteration).
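A configuration fragment that switches on this scheme might therefore look like this (the box size value is again illustrative; choose one appropriate for your data):

       noi.box_type = 1
       noi.box_size = 2000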

2013-10-08

Weighting different observations in makemap and skyloop

Both makemap and skyloop can be used to produce maps from multiple overlapping observations. Up to now, it has not been possible to control the weight given to each observation when forming the total co-added map; instead, fixed weights equal to the reciprocals of the pixel variances were used. The new CHUNKWEIGHT configuration parameter can now be used to specify a weight for each observation. The weights determined from the pixel variances are multiplied by these user-specified weights, which default to unity.

The weights can be specified in two ways:
  1. The CHUNKWEIGHT configuration parameter can be set to an arbitrary algebraic expression in which the variable names are the names of FITS keywords. The keywords must exist in the input data and have numerical values, and the expression should be enclosed in double quotes. The keyword values are read from each observation's FITS header and substituted into the expression to get the weight for that observation. As an example, you could include this line in your configuration file:

       chunkweight = "AMSTART * WVMTAUST"

  2. A comma-separated list of explicit numerical weights can be supplied, enclosed in parentheses. If fewer values are given than there are observations, the last value is replicated as often as necessary. For example:

       chunkweight = (1.0,2.0,3.0)
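Either form goes into the configuration file passed to makemap (or skyloop). A run might then look something like this (the file names here are hypothetical):

       makemap in=^obs.lis out=coadd method=iterate config=^myconf.lis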


Holes in heterodyne maps

HARP has had only twelve operational receptors since May 20.  If one or two of these also suffer from interference and fail quality assurance, there can be locations in the reduced spectral cubes where no valid data are present.  Such missing spectra (shown in blue) will often be arranged in a grid pattern, as in the example below.


This can create issues if you wish to integrate flux or simply improve the cosmetic appearance.

One approach is to smooth the data, but this degrades all the spectra.  Instead, use KAPPA::FILLBAD.  This replaces just the bad values with a smooth function derived from neighbouring values, obtained by iterating to a solution of Laplace's equation.

Just using FILLBAD's default values will duly fill in the holes.  However, you might prefer not to smooth along the spectral axis; you can do this by setting the SIZE parameter to [s,s,0], where s is the initial scale length in pixels and the zero means do not smooth along the third axis.  The scale length should be about half the size of the largest region of bad data to be replaced.  Since the largest bad regions, apart from the cube peripheries, are two pixels across, a command like this
fillbad in=holey out=filled size="[1,1,0]"
is appropriate.  The graphic below shows the same region as before, but with the holes filled in.


FILLBAD does what it says on the tin.

There will be some smoothing, but it is not obvious.  Below is a KAPPA::CLINPLOT graphic for part of the filled cube.  Each spectrum corresponds to a pixel in the image above. Can you identify the three spectra which were previously bad?
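For reference, a plot along these lines can be made with a command such as the following, where the NDF section selects a small spatial region of the filled cube (the pixel bounds shown are hypothetical, purely for illustration):

       clinplot "filled(30:37,25:32,)" device=xwindows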