
2014-07-25

A new Starlink release contains notable updates to the SCUBA-2 configuration files.


The latest Starlink release - 2014A - has been made public. For details please read the release notes provided at: http://starlink.jach.hawaii.edu/starlink/2014A

As part of this new release we want to highlight one significant update and a couple of new additions to the arsenal of SCUBA-2 reduction config files.

Updates to the bright_extended config file

The config file 'dimmconfig_bright_extended.lis' has always been intended for reducing data containing bright extended sources. It had remained untouched for a couple of years despite advances in our understanding of SCUBA-2 reduction of bright regions. The config file now contains the following parameters/links:

   ^$STARLINK_DIR/share/smurf/dimmconfig.lis

   numiter=-40
   flt.filt_edge_largescale=480
   ast.zero_snr = 3
   ast.zero_snrlo = 2

   ast.skip = 5
   flt.zero_snr = 5
   flt.zero_snrlo = 3


In previous Starlink releases, e.g. Hikianalia, the bright_extended configuration file contained only the following:

   numiter = -40
   ast.zero_snr = 5

   flt.filt_edge_largescale = 600


New 'FIX' config files

Two new config parameter files have been added. These are intended to be used with one of the existing dimmconfig files: they provide new values for selected parameters aimed at solving a particular problem ("blobs" in the final map, or very slow convergence). See the example after this list.

  • dimmconfig_fix_blobs.lis 
    • These parameters attempt to prevent smooth bright blobs of emission appearing in the final map. They do this by 1) identifying and flagging samples that appear to suffer from ringing, 2) using a soft-edged Butterworth filter in place of the normal hard-edged filter, and 3) rejecting samples for which the separate sub-arrays see a markedly different common-mode signal.
  • dimmconfig_fix_convergence.lis
    • The parameters defined by this file attempt to aid the convergence process, and should be used for maps that will not converge within a reasonable number of iterations.
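
For instance, a minimal user config that applies the blob fixes on top of the bright_extended settings could look like this (the particular combination is illustrative; choose the dimmconfig appropriate to your data):

   ^$STARLINK_DIR/share/smurf/dimmconfig_bright_extended.lis
   ^$STARLINK_DIR/share/smurf/dimmconfig_fix_blobs.lis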

2013-07-18

Inclusion of CSO Tau Fits for Determining FCFs

During the first few months of 2013 the WVM at JCMT had several periods of instability where it was unreliable for determining the value of tau. Because of this, members of the science team collaborated to produce a collection of smooth polynomial fits to the tau values from the 225-GHz tau meter at the CSO for the affected nights, which can be used to perform extinction correction in place of the WVM tau values.

In the latest version of Starlink (which you can get by rsyncing from /stardev at the JAC), makemap will now first check whether the date of the observation is one of the (somewhat sporadic) dates when the WVM was unstable. This check happens as long as the ext.tausrc parameter is set to "auto", which it is by default. If the date is one of the affected dates, makemap will look for an available fit to the CSO data in the file specified by the parameter ext.csofit, which is set up by default to refer to the collection of CSO fits produced at the JAC. If makemap cannot find a fit for an observation in the specified path, it will print a warning that no fit or WVM data is available and refuse to reduce the observation, though this shouldn't ever happen in normal operation.
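
For reference, the relevant settings are sketched below (the path is a placeholder; the shipped default already points at the JAC fit collection, so nothing normally needs changing):

   ext.tausrc = auto
   ext.csofit = /path/to/csofit_table   # placeholder; default refers to the JAC fits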

If you have observations taken between January 19 and May 14 of this year, using the latest version of starlink rsynced from /stardev at the JAC will help ensure that you get the best extinction data available.

The graph below shows a comparison between the FCFs derived from WVM data and those derived using the new CSO tau fits over the period of January-May 2013. The blue diamonds represent FCFs from the WVM, and the tan circles are FCFs from the CSO tau fits.


2013-05-15

Great results from Skyloop

This is just a quick post to show the clear benefits of using skyloop when reducing your own data (see our earlier blog or read: http://www.starlink.ac.uk/docs/sc21.htx/node51.html). Below is an image composed of several observations reduced individually using the iterative map maker. "Blooms" (bright false emission) are clearly evident at the edges of maps where high noise has made its way into the ast model.

 
Reducing the same data using skyloop helps to identify what is noise and what is true emission, particularly in the overlap regions. The benefit in this situation is clearly seen. The cost to the user is in computing power and time, but the result is clearly worth it.
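
As a minimal sketch (the file names are illustrative, and the parameters follow the usual makemap-style interface described in SC/21), a skyloop reduction looks much like a makemap call:

% skyloop in=^mymaps.lis out=combined_map \
          config='"^$STARLINK_DIR/share/smurf/dimmconfig_bright_extended.lis"'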


2012-09-12

FIT1D: A new SMURF command for ACSIS data

More time ago than I am willing to admit, I started coding a Starlink routine to fit spectral lines in ACSIS cubes. I got a long way until SCUBA-2 commissioning and calibration put a halt to it, but I have finally managed to finish the program, technically a beta version, as part of the upcoming Kapuahi release of Starlink.

Usage:
% fit1d  in  out  rms  [config]  [userval]  [pardir]  [parndf]  [parcomp]

What distinguishes FIT1D from other profile fitting routines is that it specifically attempts to deal with two issues: non-gaussian profile shapes and the fact that ACSIS data-cubes have many, potentially very different, profiles to fit. Regarding the latter, there are many fitting routines that produce fitted profiles for data-cubes, but FIT1D also produces cubes with the fitted parameters themselves and can use such files as input to give control over the fit of, in principle, each individual profile in the cube. Thus it is e.g. possible to fit broad lines on the nucleus of a galaxy and narrow lines everywhere else. More about that below.

A section about FIT1D has been added to the SMURF documentation in SUN/258, which I will try to summarize here. FIT1D is generic in that it can fit profiles along any axis of a hyper-cube of up to 7 dimensions, but it will be discussed here in the context of a default RA-Dec-Vel ACSIS cube. Note that the routine assumes that the data have been baseline-subtracted, using e.g. MFITTREND, i.e. that the profiles have a zero-level at 0.


Gauss-Hermite shapes as a function of the 3rd-order skewness coefficient 'h3' and the 4th-order kurtosis (peakiness) coefficient 'h4'. The red box indicates the limits on acceptable values for h3 and h4 as defined in the default configuration file. Note that the fitted profile by default is restricted to positive values and will omit the negative features shown.

Non-gaussian Profiles.

FIT1D essentially re-implements the fitting code for non-gaussian profiles from the GIPSY package (Kapteyn Institute, Groningen, The Netherlands). Function types that can be fitted are Gaussian, Gauss-Hermite, and Voigt profiles. In particular, Gauss-Hermite functions are a powerful extension when fitting profiles that are skewed, peaky, or only approximately gaussian. The figure above shows Gauss-Hermite profiles as a function of the skewness coefficient h3 and the kurtosis (peakiness) coefficient h4. See SUN/258 for further details, but note that the default setting in the configuration file is for FIT1D to suppress the negative features in the fitted profiles and to leave only the positive part of the Gauss-Hermites.
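
For reference, a common parametrization of the Gauss-Hermite series (the form used in the GIPSY-related literature, e.g. van der Marel & Franx 1993; FIT1D's exact normalization may differ) is

   \phi(x) = A \, e^{-y^2/2} \left[ 1 + h_3 H_3(y) + h_4 H_4(y) \right], \qquad y = (x - x_0)/\sigma,

where H_3(y) = (2\sqrt{2}\,y^3 - 3\sqrt{2}\,y)/\sqrt{6} and H_4(y) = (4y^4 - 12y^2 + 3)/\sqrt{24}. Non-zero h_3 skews the profile, while non-zero h_4 makes it more or less peaky than a pure Gaussian.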

Because of their ability to fit distorted shapes, Gauss-Hermites are particularly well suited to "capture" the maximum amount of emission from a cube. The fits can be remarkably accurate, as is shown in the figure below with a 3-component fit (i.e. up to 3 spectral lines) using gausshermite2 functions (i.e. fitting both h3 and h4). Collapsing the resulting cube with fitted profiles can thus result in an accurate and almost noise-free white-light or total-emission map.
Fit1d - Black: original profiles; Red: results of a
    3-component Gauss-Hermite2 fit (fitting both h3 and h4)


FIT1D derives its ability to fit a complex line-shape both from the Gauss-Hermite function and from the fact that it can fit multiple (sub-)components to get the best match possible. However, that can make the interpretation of the fits in terms of physical characteristics and quantities difficult, so for those you may also want to fit the line-shape with a single standard Gaussian function.

Component Parameter files

Besides a data-cube with the fitted profiles, FIT1D also outputs so-called Component parameter files as NDF extensions in the header of the output file. These can also be copied out as independent data-cubes (see the sketch below). There is a file for each component (i.e. line) that was fitted along the profile, up to the number of components requested by the user. Each plane of a Component parameter file has an image of the value of a fitted parameter across the field-of-view. For instance, the one resulting from a gaussian fit has images respectively showing the fitted Amplitude, Position (velocity), and FWHM, as well as a plane with an id-number of the function used.
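
As a hypothetical sketch of copying a component out (the extension path below is invented for illustration; consult SUN/258 for the actual extension and component names):

% fit1d in=cube out=cube_fit rms=0.2
% ndfcopy cube_fit.more.smurf_fit1d.comp_1 comp1   # extension name assumed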

Much of the (anticipated) use of FIT1D derives from the fact that Component parameter files can be used as input as well: either to provide initial estimates or fixed values to the fitting routine. The difference between values specified in a Component parameter file and ones declared in a User parameter values file is that the former can vary across the field-of-view whereas the latter will result in the same value being used for all profiles. E.g. for use with spectral-line surveys, the User parameter values file can be used to provide initial estimates of the frequencies or velocities at which lines are expected, or to fix fits at those frequencies.

By manipulating Component parameter files, e.g. ones resulting from an initial fit, the user can customize or correct subsequent fits. In the extreme, a Component parameter file could be made from scratch based on a model and be used to create a spectral-line data-cube with that model (config option: model_only=1), or be used as initial estimates for a fit. Of more practical use, Component parameter files can be used to correct problems associated with a fit, since the art of fitting is not in the fitting algorithm but in providing accurate initial estimates. For instance, the left image below shows a section of an Amplitude plane of a fit with problems in a few locations. Setting these locations to bad values and using FILLBAD to interpolate over them, the corrected Component parameter file was used as initial estimates for a subsequent fit, resulting in the image on the right. A sketch of this repair loop follows the figure caption below.

Fit1d - Left: Section of a parameter file showing
      originally fitted amplitudes; Right: Amplitudes after using a
      corrected parameter file from the original fit as initial estimates
      for a subsequent fit.
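
A minimal sketch of that repair loop, assuming the problem region has been captured in an ARD file (the file names, ARD region and rms value are all illustrative):

% kappa
% ardmask comp_1 badpix.ard comp_1_masked       # blank out the bad fits
% fillbad comp_1_masked comp_1_fixed niter=4    # interpolate over them
% smurf
% fit1d in=cube out=cube_fit2 rms=0.2 parndf=comp_1_fixed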

More creative options are possible: after an initial fit with a gaussian, the function id can be changed to a gausshermite1 in part of the field, and the resulting file used as initial estimates for a subsequent fit to account for skewed profiles there. Similarly, the initial guess of the FWHM can be made wide on e.g. the nucleus of a galaxy while leaving it narrower outside. As another example, the fit of multiple components can be limited to only part of the field by setting the parameter file for the second and higher components to bad values outside the relevant region (multiple component parameter files can be used as input: one for each component to be fitted).

In conclusion: please remember that this is a beta release and that you may run into unanticipated issues. Also, the chosen limits in the configuration file may need tweaking. If an initial fit looks poor, try adjusting minamp (in units of rms!) or, in particular, minfwhm (in units of pixels!) in the configuration file (see: $SMURF_DIR/smurf_fit1d.def). Also use range to limit the fit to a relevant region of the spectrum.

The released implementation of FIT1D can fit up to 7 components per profile per run, but the output of multiple runs, each covering a range in velocities or frequencies, can be combined. The fit itself is fully multi-threaded and will be much faster on a modern multi-core computer: a 3-component gausshermite2 fit of 1.1 million spectra (a 2 GB input file) took 15 minutes on a dual-core machine with 16 GB of memory, versus 4 minutes on one with 12 cores and 75 GB.

Happy fitting!

Remo



2012-06-17

Displaying an outline of the mask used to create a map

The smurf:makemap command allows a mask to be specified that defines background areas on the sky. These background areas are forced to zero on all but the last iteration in order to suppress spurious large-scale structures. The user may supply an external NDF to be used as the mask, or alternatively makemap can be left to generate its own mask based (for instance) on the evolving SNR at each pixel. Either way, it is often useful to visualise the mask that was used to create a given map. This can be done using KAPPA commands as follows (assuming the map created by makemap is in file "map.sdf"):

% kappa
% lutgrey
% display map
% contour clear=no mode=free heights=0.5 comp=quality map style='colour=red'


This will display the final map as a greyscale image with the mask outlined in red. Other properties of the mask outline (colour, line thickness, etc.) can be specified by including other options in the "style" parameter value when running contour, as in the example below.
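
For instance (the style attributes follow the standard AST plotting syntax; the values here are illustrative):

% contour clear=no mode=free heights=0.5 comp=quality map \
          style='colour=blue,width=3'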


2012-05-31

Preventing self-masks from diverging

Masks can be used in various ways in MAKEMAP. For instance, the dimmconfig_bright_extended.lis file specifies:

ast.zero_snr = 5

which causes the areas of the map that have a signal-to-noise ratio of less than 5 to be masked on each iteration. This is a dynamic mask, in that a new and potentially different mask is generated and used on each iteration. However, the added flexibility introduced by allowing such a mask to change on each iteration can result in eventual divergence of the iterative map-making algorithm. To prevent this, a set of new configuration parameters has been added that result in the mask being frozen after a specified number of iterations. These parameters are AST.ZERO_FREEZE, COM.ZERO_FREEZE and FLT.ZERO_FREEZE. Each of these parameters may be set to a positive integer value, indicating that the corresponding mask should be frozen after that many iterations. They all default to zero, which reproduces the previous behaviour (i.e. the mask is never frozen).
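
For example, a config that keeps the SNR-based AST mask dynamic for the first ten iterations and then freezes it might contain (the iteration count is illustrative):

ast.zero_snr = 5
ast.zero_freeze = 10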

2012-05-25

Better core utilisation

Over the past couple of weeks I've modified several parts of the makemap command to split up their calculations between all available threads rather than doing them all in a single thread. This means that makemap should now run significantly faster on multi-core machines. The speed-up will be greater for larger data sets than for smaller ones. As an example, producing a map using a single subarray from a 40 minute observation took 11 mins 49 secs with the new scheme on my 12 core machine, and achieved an average CPU load of 791%. With the old scheme, the same map took 19 mins 14 secs with an average CPU load of 461%.

2012-05-17

Changes to the common-mode estimation

For the impatient: the new improved algorithm described below is enabled by default and requires no changes to scripts or configuration files. However, the old algorithm is still available, and can be used by setting COM.OLDALG=1 in the configuration file.

For extended sources, MAKEMAP usually uses the mean normalised change in map pixel values between iterations as a measure of convergence. This value drops rapidly over the first few iterations, and eventually levels out, typically somewhere between 0.01 and 0.05. However, if MAKEMAP is allowed to continue iterating beyond this point, the mean normalised change between iterations often starts to increase and decrease by large amounts in an erratic manner. Examining the maps shows that these changes reflect the creation of nasty artifacts in the map, which often take the form of individual bolometer tracks. As an example, Dave Nutter saw the following variations in mean normalised change with iterations for one particular observation:


A relatively smooth decrease until about iteration 50 and then madness. For illustration the following shows the map after 50 iterations (left) and 200 iterations (right). Notice the visible streaks that have appeared by iteration 200.



The cause of this wild behaviour seems to be the way the common-mode signal is estimated and used. On each iteration, the common-mode value at each time slice is taken to be the mean of the bolometer values at that time slice, averaged over all remaining bolometers (i.e. bolometers that have not previously been flagged as unusable for one reason or another). Each bolometer time series is then split into blocks of 30 seconds and compared to the corresponding block of the common-mode signal. This comparison takes the form of a least-squares linear fit between the bolometer value and the common-mode value, and generates a gain, offset and correlation coefficient for each bolometer block. Bolometer blocks that look "unusual" - either because they have a low correlation coefficient (i.e. do not look like the common mode) or an unusually low or high gain - are flagged, and all the bolometer values within such blocks are rejected from all further processing.

So on the next iteration, the common mode within each block is estimated from only the subset of bolometers that passed this check on all previous iterations. With the passing of iterations, more and more bolometer blocks are rejected, often resulting in the common mode in a block being estimated from a completely different subset of bolometers to that of the neighbouring blocks. This produces discontinuities in the common-mode signal at the 30 second block boundaries, which get larger as more iterations are performed. When the common mode is subtracted from the data, these discontinuities are transferred into the residuals, which are then filtered within the FLT model. The sharp edges at the discontinuities cause the FFT filtering to introduce ringing (strong oscillations on the scale length of the filter that extend over long times). The following plot shows an example of these discontinuities in the common-mode estimate:


The horizontal axis is sample index. The red curve is the common mode at iteration 85 and the white curve is the common mode at iteration 99. Due to down-sampling, 30 seconds corresponds to 2400 samples, and discontinuities at intervals of 2400 samples are clearly visible. In addition, extensive oscillations are visible, particularly between samples 48000 and 50000.
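
For reference, the per-block comparison described above amounts to fitting the linear model

   b_i(t) \approx g_i \, c(t) + o_i

where b_i(t) is the signal from bolometer i within a 30-second block, c(t) is the common-mode estimate for that block, and g_i and o_i are the fitted gain and offset. The block is flagged as "unusual" when its correlation coefficient is low or its gain is unusually low or high.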

In order to avoid this problem, changes to the way the common-mode is estimated and used have been made. The heart of the change is that bolometer blocks that are flagged as unusual are no longer excluded from the estimation of the common-mode on the next iteration - all such flags are cleared before the new COM estimate is made. So the common mode is always based on data from all bolometers, thus avoiding the discontinuities at the block boundaries. Other more minor changes in the code include:

  1. The common-mode value at each time slice can be estimated using a sigma-clipped mean, rather than a simple mean. The new configuration parameters COM.NITER (the number of iterations) and COM.NSIGMA (the number of standard deviations at which to clip on each iteration) control this algorithm. In practice, sigma-clipping does not seem to add much, and may slow down convergence. At the moment, both of these parameters default to 3. Changing COM.NITER to 1 results in a single simple mean being calculated with no clipping. This may become the default in the near future.
  2. No attempts are now made to refine the common mode. Previously, once the original COM signal was estimated and unusual bolo-blocks were flagged, a refined COM signal was formed by excluding the flagged bolo-blocks. Further unusual bolo-blocks were then flagged by comparing them with the refined COM signal. This refinement process was repeated until no further bolo-blocks were rejected. In the new code, no attempts are made to refine the original COM signal, making the new algorithm significantly faster than the old one.
Note, the new algorithm still uses COM.GAIN_BOX to define the size of the blocks in which the gain and offset of each bolometer are estimated. Also, "unusual" blocks are still flagged in the same way and excluded from the subsequent FLT model and map. The sketch below shows the relevant config settings.
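
To experiment with these options (the values shown are the current defaults mentioned above, plus the switch back to the old algorithm):

com.oldalg = 0     # set to 1 to revert to the old algorithm
com.niter = 3      # sigma-clipping iterations (1 = plain mean, no clipping)
com.nsigma = 3     # clipping threshold in standard deviations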

The new algorithm seems effective at overcoming the problem of divergence illustrated at the start of this post. For instance, with the new code the variation of mean normalised change between iterations for the same data looks as follows:


 Note that:

  • The wild variations above iteration 80 have gone.
  • The curve drops somewhat faster at the start (i.e. fewer iterations are needed to achieve a given maptol).
  • Each estimation of the common-mode signal is faster than before due to the lack of the refining process.
  • Far fewer samples are rejected from the final map (2.38% of the data is now flagged by the COM model, as opposed to 17.37% previously).
  • The resulting maps at iteration 50 and 200 are shown below. There is very little difference between them, and they both look visually like the iteration 50 map from the old algorithm shown above.

2012-05-01

Interrupting makemap using control-C

So you're sitting there waiting for a long run of makemap to finish, thinking "It's going to take ages! I asked for far more iterations than I need, but I can't restart it because it's taken so long to get to this point". As of git commit efde7077500 you can simply hit control-C and all will be well - makemap will continue running until the end of the next iteration. It will then create a map from the information it has, and terminate normally. Note, it needs to perform one more whole iteration before creating the map in order to get proper pixel values for the areas masked by ast.zero_mask. One thing to beware of though - if the input data is split into more than one chunk, subsequent chunks that have not yet been started will not be included in the map.

If you want to abort NOW! without a map, rather than waiting for the next iteration to complete, then press control-C a second time.

Note, if you are running makemap within a shell script, then the shell may handle the control-C signal itself, leading to some potentially odd behaviour. For instance, the script may appear to terminate immediately, but in fact may leave the makemap process running in the background until the next iteration has completed. Most shells have ways of controlling what happens when an interrupt signal is detected. For instance, the (t)csh has the "onintr" command; a sketch follows.
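
A hypothetical (t)csh sketch, untested and dependent on your shell's exact interrupt semantics (the file names are illustrative): the idea is to stop the script from dying on the first control-C, since makemap, as the foreground process, receives the same signal and will finish its current iteration and write the map.

#!/bin/csh
# Jump to the label on control-C instead of killing the script;
# makemap has already received the interrupt and will shut down gracefully.
onintr finish
makemap ^names.lis map method=iterate \
        config='"^$STARLINK_DIR/share/smurf/dimmconfig.lis"'
finish:
echo "makemap has written its map (possibly after an interrupt)"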

2012-04-17

Masking of FLT and COM models

In previous blogs (e.g. Suppressing large-scale noise with zero-masks), Ed has described how masking the AST model - the astronomical signal - between iterations can reduce the amplitude of low frequency structures in the final map. Over the last month or two, I have extended the uses of masks to allow the COM and FLT models to be masked in a similar way to the AST model.

COM masking: The common-mode signal is the average of all available bolometer time streams. Once found, a least-squares linear fit is performed between the average and each individual bolometer time stream. This fit generates a scale factor and offset, together with a correlation coefficient. Bolometers that have a low correlation with the common mode are rejected as bad. At any one time, the astronomical source is seen only by a subset of the bolometers, and so will be largely absent from the common-mode estimate. This means that bolometers that see the source will have a lower correlation to the common mode than those that do not, which can result in bolometers being rejected unnecessarily. To avoid this, it is now possible to exclude samples that see the source from each time stream when finding the common-mode signal, and also when performing the least-squares linear fit. This can produce a better estimate of the common mode, unpolluted by bright source samples, and also better estimates of the correlation coefficients, resulting in fewer bolometers being rejected.

The masking of the COM model is controlled by the new configuration parameters COM.ZERO_MASK, COM.ZERO_CIRCLE, COM.ZERO_LOWHITS and COM.ZERO_SNR. These are all analogous to the corresponding set of AST.xxx parameters used for masking the AST model. As an example, a configuration of:

^/star/share/smurf/dimmconfig_bright_compact.lis
com.zero_circle = 0.02

was used to produce an 850 um map of Mars from observation 40 on 20120202. This excludes samples from the COM model that are within a circle of radius 0.02 degrees, centred on the source. The resulting final map contained 6744407 samples (or 2877.31 bolos). Without the COM masking, the final map contained 4676093 samples (1994.92 bolos).

Sadly, not all observations show such an improvement using COM masking. Indeed, some show little or no improvement, and some actually show a decrease in the number of samples in the final map. For this reason, COM masking is not switched on by default in any of the dimmconfig files distributed with SMURF.


FLT masking: A similar set of configuration parameters - FLT.ZERO_MASK, FLT.ZERO_CIRCLE, FLT.ZERO_LOWHITS, etc. - is now available to control masking of the FLT model. The FLT model holds the low-frequency background for each bolometer time stream, and is estimated by smoothing the time stream using a filter defined by FLT.FILT_EDGE_LARGESCALE, etc. On the first iteration, any bright astronomical source in the data will result in the smoothed background being over-estimated in the vicinity of the source. This, in turn, produces a negative bowl around the source when the background is removed. It can take many iterations for the map-maker to reduce this bowling to acceptable levels. Using the new FLT masking parameters allows the source to be omitted before estimating the background in each bolometer time stream (the source samples are replaced by values interpolated from the neighbouring data, with added noise). Masking the FLT model in this way on the first two iterations can have a dramatic effect on the number of iterations needed to reach convergence. For compact sources, it can typically reduce the number of iterations required to 50% of the original number, and sometimes fewer.

For this reason, FLT masking is switched on by default in the dimmconfig_bright_compact.lis file, using the settings:

flt.zero_circle = 0.0167
flt.zero_niter = 2

The effect of FLT masking on extended sources is marginal, and so it is not switched on by default in dimmconfig_bright_extended.lis.


2011-11-16

Supplying an externally-generated zero mask

In an earlier post I described zero-masking, in which the map is forced to a value of zero in certain locations to help with convergence (i.e. to reduce saddles, negative bowls around sources, etc.). The two methods that have been available until now are: defining circular regions beyond which the map is set to zero (good for known compact sources); and using a threshold on the local SNR (an automatic way to identify source and blank sky).

Since SMURF still handles non-continuous pieces of data independently (i.e. multiple maps of the same field), the SNR of an individual map may not be sufficiently high to produce a good mask of source / blank sky, whereas the combination of all the data sets does. For those cases, and other examples where you may know a priori where to expect emission, a new facility has been added so that the user can define a zero mask externally and then provide it to SMURF.

In this illustrative example, I use one of the OMC-1 maps from 2010 with the s4a array: obs 18, 22, 23 on 20100216 and obs 27, 29, 33, 34 on 20100218. First, we make a map using the bright_extended configuration (which uses SNR-based zero masking; I also use a lot of down-sampling and big map pixels to speed things up for this example):

makemap ^names_450.txt map_450_be pixsize=8 \
method=iterate \
config='"^/stardev/share/smurf/dimmconfig_bright_extended.lis"'

The resulting image looks reasonably good, but there are obvious negative bowls around the bright extended sources. However, the SNR of the final image is quite good, so I create a new mask by thresholding the SNR map (4-sigma), and then smoothing it with a 5-arcsec FWHM Gaussian to spread it out slightly from where the sources are:

makesnr map_450_be map_450_be_snr

thresh map_450_be_snr mask_450 thrlo=4 newlo=0 \
thrhi=4 newhi=1


gausmooth mask_450 mask_450_sm fwhm=5


thresh mask_450_sm zero_mask_450 thrlo=0.05 newlo=bad \
thrhi=0.05 newhi=1


I can then feed this new "zero_mask" back into makemap and re-run the data:

makemap ^names_450.txt map_450_zm pixsize=8 \
method=iterate \
config='"^/stardev/share/smurf/dimmconfig_bright_extended.lis,itermap=1,ast.zero_mask=1,ast.zero_snr=0"' \
ref=zero_mask_450



The key things to note are: (i) I turned off the SNR thresholding (ast.zero_snr) that is the default for the bright_extended configuration, and (ii) the new zero mask is provided as the REF image (also ensuring that the map and mask are on the same pixel grid), and to use it we set ast.zero_mask=1.

In the following images I show (from top-left): (i) an arm sticking out towards the north of OMC-1 using the default bright_extended reduction; (ii) the same region after using the updated zero-mask; (iii) the first map subtracted from the second map to illustrate the improved response to large-scale structure; (iv) the original combined zero mask based on the SNR of map pixels in individual scans; (v) the new zero mask based on thresholding/smoothing the combined SNR map:

2010-08-05

SMURF Update (August 5th 2010)

It's been a couple of months since the last SMURF news entry so I thought I'd bring people up to date.

  • All the configuration files for the iterative map-maker have been tweaked and some have been renamed. For example the '_faint' config is now called "dimmconfig_blank_field.lis". We also have a new config file for bright calibrators called "dimmconfig_bright_compact.lis".
  • The SMO (time series smoothing) model has been improved. A few bugs have been fixed and it's been parallelized and so is much faster. SMO is not enabled in any of the default configuration files.
  • The size of the apodization and padding for the FFTs can now be calculated dynamically based on the requested filter. This is now the default behaviour. A new Fourier filter has also been written that does not require apodization (which may be important for very short maps) but is still being tested.
  • Quality handling has been revamped inside the map-maker to allow us to report more than 8 different types of flagging. The report at the end of each iteration is now more compact and if you use the SHOWQUAL command to look at exported models you may see that the bit numbers assigned to a particular quality are no longer fixed. Additionally if more than 8 flags are used the exported model will combine related flags (for example PAD and APOD will be merged into ENDS).
  • Very noisy bolometers will now be discarded before the iterative map-maker starts. This can help with convergence. See the "noiseclip" config parameter to adjust this.
  • The map-maker now compares flatfield ramps taken at the start and end of each observation and disables bolometers whose calibration has varied too much. This will not help data taken prior to 20100223, when flatfield ramps were not available.
  • The step correction algorithm continues to be improved.
  • SC2CLEAN will now report quality statistics in verbose mode.
  • SC2FFT can now be given a bad bolometer mask.
The cookbook has also been updated and can be read online.

2010-06-04

SMURF update (June 4th 2010)

We've been making incremental changes to SMURF over the last few weeks so here are a few highlights that have not been covered in other blog entries.

  • We've updated the extinction correction parameters that scale from CSO tau to the relevant filter to use the numbers just calculated by Jessica Dempsey. You can override these values by setting ext.taurelation.filtname in your map-maker config files to the two coefficients "(a,b)" that you want to use (where filtname is the name of the filter); see the sketch after this list. The defaults are listed in $SMURF_DIR/smurf_extinction.def. We have also added a slight calibration tweak to WVM-derived values to correct them to the CSO scale.
  • We have changed the calling interface for the sc2clean command so that it now takes a map-maker config file as input rather than individual command line parameters. This should make it easier to compare what the map-maker is doing to what sc2clean is doing.
  • Two new (experimental) models have been added to the iterative map-maker. They are slow and they are experimental:
    • SMO will do a boxcar smooth to each time series to calculate the low frequency variation. This might be more reliable than using the FLT model if the only aim is to remove low frequencies.
    • PLN will fit and remove a plane from each time slice.
  • There have been some enhancements to the step finding code so that it not only finds the steps more accurately but can also find correlated steps that occur at the same place for all bolometers.
  • The map-maker has a new ast.zero_notlast parameter that can be set to true in conjunction with ast.zero_lowhits to prevent the map from being forced to zero on the final iteration.
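
For instance, a hypothetical override of the 850 um tau relation would look like this in a config file (the coefficients below are placeholders, not the measured values; the real defaults live in $SMURF_DIR/smurf_extinction.def):

ext.taurelation.850 = (1.0,0.0)   # placeholder coefficients (a,b)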
All these changes are available in the 64-bit linux stardev rsync location and also in the most recent Mac Snow Leopard tar ball.

2010-05-07

Typos in config files

This week I've made a couple of minor changes to the way the iterative map-maker handles configuration files. On too many occasions I've made a change to a config parameter and seen no effect because of a typo or else I've left a parameter unspecified and had no idea what that really meant.

We've now fixed both these problems:

  • Defaults are now read from a file called $SMURF_DIR/smurf_makemap.def
  • When user-supplied parameters are read in they are merged with the defaults and if there is any keyword used that is not present in the default file you will get an error immediately.
  • When the history is written out (use KAPPA HISLIST to see it) it contains the values that were actually used rather than just the values that were supplied by you. These are much more useful when trying to work out exactly what is different between two reductions.
I did have to change the way config files work in one small way. Previously, if you wanted your config file to include overrides based on the wavelength, you would have to say "flt_850.something" or "flt_450.something". To allow the above changes to work we've had to reorganize that slightly, so you now say "450.flt.something", and you can now override all the normal parameters in this way for a particular wavelength.
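
For example, a config could set different filter scales at the two wavelengths using the new syntax (a sketch; the values here are illustrative):

450.flt.filt_edge_largescale = 200   # used only for 450 um reductions
850.flt.filt_edge_largescale = 480   # used only for 850 um reductions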

2010-02-25

SCUBA-2 DR status report

We have been making some good progress with the data reduction software. Since hawaiki there have been updates to the flatfielding and an improvement in the code that detects working bolometers by comparing the common-mode signal. The down side of all this work is that people getting their data from shared-risks observing need to be using the cutting-edge version of SMURF and not the hawaiki version. The switch of flatfielding technique actually means that you will not get a reasonable map at all using hawaiki SMURF.

To get the latest version of SMURF there are a number of options:

  • Use rsync to get a version for CentOS 5.4 (aka Scientific Linux 5.4 or RedHat Enterprise Linux 5.4) for either 32- or 64-bit linux. We attempt to keep these builds up to date. To see just how new your build is, run "$STARLINK_DIR/Perl/bin/starversion" and look at the date.
  • I can periodically make available 64-bit OSX Snow Leopard starlink releases. Check this directory periodically.
  • Download the source code and build it yourself (instructions on the Starlink home page)
As of 20100223 we have modified data acquisition to include a fast flatfield "ramp" at the start and end of every observation. These provide a direct calibration and should be more accurate than doing standalone flatfield observations as we have done previously (and continue to do). SMURF cannot support these ramps at the moment, but I am working on the issue. When the new code is working we will re-process the data from 20100223 at CADC.

2008-02-01

Creating artificial time series from a sky cube

I've just added a new command to smurf called UNMAKECUBE. It takes an (ra,dec,spec) sky cube and generates artificial time series by resampling the sky cube at the detector positions in a set of existing reference time series NDFs. It's a sort of inverse to MAKECUBE. It should be useful for investigating baselines, and for iterative map making. I'm currently playing around with it, using data from Christine Wilson.
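
A hypothetical invocation (the parameter names are assumed by analogy with MAKECUBE and may not match the real interface; file names are illustrative):

% unmakecube in=skycube ref=^raw_files.lis out=^sim_files.lis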