2010-06-04

SMURF update (June 4th 2010)

We've been making incremental changes to SMURF over the last few weeks, so here are a few highlights that have not been covered in other blog entries.

  • We've updated the extinction-correction parameters that scale from the CSO tau to the relevant filter, adopting the new values calculated by Jessica Dempsey. You can override these values by setting ext.taurelation.filtname in your map-maker config files to the two coefficients "(a,b)" that you want to use (where filtname is the name of the filter). The defaults are listed in $SMURF_DIR/smurf_extinction.def. We have also added a small calibration tweak to WVM-derived values to correct them to the CSO scale.
  • We have changed the calling interface for the sc2clean command so that it now takes a map-maker config file as input rather than individual command line parameters. This should make it easier to compare what the map-maker is doing to what sc2clean is doing.
  • Two new experimental models have been added to the iterative map-maker. Both are currently slow:
    • SMO will do a boxcar smooth to each time series to calculate the low frequency variation. This might be more reliable than using the FLT model if the only aim is to remove low frequencies.
    • PLN will fit and remove a plane from each time slice.
  • There have been some enhancements to the step finding code so that it not only finds the steps more accurately but can also find correlated steps that occur at the same place for all bolometers.
  • The map-maker has a new ast.zero_notlast parameter which, when set to true in conjunction with ast.zero_lowhits, prevents the map from being forced to zero on the final iteration.
All these changes are available in the 64-bit linux stardev rsync location and also in the most recent Mac Snow Leopard tar ball.
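For example, to override the 850μm tau relation in a makemap config file, you could add a line like the following. This is a sketch: the coefficients shown are the new 850μm values from the extinction post below, and I am assuming SMURF's tau_filter = a * (tau_CSO + b) convention, which is why b carries a negative sign here; check $SMURF_DIR/smurf_extinction.def for the exact form.

```
ext.taurelation.850 = (5.36,-0.006)
```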

Extinction correction factors for SCUBA-2

Analysis of the SCUBA-2 skydips and heater-tracking data from S2SRO has allowed the opacity factors for the SCUBA-2 450μm and 850μm filters to be determined.

Some background: the Archibald et al. (2002) paper describes how the CSO (225 GHz) tau to SCUBA opacity relations were determined for the different SCUBA filters. It was assumed for commissioning and S2SRO that the new SCUBA-2 filters were sufficiently similar to the wide-band SCUBA filters that these relations could be used for extinction correction. For reference, the SCUBA corrections were:

Tau(450μm) = 26.2 * (Tau(225GHz) - 0.014)

and

Tau(850μm) = 4.02 * (Tau(225GHz) - 0.001)

The JCMT water vapour radiometer (WVM) is now calibrated to provide a higher-frequency opacity value which is scaled to the CSO (225 GHz) tau. The WVM (not the CSO 225 GHz tipper) data were used for this analysis.

The new filter opacities as determined by the skydip data are as follows:


Tau(450μm) = 19.04 * (Tau(225GHz) - 0.018)

and

Tau(850μm) = 5.36 * (Tau(225GHz) - 0.006)
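To see what the change amounts to in practice, here is a minimal sketch applying the old and new relations at a representative 225 GHz tau (the tau value of 0.08 is purely illustrative):

```python
def tau_filter(tau_225, a, b):
    """Filter opacity from the CSO 225 GHz tau: tau = a * (tau_225 - b)."""
    return a * (tau_225 - b)

tau_225 = 0.08  # illustrative 225 GHz zenith tau

# 450μm: old (SCUBA) vs new (SCUBA-2) relation
old_450 = tau_filter(tau_225, 26.2, 0.014)   # 1.7292
new_450 = tau_filter(tau_225, 19.04, 0.018)  # 1.18048

# 850μm: old vs new
old_850 = tau_filter(tau_225, 4.02, 0.001)   # 0.31758
new_850 = tau_filter(tau_225, 5.36, 0.006)   # 0.39664
```

Note that the new coefficients lower the 450μm opacity relative to the old SCUBA relation but raise it at 850μm, so the sense of the correction differs between the two filters.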


A follow-up post will show the difference that applying the new corrections can make to data combined from multiple observations taken in differing extinction conditions.

It is worth noting that if an individual science map and the corresponding calibrator observation have already been reduced with the old factors (and provided your source and calibrator are at about the same airmass and the tau did not change appreciably), any errors in the extinction correction should cancel out in the calibration.


Applying FCFs to calibrate your data

Calculating SCUBA-2 Flux Conversion Factors (FCFs)

The current SCUBA-2 reduction software (the pipeline and the PICARD recipes) produces three separate FCF values. Details of the PICARD recipes can be found on Andy's PICARD page.

For calibration from point sources, the FCFs and NEFDs have been calculated as follows:
  1. The PICARD recipe SCUBA2_FCFNEFD takes the reduced map, crops it and removes the background (the surface-fitting parameters are changeable in the parameter file).
  2. It then runs the KAPPA beamfit task on the specified point source. Calibrators such as CRL618, HLTAU, Uranus and Mars are already hard-coded into the recipe. If your source is not one of these, you can add a line to your parameter file with the known flux, e.g. FLUX_450 = 0.050 or FLUX_850 = 0.005. Beamfit calculates the peak flux, the integrated flux over a requested aperture (30 arcsec radius by default), the FWHM, etc.
  3. It then uses the above to calculate three FCF terms described below.
FCF (arcsec)

FCF(arcsec) = Total known flux (Jy) / [Measured integrated flux (pW) * pixsize²]

which produces an FCF in Jy/arcsec²/pW.

This FCF(arcsec) is the number to multiply your map by when you wish to use the calibrated map to do aperture photometry.  

FCF(beam)  

FCF(beam) = Peak flux (Jy/beam) / [Measured peak flux (pW)] 

producing an FCF in units of Jy/beam/pW.

The Measured peak flux here is derived from the Gaussian fit applied by beamfit. The peak value is susceptible to pointing and focus errors, and we have found this number to be somewhat unreliable, particularly at 450μm. This FCF(beam) is the number to multiply your map by when you wish to measure absolute peak fluxes of discrete sources.

To overcome the problems caused by these peak errors, a third FCF method has been derived: the FCF(arcsec) is combined with a model Gaussian beam whose FWHM is equal to that of the JCMT beam at each wavelength. The result is an 'equivalent peak' FCF computed from the integrated value, assuming the point source is a perfect Gaussian.

FCF (beamequiv)  

FCF(beamequiv) = Total flux (Jy) x 1.133 x FWHM² / [Measured integrated flux (pW) * pixsize²]

or more conveniently:

FCF(beamequiv) = FCF(arcsec) x 1.133 x FWHM²

where the FWHM is 7.5'' and 14.0'' for 450μm and 850μm respectively. This produces an FCF in units of Jy/beam/pW.

FCF(beamequiv) and FCF(beam) should agree with each other; however, this is often not the case when the source is distorted, for the reasons mentioned above. FCF(beamequiv) has been found to give more consistent results, and it is advisable to use this value, in the same way you would use FCF(beam), whenever possible.
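As a worked example, the three definitions above can be written out directly (all measured values below are invented for illustration; real numbers come from beamfit measurements of a calibrator):

```python
def fcf_arcsec(total_flux_jy, integrated_pw, pixsize_arcsec):
    """FCF in Jy/arcsec^2/pW: known flux over integrated signal times pixel area."""
    return total_flux_jy / (integrated_pw * pixsize_arcsec ** 2)

def fcf_beam(peak_flux_jybm, measured_peak_pw):
    """FCF in Jy/beam/pW from the fitted peak (sensitive to pointing/focus errors)."""
    return peak_flux_jybm / measured_peak_pw

def fcf_beamequiv(fcf_asec, fwhm_arcsec):
    """FCF in Jy/beam/pW, assuming a Gaussian beam of area 1.133 * FWHM^2."""
    return fcf_asec * 1.133 * fwhm_arcsec ** 2

# An 850um calibrator with a known flux of 2.0 Jy, measured at 0.005 pW
# integrated over the aperture, on a map with 4-arcsec pixels:
fa = fcf_arcsec(2.0, 0.005, 4.0)   # 25.0 Jy/arcsec^2/pW
fb = fcf_beamequiv(fa, 14.0)       # ~5551.7 Jy/beam/pW
```

The factor 1.133 is the usual Gaussian beam-area coefficient (pi / (4 ln 2)), so fcf_beamequiv simply converts the per-arcsec² FCF into an equivalent per-beam FCF.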

Methodology for calibrating your data: 

So you have a reduced map for a given date. Each night of S2SRO should have at least one calibrator observation, if not more, taken during the night. A website listing these is currently in the works and I'll add it to the blog when it is completed; for now it is relatively easy to search the night's observations for them. The primary calibrators were Uranus and Mars, and the secondary calibrators are listed on the SCUBA secondary calibrators page. In addition to these, Arp 220, V883 Ori, Alpha Ori and TW Hydrae were tested as calibrators; their flux properties were investigated with SCUBA (see SCUBA potential calibrators).

  1. Run your selected calibration observation through the map-maker using the same dimmconfig as your science data.
  2. Use PICARD's recipe SCUBA2_FCFNEFD on your reduced calibration observation. This will print information to the screen and write a logfile, log.fcfnefd, containing the three FCFs mentioned above and an NEFD for the observation. By default PICARD uses fixed FCFs to calculate the NEFD (250 at 450μm; 750 at 850μm). If you wish to get an NEFD using the FCF calculated from the calibrator, add USEFCF = 1 to your parameter file.
  3. Take your selected FCF and multiply your map by it using KAPPA cmult.
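As an example, a minimal recipe-parameter file for SCUBA2_FCFNEFD might look like the following (the flux line is only needed for a non-standard calibrator, and the value here is a placeholder):

```
[SCUBA2_FCFNEFD]
FLUX_850 = 0.005
USEFCF = 1
```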


Things become slightly more complicated if you wish to use PICARD's matched filter recipe to enhance faint point sources. Again see Andy's PICARD page (link above) for details on the matched filter recipe. If you are normalising the matched filter peak, you will need to run this filter over your calibrator image with the same parameters you used for your science map.

Note: you cannot, at this point, use the FCF(beamequiv) number to calibrate your match-filtered data; after filtering it will usually be simply wrong. The FCF(beam) value, however, should be preserved by this method.

The peak is preserved by this method, so the two numbers, the pre-match-filter FCF(beamequiv) and the post-match-filter FCF(beam), should be nearly the same, and either value can be used to calibrate your match-filtered science map.

It is also worth noting (though perhaps obvious) that after running the match-filter script in peak-normalisation mode, only the peak flux values (and not the integrated sum over an aperture) will be correct. The reverse is true if using sum-normalisation.
 

2010-05-28

Dark Squids and Magnetic Field Pickup

Over the last couple of weeks I have been experimenting with some data that seem to exhibit the effects of magnetic field pickup in the detectors. The SCUBA-2 readout system uses arrays of SQUIDs to amplify the small changing magnetic fields caused by changing currents in the detectors (themselves produced by changing optical and thermal loads on the bolometers). Although there is substantial shielding, some external changing magnetic fields may still leak into the raw data, such as those from motion of the telescope through the Earth's magnetic field, or fields produced by motors in the dome etc. The readouts are multiplexed, and the signals from columns of detectors pass through a common amplifier chain. For this reason, each column has a bare SQUID with no bolometer, a so-called "dark SQUID" (DKS henceforth), to track this non-thermal/optical noise term. The time series for these data are stored in raw SCUBA-2 data files in the ".more.scuba2.dksquid" extension. They can be plotted, for example, using KAPPA linplot (this is for the DKS in column 1 of the following raw 30 s data file):

$linplot 's4a20100223_00061_0002.more.scuba2.dksquid(1,)'



Just like regular bolometer data, DKS data can have spikes (as in this example). Generally speaking the DKS haven't shown a lot of structure, so we weren't concentrating on using them to clean the bolometer data.

However, on some of the wider-area data sets the DKS signals can be substantially larger, and seem to introduce a significant noise term. One additional feature of the magnetic field pickup signal that makes it easy to distinguish from other noise sources is that its sign is essentially random, and independent of the detector response to optical/thermal loads, i.e. the signal could be rising in one bolometer and falling in the next. The following movie shows a data cube from the s8d array after the common-mode signal has been removed. The residual signal oscillates in time, with two clear detector populations that are out of phase with each other:



These data were taken as part of a 0.5 deg diameter scan, and comparing the dark-squid signals with the change in telescope azimuth shows that they are highly correlated. Combined with the arbitrary sign of the residual noise, the culprit is almost certainly magnetic field pickup.

A dark squid signal (red dots), and the signal smoothed with 200 sample boxcar (white line) over 100s:


Azimuth offset from the map centre over the same period of time:


The next plot shows the dark squids for each working column over ~25 min. with arbitrary vertical offsets for clarity. Note that while similar, each column has a slightly different response:


The iterative map-maker has a partially-implemented model, called "DKS", for iteratively fitting and removing the DKS signals from working detectors along each column. The relevant settings for the configuration file are:

modelorder = (dks,com,gai,ext,ast,noi)
dks.boxcar = 100

Here we have added DKS cleaning as the first step in the iterative model fitting (note also that we have dropped FLT for now; more on that below), and the second option smooths the DKS data with a 100-sample boxcar prior to fitting and removing it from each bolometer.
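In outline, the DKS model does something like the following pure-Python sketch (this is an illustration of the idea, not the actual SMURF implementation): smooth the dark-squid time series with a boxcar, then least-squares fit a gain and offset of that template to each bolometer in the column and subtract the fit.

```python
def boxcar_smooth(data, width):
    """Simple boxcar smooth; the window is truncated at the array ends."""
    n = len(data)
    out = []
    for i in range(n):
        lo = max(0, i - width // 2)
        hi = min(n, i + width // 2 + 1)
        window = data[lo:hi]
        out.append(sum(window) / len(window))
    return out

def fit_and_remove(bolo, template):
    """Least-squares fit bolo ~ gain * template + offset, return the residual."""
    n = len(bolo)
    sx = sum(template)
    sy = sum(bolo)
    sxx = sum(t * t for t in template)
    sxy = sum(t * b for t, b in zip(template, bolo))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return [b - (gain * t + offset) for b, t in zip(bolo, template)]
```

With dks.boxcar = 100, the template would be the 100-sample-smoothed dark-squid signal, fitted independently to each working bolometer along the column.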

The next two images show a map solution using only COM to remove low-frequency noise, followed by a solution that uses both DKS and COM:





While not perfect, the DKS removal has clearly removed much of the noise that caused streaking in the first image. Remember, there is no additional high-pass filtering, as FLT has been turned off in this example.

If "dks" is specified in "exportndf", the fitted DKS model will be written to a file with a suffix "_dks.sdf". However, this file is not particularly easy to view and make any sense of because the dark squids and the fit coefficients are all bundled together in one large array. For this reason, we've added a new SMURF task called "sc2expandmodel" that will evaluate the model and produce a full 3d data cube with the same dimensions/units as the residual data, e.g.

$sc2expandmodel s4a20100228_00016_0002_con_dks expanded_dks

will produce an easily-viewable file called "expanded_dks". All detectors along a column will have the same shape, but the scale/offset will be different for each detector. Examining this file and comparing to the "_res" output from makemap is useful for getting an idea of how strong this magnetic field pickup is for a given data set.

All of this code is still experimental. It currently has several problems:
  • boundaries with padding/apodization will introduce steps, so padstart/padend/apod should probably be set to zero, meaning that it is not optimal for combining with FLT at this stage.
  • the DKS are not pre-processed in the same way as the bolometer data to remove spikes / steps etc. This capability will be added shortly.
  • Some DKS are clearly dead (e.g. DKS 1 in the multi-DKS plot above). Currently we don't have an automatic system for flagging/removing them. Note also that if a column's DKS is bad, it will be impossible to DKS-clean the working detectors along that column (so we need to decide whether to simply ignore those detectors when making a map).
Once we have worked through some of these issues, we will update the default dimmconfig files in SMURF accordingly.

2010-05-18

SCUBA-2 filter information

At our last SCUBA-2/SRO telecon, people asked to see the measured profiles for the SCUBA-2 filters.

Dan has put up some plots here.

2010-05-17

PICARD web page

I've created a new static location for all things PICARD:
http://www.oracdr.org/oracdr/PICARD

The page includes an introduction to PICARD, a list of all available recipes (with more detailed documentation) and a few hints, tips and potential gotchas. I'll keep this page up-to-date, adding new blog entries and/or sending messages to the scuba2dr mailing list as necessary.

2010-05-11

Post-processing SCUBA-2 data with PICARD

Processing raw SCUBA-2 data is done with SMURF or the pipeline (ORAC-DR). What happens after that depends on the user and their level of familiarity with particular software packages. Fortunately, the SCUBA-2 software team is here to help, and has come up with a series of standardized tools for performing a number of basic post-processing tasks.

Introduction to PICARD

Our tool of choice is PICARD, which makes use of the existing ORAC-DR infrastructure as well as our experience writing primitives and recipes for the SCUBA-2 pipeline. PICARD is run from the command line as follows (I'll use % as the prompt):

% picard [options] [RECIPE_NAME] [list of files to process]

For example,

% picard -log sf -recpars mypar.lis CROP_JCMT_IMAGES myfiles*.sdf

The most commonly used options are -log and -recpars (the full list of available options can be seen by running picard -h). Both of these options take additional arguments.

The "-log" option controls where the messages from PICARD are printed: "-log sf" will write messages to the terminal window and to a file called .picard_PID.log (where PID is the process ID for picard) in the output directory. To avoid creating the .picard_PID.log files, just specify "-log s".

The "-recpars" option allows the user to pass in a text file containing parameters which can be used in the given recipe. The permitted parameters are listed with the various recipes below. The format of this text file is a list of `parameter = value' entries, with the recipe name given in square brackets:

[RECIPE_NAME]
PARAM1 = VALUE1
PARAM2 = VALUE2

PICARD writes its output files to the current directory (unless the environment variable ORAC_DATA_OUT is defined in which case that location will be used).

There are currently four recipes which may be of interest:
  • MOSAIC_JCMT_IMAGES
  • CROP_JCMT_IMAGES
  • REMOVE_BACKGROUND
  • SCUBA2_MATCHED_FILTER
These recipes and their parameters are described in more detail below. More recipes will be added as the need arises and as we gain more experience in analyzing SCUBA-2 data. Interested users should update their Starlink installations to get access to these recipes.

MOSAIC_JCMT_IMAGES

Coadd the given files into a single map, taking into account the EXP_TIME and WEIGHTS NDF components. The images are combined using variance weighting and the output variance is derived from the input variances. Currently the recipe uses the KAPPA wcsmosaic task for coadding the images.

The same pixel-spreading method (and any associated parameters) is used for the data and the EXP_TIME and WEIGHTS.

Creates a single output file based on the name of the last file in the list, and with a suffix "_mos" (e.g. mylastfile_mos.sdf).

Available recipe parameters:
[MOSAIC_JCMT_IMAGES]
WCSMOSAIC_METHOD = wcsmosaic pixel-spreading method: see wcsmosaic documentation for available options (default is "nearest")
WCSMOSAIC_PARAMS = additional parameters which may be required for the chosen method

CROP_JCMT_IMAGES

Crop images to the map size in the data header (as specified in the Observing Tool), though this size can be overridden using the recipe parameters below.

Creates an output file for each input file, with the suffix "_crop".

Available recipe parameters:
[CROP_JCMT_IMAGES]
MAP_WIDTH = map width in arcsec
MAP_HEIGHT = map height in arcsec

REMOVE_BACKGROUND

Fit and remove large-scale background variations from images using either KAPPA fitsurface or CUPID findback. See the Starlink documentation on both tasks for more information on the parameters shown below. The option exists to mask out a circular region centred on the source before removing the background.

Be aware that the subtraction of the background will add noise proportional to the RMS deviation between the image and the background fit.

Creates an output file for each input file with the suffix "_back".

Available recipe parameters:
[REMOVE_BACKGROUND]
MASK_SOURCE = flag to mask out a circular region on the source before fitting a background (1 = mask out source; 0 = do not mask out source - the default)
APERTURE_RADIUS = radius of aperture (in arcsec) for masking out the source (default: 30 arcsec)
BACKGROUND_FITMETHOD = the method for fitting the background, either fitsurface (default) or findback
FITSURFACE_FITTYPE = fittype parameter for fitsurface: polynomial (default) or spline
FITSURFACE_FITPAR = parameters for fit, up to 2 numbers corresponding to NXPAR/NYPAR for fitsurface or KNOTS for spline fit
FINDBACK_BOX = size of box in pixels used by findback for smoothing the image
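As an illustration, a recipe-parameter file that masks a 20 arcsec radius around the source and fits a low-order polynomial surface might look like this (all values are illustrative; see the parameter descriptions above for their meanings):

```
[REMOVE_BACKGROUND]
MASK_SOURCE = 1
APERTURE_RADIUS = 20
BACKGROUND_FITMETHOD = fitsurface
FITSURFACE_FITTYPE = polynomial
FITSURFACE_FITPAR = 2,2
```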

SCUBA2_MATCHED_FILTER

Apply a matched filter to the data to improve point-source detectability. The images and PSFs are smoothed with a broad Gaussian (default 30 arcsec, but this can be varied using the recipe parameter below) and the smoothed versions are subtracted from the originals. The images are then convolved with the modified PSFs. The PSF created by the recipe is a Gaussian with FWHM equal to the JCMT beamsize at the appropriate wavelength (i.e. 7.5 or 14 arcsec).
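In outline, the recipe does something like the following 1D pure-Python sketch (the real recipe works on 2D NDFs with Starlink tasks; everything here is a simplified illustration, and the function names are mine):

```python
import math

def gaussian_kernel(fwhm, pixscale):
    """Normalised 1D Gaussian kernel sampled on the pixel grid."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0))) / pixscale
    half = int(4 * sigma)
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-half, half + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(data, kernel):
    """Direct convolution, truncated at the array edges; the kernel is
    assumed to be centred in its array."""
    half = len(kernel) // 2
    n = len(data)
    out = []
    for i in range(n):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n:
                acc += data[idx] * kv
        out.append(acc)
    return out

def matched_filter(image, psf, smooth_fwhm=30.0, pixscale=4.0):
    """Subtract a broad-Gaussian-smoothed version from both the image and
    the PSF, then convolve the image with the modified PSF."""
    broad = gaussian_kernel(smooth_fwhm, pixscale)
    image_hp = [a - b for a, b in zip(image, convolve(image, broad))]
    psf_hp = [a - b for a, b in zip(psf, convolve(psf, broad))]
    return convolve(image_hp, psf_hp)
```

The broad-Gaussian subtraction acts as a high-pass filter (removing large-scale background), and the final convolution with the modified PSF is what optimally weights point-source-shaped signal against the noise.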

Creates an output file for each input file with the suffix "_mf", and a PSF file "_psf" if the PSF was not specified as a recipe parameter.

Available recipe parameters:
[SCUBA2_MATCHED_FILTER]
PSF_MATCHFILTER = name of a PSF file (NDF format, will be used for all images)
PSF_NORM = switch to determine whether a PSF is normalized to a peak of unity ("peak" - the default) or a sum of unity ("sum")
SMOOTH_FWHM = FWHM in arcsec of Gaussian to smooth image (and PSF)