2011-11-16

Supplying an externally-generated zero mask

In an earlier post I described zero-masking, in which the map is forced to a value of zero in certain locations to help with convergence (i.e. to reduce saddles, negative bowls around sources, etc.). The two methods that have been available until now are: defining circular regions beyond which the map is set to zero (good for known compact sources); and applying a threshold to the local SNR (an automatic way to separate sources from blank sky).

Since SMURF still handles non-contiguous pieces of data independently (i.e. multiple maps of the same field), the SNR of an individual map may not be large enough to produce a good mask of source / blank sky, whereas the combination of all data sets does. For those cases, and for others where you may know a priori where to expect emission, a new facility has been added so that the user can define a zero mask externally and then provide it to SMURF.

In this illustrative example, I use one of the OMC-1 maps from 2010 with the s4a array: obs 18, 22, 23 on 20100216 and obs 27, 29, 33, 34 on 20100218. First, we make a map using the bright_extended configuration (which uses SNR-based zero masking; I also use a lot of down-sampling and big map pixels to speed things up for this example):

makemap ^names_450.txt map_450_be pixsize=8 \
method=iterate \
config='"^/stardev/share/smurf/dimmconfig_bright_extended.lis"'

The resulting image looks reasonably good, but there are obvious negative bowls around the bright extended sources. However, the SNR of the final image is quite good, so I create a new mask by thresholding the SNR map (4-sigma) and then smoothing it with a 5-arcsec FWHM Gaussian to spread the mask slightly beyond the sources:

makesnr map_450_be map_450_be_snr

thresh map_450_be_snr mask_450 thrlo=4 newlo=0 \
thrhi=4 newhi=1

gausmooth mask_450 mask_450_sm fwhm=5

thresh mask_450_sm zero_mask_450 thrlo=0.05 newlo=bad \
thrhi=0.05 newhi=1

I can then feed this new "zero_mask" back into makemap and re-run the data:

makemap ^names_450.txt map_450_zm pixsize=8 \
method=iterate \
config='"^/stardev/share/smurf/dimmconfig_bright_extended.lis,itermap=1,ast.zero_mask=1,ast.zero_snr=0"' \
ref=zero_mask_450

The key things to note are: (i) I turned off the SNR thresholding (ast.zero_snr) that is the default for the bright_extended configuration, and (ii) the new zero mask is provided as the REF image (which also ensures that the map and mask are on the same pixel grid), and we set ast.zero_mask=1 to use it.

In the following images I show (from top-left): (i) an arm sticking out towards the north of OMC-1 using the default bright_extended reduction; (ii) the same region after using the updated zero-mask; (iii) the first map subtracted from the second map to illustrate the improved response to large-scale structure; (iv) the original combined zero mask based on the SNR of map pixels in individual scans; (v) the new zero mask based on thresholding/smoothing the combined SNR map:
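For reference, the difference map in panel (iii) can be produced with KAPPA's sub task, which subtracts the second map from the first; this assumes the two maps are on the same pixel grid (the output name here is illustrative):

sub map_450_zm map_450_be map_450_diff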

2011-03-31

Automatically adding fake sources to real data, part 2

In a previous entry, I described how to use the new ORAC-DR recipe REDUCE_SCAN_FAKEMAP to automatically add fake sources to the raw time series in order to get a handle on the transfer function of the map-maker. That's all very well, but how about starting with something simpler, such as a Gaussian? Now you can add a Gaussian of a given amplitude and FWHM without first creating an image to feed in to the pipeline. As before, the Gaussian properties are defined using recipe parameters:

[REDUCE_SCAN_FAKEMAP]
FAKEMAP_FWHM = 15
FAKEMAP_SCALE = 0.5
FAKEMAP_OFFSET = dx,dy

The FWHM is in arcsec and is the only mandatory parameter (the SCALE and OFFSET are optional). In this case, the FAKEMAP_SCALE parameter is the amplitude of the Gaussian in Jy/beam. The Gaussian can be positioned anywhere in the map by giving an offset (RA, Dec in arcsec).

The pipeline can help you out further with a couple of useful defaults. If the word "beam" is given as the FWHM, the pipeline will use the default beam size appropriate for the current wavelength (14" at 850 um, 8" at 450 um). If the amplitude is not specified, then (somewhat arbitrary) defaults of 1 and 4 Jy/beam are assumed (at 850 and 450 um respectively).
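Putting those defaults together, a minimal parameter file might look like the following; the offset values are purely illustrative, and "beam" picks up the wavelength-appropriate FWHM and default amplitude described above:

[REDUCE_SCAN_FAKEMAP]
FAKEMAP_FWHM = beam
FAKEMAP_OFFSET = 0,30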

Note that the FWHM must be greater than half the pixel scale (the pipeline will set it to that value if the given value is less), but in principle there is no upper limit. In practice the upper limit is defined by the array footprint (about 2.5 arcmin for S2SRO data): anything much larger than that will be removed by makemap as part of fitting the common mode.

As usual, update your Starlink installation (with rsync) to try it out.

2011-03-18

Recipe Parameters and the Science Archive

When processing jobs are submitted to the science archive, ORAC-DR is configured to load a specialist recipe parameter file associated with the project. This can be used by PIs or surveys to tune processing to their liking. The idea is that PIs or survey teams send us their recipe parameter files and we then make sure that the processing jobs use them.

This week I've updated the recipe parameter system to allow tuning based on the object name as well as the recipe name. In some projects the objects are completely different so a single parameter file was not useful. Hopefully this change will encourage more people to send us parameter files.

To use the object-based scheme, simply append the object name to the recipe name in the file, separated by a colon. The object name must contain no spaces.

  [REDUCE_SCAN:M82]
  PAR1 = A
  PAR2 = B

If there is also a recipe-based entry, those parameters will be merged in:

  [REDUCE_SCAN]
  PAR1 = Aprime
  PAR3 = C

So in this example, if the object name is "M82", PAR1, PAR2 and PAR3 will all be set, with PAR1 taking the value "A". If the object is not "M82", only PAR1 and PAR3 will be set, and PAR1 will have the value "Aprime".
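The merge semantics can be summarised in a few lines. This is only a sketch of the behaviour described above, not the actual ORAC-DR implementation: object-specific entries override recipe-level ones, and parameters defined in only one section survive.

```python
def merge_recipe_params(recipe_params, object_params):
    """Merge recipe-level and object-specific parameter dictionaries.

    Object-specific values take precedence over recipe-level defaults.
    """
    merged = dict(recipe_params)   # start with the recipe-level defaults
    merged.update(object_params)   # object-specific values win on conflict
    return merged

# The two sections from the example parameter file above
recipe_level = {"PAR1": "Aprime", "PAR3": "C"}   # [REDUCE_SCAN]
object_level = {"PAR1": "A", "PAR2": "B"}        # [REDUCE_SCAN:M82]

# Observing M82: both sections apply and are merged
params_m82 = merge_recipe_params(recipe_level, object_level)

# Observing anything else: only the recipe-level section applies
params_other = dict(recipe_level)
```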

This change is currently on the stardev rsync server, or if you have git installed you can update your own ORAC-DR distribution.

2011-01-18

Automatically adding fake sources to real data

Further to Ed's post on adding fake sources to SCUBA-2 maps, I've added some new functionality to the pipeline which allows this process to be automated.

There is a new recipe called REDUCE_SCAN_FAKEMAP which should be specified on the command-line when running the pipeline. (Normally the recipe is chosen automatically.) This recipe behaves in exactly the same way as the standard recipe, and just passes the given `fake' map (adjusted and scaled as necessary) to the map-maker.

The recipe has a small number of parameters which can be given in addition to those already accepted by the pipeline (e.g. an alternative makemap configuration file). These parameters are all prefixed with "FAKEMAP_":

[REDUCE_SCAN_FAKEMAP]
FAKEMAP_MAP = mymap.sdf
FAKEMAP_SCALE = 0.5
FAKEMAP_REGRID = 1/0
FAKEMAP_OFFSET = dx,dy

The name of the base input map is mandatory (otherwise the pipeline exits immediately). The remaining parameters are optional. The SCALE parameter is the same as the "fakescale" parameter used by the map-maker and defaults to 1 if not given. Two further parameters control how the input map is adjusted before adding to the raw timeseries data.

The OFFSET parameter is a pair of RA,Dec offsets in arcsec which allow the input map to be shifted so that it doesn't coincide with a source in the SCUBA-2 data. The default is not to shift the input map. The offsets are applied to the source: i.e. a shift of 60,60 puts a source originally at (02:30:00.0, 00:00:00.0) at (02:30:04.0, +00:01:00.0) in the output map.

The REGRID parameter is a flag which, if true, regrids the input map to match the pixel coordinates of the output map (i.e. same pixel bounds and pixel size). The default is false, unless an OFFSET is given: in this case the input map must be aligned (regridded) to match the output map so that the pixel bounds are correct. Note that the original input file is not modified: the pipeline makes a copy and operates on that.

Here's an example of calling this recipe on the command line (processing a list of files contained in mydata.lis):

% oracdr -loop file -files mydata.lis -recpars myrecpars.ini REDUCE_SCAN_FAKEMAP

This new capability is available now with an update of your Starlink installation. Please try it out and let me know how well it worked (or didn't!), or send a message to the scuba2dr mailing list.

2011-01-11

Correcting for pointing drifts with PICARD

If you're lucky enough to have a bright source in your map, you may be able to improve the image fidelity by correcting for pointing offsets between observations. The pipeline now has this capability and it is applied automatically for calibrators, but not for most science targets. I've added a new PICARD recipe, called SCUBA2_REGISTER_IMAGES, which allows you to shift your images to line up at a common reference position before mosaicking. Here's how to do it.

First step, upgrade to the latest Starlink version via rsync.

Step two, create a recipe parameter file (say, params.ini) and add the following lines to it:

[SCUBA2_REGISTER_IMAGES]
REGISTER_IMAGES = 1
REGISTER_RA = HH:MM:SS.S
REGISTER_DEC = +DD:MM:SS.S

where "HH:MM:SS.S" and "+DD:MM:SS.S" are the RA and Dec of the source you wish to use.

Then run

% picard -log sf -recpars params.ini SCUBA2_REGISTER_IMAGES myfiles*.sdf

and if all goes well, you will end up with a series of output files, one for every input file, which end in "_reg.sdf". These files can then be coadded using MOSAIC_JCMT_IMAGES.
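For example, the coadd step could then look like this (assuming all the registered files sit in the current directory):

% picard -log sf MOSAIC_JCMT_IMAGES *_reg.sdf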

The recipe works by fitting a 2-d profile to the brightest source within a 2x2 arcmin box centred on the given reference position. The WCS is adjusted to place the reference position at the position reported by the fit.

However, note that this method will only work if the source you're using is present in every image, is compact and clearly resolved from other nearby sources, and is the brightest source within the 2x2 arcmin search box in every input image. It has been tested on HL Tau, OMC-1 and AFGL4029 in W5-East. It won't work for deep extragalactic maps, nor does it correct for any pointing drifts that occur within an observation; correcting for those drifts is much more involved, but it is something we'll investigate over the coming months.

It's also possible to do the registering and coadding with a single call to MOSAIC_JCMT_IMAGES. In this case you'd want to add the following to a recipe parameter file (in addition to any other coadding parameters):

[MOSAIC_JCMT_IMAGES]
REGISTER_IMAGES = 1
REGISTER_RA = HH:MM:SS.S
REGISTER_DEC = +DD:MM:SS.S
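With that parameter file in hand, the single combined call takes the same form as the earlier PICARD invocation:

% picard -log sf -recpars params.ini MOSAIC_JCMT_IMAGES myfiles*.sdf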


Here's an example using observations of HL Tau taken in December 2009. The corrections were as much as 8.5 arcsec in RA (the Dec corrections were smaller, up to 3 arcsec). The improvements are modest at 850 um (of order a 10% increase in peak flux and a few per cent reduction in beam size) but much more significant at 450 um: the peak flux after correcting for pointing drifts is more than 20% higher than before, and the FWHM has decreased by 15% (9.2" to 7.9"). Furthermore, the source elongation (ratio of major to minor axis) is reduced from 16% to less than 3%. Before and after images are shown below.

Left: HL Tau at 450 um, coadd of 9 separate observations from 20091213 and 20091214. Right: same data after registering each image before coadding. Both images are scaled between the same limits.