2011-01-18

Automatically adding fake sources to real data

Further to Ed's post on adding fake sources to SCUBA-2 maps, I've added some new functionality to the pipeline which allows this process to be automated.

There is a new recipe called REDUCE_SCAN_FAKEMAP which should be specified on the command line when running the pipeline. (Normally the recipe is chosen automatically.) This recipe behaves in exactly the same way as the standard recipe, and just passes the given "fake" map (adjusted and scaled as necessary) to the map-maker.

The recipe has a small number of parameters which can be given in addition to those already accepted by the pipeline (e.g. an alternative makemap configuration file). These parameters are all prefixed with "FAKEMAP_":

[REDUCE_SCAN_FAKEMAP]
FAKEMAP_MAP = mymap.sdf
FAKEMAP_SCALE = 0.5
FAKEMAP_REGRID = 1/0
FAKEMAP_OFFSET = dx,dy

The name of the base input map is mandatory (otherwise the pipeline exits immediately). The remaining parameters are optional. The SCALE parameter is the same as the "fakescale" parameter used by the map-maker and defaults to 1 if not given. Two further parameters control how the input map is adjusted before it is added to the raw time-series data.

The OFFSET parameter is a pair of RA,Dec offsets in arcsec which allow the input map to be shifted so that it doesn't coincide with a source in the SCUBA-2 data. The default is not to shift the input map. The offsets are applied to the source: i.e. a shift of 60,60 moves a source originally at (02:30:00.0, 00:00:00.0) to (02:30:04.0, +00:01:00.0) in the output map.

The REGRID parameter is a flag which, if true, regrids the input map to match the pixel coordinates of the output map (i.e. same pixel bounds and pixel size). The default is false, unless an OFFSET is given: in this case the input map must be aligned (regridded) to match the output map so that the pixel bounds are correct. Note that the original input file is not modified: the pipeline makes a copy and operates on that.
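As a concrete illustration, a recipe parameter file like the following (file name and values illustrative) would add mymap.sdf at half its original amplitude, shifted by 60 arcsec in each direction; because an offset is given, the input map is regridded automatically:

[REDUCE_SCAN_FAKEMAP]
FAKEMAP_MAP = mymap.sdf
FAKEMAP_SCALE = 0.5
FAKEMAP_OFFSET = 60,60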

Here's an example of calling this recipe on the command line (processing a list of files contained in mydata.lis):

% oracdr -loop file -files mydata.lis -recpars myrecpars.ini REDUCE_SCAN_FAKEMAP

This new capability is available now with an update of your Starlink installation. Please try it out and let me know how well it worked (or didn't!), or send a message to the scuba2dr mailing list.

2011-01-11

Correcting for pointing drifts with PICARD

If you're lucky enough to have a bright source in your map, you may be able to improve the image fidelity by correcting for pointing offsets between observations. The pipeline now has this capability and it is applied automatically for calibrators, but not for most science targets. I've added a new PICARD recipe, called SCUBA2_REGISTER_IMAGES, which allows you to shift your images to line up at a common reference position before mosaicking. Here's how to do it.

First, upgrade to the latest Starlink version via rsync.

Second, create a recipe parameter file (say, params.ini) and add the following lines to it:

[SCUBA2_REGISTER_IMAGES]
REGISTER_IMAGES = 1
REGISTER_RA = HH:MM:SS.S
REGISTER_DEC = +DD:MM:SS.S

where "HH:MM:SS.S" and "+DD:MM:SS.S" are the RA and Dec of the source you wish to use.

Then run

% picard -log sf -recpars params.ini SCUBA2_REGISTER_IMAGES myfiles*.sdf

and if all goes well, you will end up with a series of output files, one for every input file, which end in "_reg.sdf". These files can then be coadded using MOSAIC_JCMT_IMAGES.
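The coadd step itself can be run with a command of the same form as above (a sketch; add -recpars with any coadd parameters you need):

% picard -log sf MOSAIC_JCMT_IMAGES myfiles*_reg.sdf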

The registration recipe works by fitting a 2-d profile to the brightest source within a 2x2 arcmin box centred on the given reference position. The WCS is adjusted to place the reference position at the position reported by the fit.

However, note that this method will only work if the source you're using is present in every image, is compact and clearly resolved from other nearby sources, and is the brightest source within the 2x2 arcmin search box in every input image. It won't work for deep extragalactic maps, nor does it correct for any pointing drifts that occur within an observation. Correcting for those drifts is much more involved, but it is something we'll investigate over the coming months. The recipe has been tested on HL Tau, OMC-1 and AFGL4029 in W5-East.

It's also possible to do the registering and coadding with a single call to MOSAIC_JCMT_IMAGES. In this case you'd want to add the following to a recipe parameter file (in addition to any other coadding parameters):

[MOSAIC_JCMT_IMAGES]
REGISTER_IMAGES = 1
REGISTER_RA = HH:MM:SS.S
REGISTER_DEC = +DD:MM:SS.S
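Registering and coadding then happen in a single call of the same form as before (a sketch):

% picard -log sf -recpars params.ini MOSAIC_JCMT_IMAGES myfiles*.sdf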


Here's an example using some observations of HL Tau taken in December 2009. The corrections were as much as 8.5 arcsec in RA (the Dec corrections were smaller, up to 3 arcsec). The improvements are modest at 850 um (of order a 10% increase in peak flux and a few per cent reduction in beam size) but much more significant at 450 um. The peak flux after correcting for pointing drifts is more than 20% higher than before, and the FWHM has decreased by 15% (9.2" to 7.9"). Furthermore, the source elongation (the deviation of the major-to-minor axis ratio from unity) is reduced from 16% to less than 3%. Before and after images are shown below.

Left: HL Tau at 450 um, coadd of 9 separate observations from 20091213 and 20091214. Right: same data after registering each image before coadding. Both images are scaled between the same limits.

2010-11-29

SCUBA-2 at the ADASS conference

At the start of November I attended the ADASS conference in Boston and presented a poster paper on SCUBA-2 data processing. Readers of this blog probably won't find anything new in here, but just in case, the conference paper itself is now on astro-ph at http://arxiv.org/abs/1011.5876. Thanks to Dave Nutter for supplying the Orion B image.

2010-10-27

Adding fake sources to real data

Until now it has been difficult to characterize the ability of SMURF to reduce maps of known sources. Certainly a basic simulator has been available since before SCUBA-2 began to take real data, but no effort to date has been put into updating the noise model to accurately reflect the real instrument.

An alternative is to simply add signal from synthetic astronomical sources to real time-series data. This feature was relatively easy to add and can be quite versatile. The user supplies makemap with an external 2-dimensional map containing the simulated sources, via the new configuration parameter fakemap; a minimal sketch is given after the list below. There are several things to be aware of when using this facility:
  • the map of fake sources must have the same pixel dimensions as the output map
  • the units are the same as the output map from makemap (normally pW if working from raw data)
  • a scale factor can be applied to the map using the fakescale configuration parameter
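In its simplest form, the relevant additions to a makemap configuration file look like this (the file name is illustrative; fakescale may be omitted, in which case it defaults to 1):

fakemap = mysources.sdf
fakescale = 1.0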
The following two examples illustrate how this feature might be used.

Response to a point source

In this example we create a beam map using a scan of Uranus (s4a20091214_00015), and add it to the time-series for a scan of a blank field (s4a20100313_00029) to establish the effect of map-making and matched-filtering on point sources. Both of these data sets are public (See SC/19 and this web page).

First, maps of the blank field and Uranus are both reduced:

makemap s4a20100313_00029\*.sdf blank450 method=iterate config=^$STARLINK_DIR/share/smurf/dimmconfig_blank_field.lis

makemap s4a20091214_00015\*.sdf uranus_450 method=iterate config=^$STARLINK_DIR/share/smurf/dimmconfig_bright_compact.lis

Then we cut the central region out of the Uranus map and copy it into a map (using KAPPA:NDFCOPY) with the same pixel dimensions as blank450.sdf, so that we can use it to provide the fake signal:

ndfcopy 'uranus_450(3~31,1~31)' fakeblank450 like=blank450

We create a modified dimmconfig (dimmconfig.lis, say) that instructs makemap to add in this additional source of signal, applying a scale factor such that the amplitude of Uranus becomes 0.01 pW (i.e. fakescale is the desired amplitude divided by the peak value measured in the Uranus map):

^/stardev/share/smurf/dimmconfig_blank_field.lis
fakemap = fakeblank450
fakescale = 0.0471865
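The peak value can be measured beforehand with, for example, KAPPA:STATS (a sketch; the fakescale above implies a measured peak of about 0.212 pW, since 0.01/0.212 ≈ 0.047):

% stats uranus_450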

We run makemap using this new config:

makemap s4a20100313_00029\*.sdf blank450_fake method=iterate config=^dimmconfig.lis

Finally, we produce match-filtered maps for both the original map, and the map with the fake source added in:

picard SCUBA2_MATCHED_FILTER blank450

picard SCUBA2_MATCHED_FILTER blank450_fake

The following image shows all four maps. The left column contains the raw output of makemap for the original map, and the map with the source added in. The right column shows the corresponding match-filtered images.



The input source had a known amplitude of 0.01 pW. In the makemap output, however, we can see that the filtering during map-making has reduced the amplitude by about 12% to 0.0088 pW. The peak in the match-filtered image is not attenuated quite as severely, at 0.0096 pW. In addition to the peak response, this test also clearly shows the effective PSF for the map: there are faint negative streaks along the scan directions caused by the high-pass filtering. This example suggests that upward corrections to the FCF of order 5 to 10% should be applied to maps of point sources reduced using dimmconfig_blank_field.lis (although this factor should be established on a case-by-case basis).

Response to extended structures

It is known that maps produced with SMURF do not contain useful information on scales larger than roughly the array footprint, due to the common-mode rejection and high-pass filtering. Using the fakemap facility, we can now add sources with complicated structure to SCUBA-2 data to see this effect quantitatively.

In this example we use an 850um scan of Orion (s8d20100216_00022), and to it we add some diffuse structure from a publicly available 350um map of Vela from the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST):



Similar to the previous example, we first reduce the map of Orion (using dimmconfig_bright_extended.lis),



and then we reduce it again using the BLAST image as a fakemap (positioned and padded to the correct size using KAPPA:NDFCOPY) and scaled to a similar signal range as the actual structure in Orion.
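A sketch of the positioning/padding step, analogous to the NDFCOPY call in the previous example (file names illustrative):

% ndfcopy vela_350 fakeorion850 like=orion850

The resulting map is shown below: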



Comparing the upper-left region of this map where the Vela data were added with the original BLAST data, it is clear that all of the compact structures are faithfully identified, but the extended structure is not. Since we know what those underlying structures are, we can simply difference the input and output maps to see what we are missing in the SMURF reduction:



For reference, the upper-centre green blob is about 3 arcmin across, of order the array footprint. The smaller-scale dark features at the right are simply an artifact of the real Orion 850um emission that extends into the patch of the map where we inserted the Vela data!

2010-10-26

Save time and memory: down-sampling your data

I've finally gotten around to adding the ability to "down-sample" the data.

SCUBA-2 data are normally sampled at about 200 Hz (actually closer to 180 Hz). This specification was based on the desire to scan at up to 600 arcsec/sec while still fully sampling the 450um beam: at these rates a single sample covers an angular scale on the sky of 600/200 = 3 arcsec, roughly one third of the 450um beam FWHM.

For most of the observations taken as part of S2SRO, however, typical scan rates were in the range 100--200 arcsec/sec, meaning that SCUBA-2 samples faster than necessary. We can therefore, in theory, re-sample the data into longer samples without losing any useful information. SMURF can now do this if you specify the smallest scale you wish to sample in the configuration file:

450.downsampscale = 2
850.downsampscale = 4

These values are measured in arcsec, and setting them as above will match the default pixel resolution used by SMURF in each band. SMURF will then determine internally, based on the original sample rate and mean slew velocity, how much to down-sample the data.
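Assuming the new rate is simply the mean slew velocity divided by downsampscale, a scan slewing at 120 arcsec/sec with 450.downsampscale = 2 would be down-sampled to

120 arcsec/sec / 2 arcsec = 60 Hz

which is consistent with the makemap message shown further below.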

The following example is the default reduction of an Orion map at 450um (20100219_00039) using the dimmconfig_bright_extended.lis configuration file:



And then again, except setting 450.downsampscale = 2:



As you can see there is very little difference between the two images.

As makemap is running you will notice:

smf_grp_related: will down-sample from 174.6 Hz to 60.0 Hz

This indicates that the data are being compressed by a factor of about 3. In terms of execution time on my 8-core machine, it drops from 18 s to 11 s; the reason it is not a full factor of 3 is that there is a large initial start-up cost as the original data (before down-sampling) are loaded in. Finally, the estimated memory usage drops from 457 MiB to 251 MiB (again, not a full factor of 3, due to other static memory that needs to be loaded).

This test uses a very small data set. The improvements are more pronounced for long scans in which the total memory usage is dominated by the time-series data rather than by other statically allocated buffers (such as the map itself). There will also be larger improvements at 850 um than at 450 um (because of its lower resolution), and for slower scan speeds.

2010-10-11

This blog

You may have noticed that the posting rate has picked up quite a bit in the last few months, as we've been trying to keep everybody up to date with externally visible developments to the JCMT Science Archive.

If you are a reader of this blog, please take a moment to leave a comment and let us know if you would like more/fewer posts, whether the level of detail is too little/too much, the kind of topics you'd like covered, or anything else that you would like to share.

S2SRO products - new re-reduction

The S2SRO products have been regenerated once again. In this batch you will find

  • Much improved maps for extended-emission targets (which are now handled specially)
  • An improved step-finder
  • S/N masking for bright sources to prevent negative bowling
  • Point-source observations now reduced with the science recipe rather than the summit "quickie" recipe

As usual you will find your SCUBA-2 products in the proprietary processed data search: just put in your project code and choose multi-night to get your total product.

(Note that this batch does not include the map-based despiking; that will be in the next run.)