Showing posts with label calibration. Show all posts

2014-07-08

Calibrating SCUBA-2 data in surface brightness units

In most cases the default calibration for SCUBA-2 data processed by the ORAC-DR pipeline is mJy/beam. The exception is the recipe for extended sources, REDUCE_SCAN_EXTENDED_SOURCES, which calibrates data in mJy/arcsec**2.

Unfortunately there was an error in an earlier version of this recipe which meant that the FCF was applied incorrectly. The corrected method is available now with an update of ORAC-DR (either from github or via rsync from JAC). If you have data processed with this recipe (either by running it yourself, or downloading processed products from CADC) then re-calibrating the data is easy: simply divide by the pixel area using KAPPA cdiv.
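If you prefer to see the fix as arithmetic: dividing by the pixel area is all that is needed, which is exactly what cdiv does. A minimal sketch in Python, with a 4-arcsec pixel size assumed purely for illustration:

```python
# Re-calibrating a map that had the ARCSEC FCF applied without the
# per-pixel-area division: divide every pixel by the pixel area
# (the same operation as KAPPA cdiv with SCALAR set to the pixel area).

pixel_size = 4.0                # arcsec (assumed for illustration)
pixel_area = pixel_size ** 2    # 16 arcsec**2 per pixel

def recalibrate(values, area):
    """Divide map values by the pixel area."""
    return [v / area for v in values]

miscalibrated = [160.0, 320.0, 48.0]                # wrongly-scaled values
corrected = recalibrate(miscalibrated, pixel_area)  # now in mJy per arcsec**2
```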

There is a new PICARD recipe for easy calibration of maps produced by running makemap by hand. CALIBRATE_SCUBA2_DATA allows data to be calibrated in per-beam or surface brightness units. With no parameters, this recipe will calibrate data in mJy/beam. For surface brightness calibration, set the recipe parameter USEFCF to 1 and FCF_CALTYPE to ARCSEC; the recipe will then use the default ARCSEC FCF for the wavelength of the given data.
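For reference, PICARD recipe parameters are normally supplied in an INI-style file passed with the -recpars option. A sketch of what the surface-brightness case above might look like (parameter names are from this post; the file layout is the standard PICARD one, so check your PICARD documentation):

```ini
# Hypothetical recipe-parameter file for surface-brightness calibration
[CALIBRATE_SCUBA2_DATA]
USEFCF = 1
FCF_CALTYPE = ARCSEC
```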

The recipe can also convert the calibration from one type to another. If your data are already calibrated in mJy/beam, they can be given to CALIBRATE_SCUBA2_DATA with the FCF_CALTYPE recipe parameter above, and the recipe will create a new file (with suffix _cal) with units of mJy/arcsec**2. The value and units of the FCF are written into the FITS header of the calibrated file.
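Numerically, converting per-beam values to surface brightness just swaps one FCF for the other. A sketch using the 850µm FCF values quoted in earlier posts on this blog (illustrative only; the recipe takes the real values from the calibration system):

```python
# Convert mJy/beam to mJy/arcsec**2 by undoing the beam FCF and
# applying the arcsec FCF (850um values from this blog, for illustration).
FCF_ARCSEC = 2.42    # Jy/pW/arcsec**2
FCF_BEAM   = 556.0   # Jy/pW/beam

def beam_to_arcsec(value_per_beam):
    """Re-scale a per-beam value to per-arcsec**2."""
    return value_per_beam * FCF_ARCSEC / FCF_BEAM

peak = beam_to_arcsec(556.0)   # a 556 mJy/beam peak becomes 2.42 mJy/arcsec**2
```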

The companion recipe, UNCALIBRATE_SCUBA2_DATA, will undo the current calibration, reverting the units to pW in the output file (which has a suffix of _uncal).

Using ORAC-DR or PICARD to perform the (un)calibration is preferable to simply multiplying your data by the FCF, as the recipes also set the units correctly for the output file(s) and write the value of the FCF used into the FITS header of the file.

However, there is one note to highlight: the recommended way to calibrate data (either from raw or when changing from per-beam to per-square-arcsec) is to calibrate the individual observations first, and then coadd those (re)calibrated files. Calibrating or re-calibrating coadds will fail because the coadding step was recently updated to remove FITS header entries that differ between the input files; these usually include UTDATE, which is used by the ORAC-DR calibration system. A future upgrade will provide a workaround, though the recommendation to calibrate individual observations first stands.

2014-01-16

Updates to SCUBA2_CHECK_CAL: 2-component fitting

Happy New Year to everyone reading this (or Hau‛oli Makahiki Hou as we say in Hawai‛i), and welcome to 2014.

The PICARD recipe SCUBA2_CHECK_CAL has been updated to use a two-component gaussian profile to fit the beam when determining an FCF from a calibration observation, as long as the signal-to-noise ratio exceeds 100. Previously, a single-component fit was used which was not constrained to be a gaussian. (If the signal-to-noise ratio is not high enough it will fall back on the old behavior.) This two-component fit is based on the FWHM and relative amplitudes of the two components derived in Dempsey et al. (2013).

This change has no effect on the ARCSEC FCFs, but results in a small (~1%) but consistent reduction in the BEAM FCFs. The effect should be small enough to be negligible, and with this change we now have slightly higher confidence in the resulting FCFs than with the old one-component fits. BEAMMATCH FCFs are likewise unaffected by the change, as testing showed they were best fit with profiles not forced to be gaussian.

You can set the behavior of SCUBA2_CHECK_CAL manually using the FIT_GAUSSIAN parameter in your config file. The default value of 2 uses a two-component gaussian fit (when S/N > 100), a value of 1 uses a one-component gaussian fit, and a value of 0 recovers the old behavior of a one-component fit not constrained to be a gaussian.
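In a PICARD recipe-parameter file (INI-style, passed with -recpars) this might look like the following sketch, which has not been verified against the current recipe:

```ini
[SCUBA2_CHECK_CAL]
# 2 = two-component gaussian fit when S/N > 100 (the new default)
# 1 = one-component gaussian fit
# 0 = old behaviour: one-component fit not constrained to be gaussian
FIT_GAUSSIAN = 2
```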

At the moment, to use this feature you will need to update your Starlink to the current development version. Linux users can simply rsync Starlink from JAC following the instructions at http://starlink.jach.hawaii.edu/starlink/rsyncStarlink

2013-03-27

New OMP features for projects.

The OMP has recently been updated to display several additional pieces of information on the project pages.

Tau graph

The first thing you will notice is a new graph of tau over the course of the night. Previously, the pages had a similar plot that was generated each time the page was loaded. This took a while, and the plot had automatically generated limits on the y-axis, which made comparing graphs between two nights effectively impossible.

The new graph is made with consistent x- and y-limits that allow for easy comparison between nights. The various weather grades are also delineated, and the value of tau from the CSO WVM is plotted as well.

The new plot of tau vs. time. Click on any picture for a larger view.
The thicker gray horizontal lines running across the graph mark the weather grade boundaries, and the shaded area shows the night hours in Hawaii. The time in UTC is plotted along the bottom axis, with the time in HST along the top.

Pie chart

The pie chart of time spent in each grade.
There is also a pie chart that sums up the amount of time spent in each weather grade throughout the night. The grades are color-coded from green to red to give a quick visual assessment of how good a given night was (green good; red bad). Usually a given night will fall predominantly within just one or two grades.


ACSIS standards table

For nights on which data was taken using ACSIS, there will be a table like the example below with some information about the ACSIS standards observed.

Obs #   Time       Integ. Int.   Peak Int.
16      19:36:32       251.33         8.82
19      19:50:31      4775.56         6.88
27      20:59:30      4157.76         5.96
36      22:22:36      3520.79         5.08

SCUBA-2 calibrations table

Similarly, for nights where SCUBA-2 was used there will be an HTML table like the example below listing the FCFs for each observation of a calibration object.

20120822                     FCF (arcsec)              FCF (peak)
Obs #  Time (UT)  Source   850µm  err  450µm  err    850µm  err  450µm  err
8      05:41:27   CRL2688   2.41  0.01  4.64  0.03   572.2  1.5  542.9  5.7
37     10:24:23   CRL2688   2.30  0.01  4.66  0.02   513.5  1.2  466.0  3.7
68     17:26:48   CRL618    2.23  0.01  4.28  0.04   487.8  1.4  436.8  4.8

Along with the table, on SCUBA-2 nights there will be three additional graphs, detailed below. Each of these graphs has two sub-plots: the top one shows the 450 micron data and the bottom one the 850 micron data.

NEPs vs observation number graph


This graph shows the min, max, and mean NEP for each of the eight subarrays that make up SCUBA-2, plotted by observation number. The mean is the colored line, and the shaded area fills the region between the min and max. By following the lines you can tell which arrays changed significantly, and by watching the shaded areas you can get a feel for the spread in the NEPs. The vertical scale is fixed, and is the same as the scale for the following plot.

NEPs vs time graph

This graph, like the previous one, shows the NEPs, but plots them against time rather than observation number. Each of the eight subarrays is present once again (with the same colors as in the preceding plot). The time (in UTC) is marked along the bottom of the plot, and the observations are marked by number along the top, with vertical lines that descend to help mark when each particular observation began.

NEFDs vs time graph


The final plot shows the NEFDs vs. time. The number of points depends on the night; this particular night had only a few. Like the two previous plots, the vertical axis is fixed to make comparisons between nights easier.

Hopefully these new features will prove useful to users of the OMP, and additional updates or improvements may be forthcoming in the future. Feedback on the new features is welcome.

2012-08-16

Updates to FCFs and extinction correction

Over the past couple of months the flux conversion factors and extinction corrections have been updated, and we are now very happy with the answers. The main issue from the user's perspective is that the FCF you apply depends critically on which version of SMURF was used to generate the map. At the time of writing, data products downloaded from CADC use an older extinction correction than the one in the current SMURF. We plan to update CADC processing shortly, but reprocessing the historical archive will take some time.

We have set up a web page at JAC listing the parameters that should be used and instructions on how to determine which version of the software was used to generate your map:

http://www.jach.hawaii.edu/JCMT/continuum/scuba2/scuba2_relations.html

2012-02-03

SCUBA-2 Calibration: REDUX.

The short story:

  • The heater coupling factors have been adjusted to more realistic values. In practice, this does not change the performance of the instrument - however it does change the absolute value of the FCFs. These values were adjusted in the software in mid-December.
  • The WVM tau algorithm has been fixed and improved. This will not affect you directly, though the nightly plots now look extremely good and are officially used for weather-band determination.
  • This has allowed a new and better calculation of the relation between the 225GHz tau derived from the WVM and the opacities at the two SCUBA-2 filter bands. They are now as follows:
TAU_[850] = 4.6 * (TAU_[225] - 0.0043)
TAU_[450] = 26.0 * (TAU_[225] - 0.019)
  • The FCFs (flux conversion factors) have been derived for both wavelengths from an extensive reduction of calibrator sources observed over eight months of SCUBA-2 commissioning and science verification observations. They are as follows:
850um:
FCF_[arcsec] = 2.42 +/- 0.15 Jy/pW/arcsec**2
FCF_[peak] = 556 +/- 45 Jy/pW/beam
Beam area = 229 arcsec**2

450um:
FCF_[arcsec] = 6.06 +/- 0.32 Jy/pW/arcsec**2
FCF_[peak] = 606 +/- 55 Jy/pW/beam
Beam area = 97 arcsec**2


  • Reminder on how to calibrate your data:

Other posts discuss how best to reduce your data (and what recipes are needed). The latest software releases (since January 2012) all include extinction correction (with the relations above) and the changed coupling factors. If you reduced your data prior to this, you will need to reduce them again to account for these changes. Applying the FCFs reported here to old reductions of your data will be wrong.
  • The arcsec FCF: (when you want integrated fluxes)

The arcsec FCF is the factor by which you should multiply your map if you wish to use the calibrated map to do aperture photometry.

  • The peak FCF: (the FCF-formerly-known-as-beam):

This FCF is the number by which to multiply your map when you wish to measure absolute peak fluxes of discrete sources.
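As a quick cross-check on the numbers above (this is arithmetic, not an official derivation): the peak FCF should be approximately the arcsec FCF multiplied by the empirical beam area, since one beam integrates that many square arcseconds:

```python
# Peak FCF ~= arcsec FCF * empirical beam area (values from this post).
fcf_arcsec_850, beam_area_850 = 2.42, 229.0   # Jy/pW/arcsec**2, arcsec**2
fcf_arcsec_450, beam_area_450 = 6.06, 97.0

approx_peak_850 = fcf_arcsec_850 * beam_area_850   # ~554, vs 556 +/- 45 quoted
approx_peak_450 = fcf_arcsec_450 * beam_area_450   # ~588, vs 606 +/- 55 quoted
```

Both agree with the quoted peak FCFs within their uncertainties.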


The whys and the wherefores (the gory details):

  • Reductions of the calibration observations:

Uranus and Mars were used as the primary calibrators for these results. In addition, CRL 618 and CRL 2688 were the predominant secondary calibrators. All of the calibrators were reduced in January 2012 using the updated dimmconfig_bright_compact.lis. The improvements in the recipe (mostly in how it chooses to stop iterating) have resulted in extremely flat maps with almost no evidence of the 'bowling' seen around strong sources in S2SRO reductions. To improve the accuracy of the peak-fitting and aperture photometry, the maps were reduced with 1 arcsecond pixels at both wavelengths.

Once the maps were reduced they were analysed using the PICARD script SCUBA2_FCFNEFD. There have been some changes to this script: a few bugs have been fixed, the average FCFs have been adjusted, as has the (now empirically derived) beam area, and a few reference fluxes have been updated. FCF_beamequiv has been removed entirely, and all calculations of integrated values are now done using AUTOPHOTOM, with a defined aperture and an annulus for background subtraction.

Following extensive analysis to determine the optimal parameters, all calibrations were reduced using a 60" diameter aperture (at both wavelengths) with an annulus between 90" and 120" from the source position.


  • Questions? Let's provide answers to a few we've already seen:


  • "These FCFs are very different to the old numbers quoted!"

Yes they are. The heater coupling factor change and the new tau relations play a significant part in this. But in addition, the large sample of observations has allowed for a much more accurate determination of the beam area. At 450um in particular, optical effects of the telescope show that the error beam is large and the beam is not gaussian. This results in an effective FWHM that is much broader than the 7.5" quoted previously (though that is the approximate FWHM of the fit to the centre of the beam) - it is more like 9.5", taking into account the error beam. Therefore, the measured (and fitted) peak is relatively lower, requiring a higher FCF to calibrate the peak flux in your data.


  • "There is a lot of scatter when I calculate the 'beam' or peak FCFs for my calibrators (particularly at 450um)"

No kidding. Peak values are obtained either from reading off the peak of the map (in gaia or by another method) or by fitting to the peak using beamfit (as is done in PICARD). The beam shape (particularly at 450um) can be extremely susceptible to changes in focus and atmospheric instability, amongst other things. The integrated value (FCF_arcsec) is more robust against such changes. If you are measuring a peak fit from a calibrator and see a strong deviation from the expected value things to check are:

- how 'focussed' does the image look? If you see distortion in the shape of a source that should be point-like, or distortion or 'shoulders' in the beam then it is likely that the peak value will be unreliable.

- was the observation taken early in the evening? Focus and atmospheric effects are known to be worst in the early evening hours and sometimes in the morning after sunrise. If you are looking at calibrators, try and look at ones taken later in the night and see if there is improvement.

A 'trap' has been set in PICARD to warn you if the attempted fit to the peak misses the actual peak value by more than 10%. Looking at the fit to the shape also helps in this instance. In any case, the quoted peak FCF value at the top of the post is derived from the arcsec FCF and the empirical beam area derived from nearly 500 observations at both wavelengths and this number has been shown to be robust.


  • "How stable are these FCFs? (read: do I need to reduce my own calibrators?)"

Very. The absolute errors at both wavelengths are within 5% and no significant trends have been seen in the last six months. Instrument performance is being monitored very closely and any deviations are likely to be noted specifically. However, we do not discourage you taking calibrators from the nights your data was taken and reducing them yourself - we appreciate the sanity checks! Another handy rule: if you do it to your data, do it to your calibrator. If you have specific methods you plan to use on your data, apply the same methods to your calibrator in order to ensure your calibration is correct. We are now happy to say though, that these FCFs look stable and correct, so using these numbers should provide you with well-calibrated data.


  • "What happened to FCF_beamequiv?"

FCF_beamequiv is a seductive, evil little value that tempted us to stray to the dark side, albeit temporarily. In essence it was created to use as a comparison to SCUBA performance, but should never have been used to actively calibrate SCUBA-2 data as it assumed a perfect gaussian beam. The statements above explain that this is patently untrue, especially at 450um. The beamequiv number was quoted previously, and incorrectly, as the true FCF, and it is largely the reason that the new (and correct) numbers seem so much larger. We have banished it from PICARD and it shall now be known as the FCF-that-shall-not-be-named.





2010-06-04

Extinction correction factors for SCUBA-2

Analysis of the SCUBA-2 skydips and heater-tracking data from S2SRO has allowed the opacity factors for the SCUBA-2 450μm and 850μm filters to be determined.

Some background: the Archibald et al (2002) paper describes how the CSO(225GHz) tau to SCUBA opacity terms were determined for the different SCUBA filters. It was assumed for commissioning and S2SRO that the new SCUBA-2 filters were sufficiently similar to the wide-band SCUBA filters that these terms could be used for extinction correction. For reference the SCUBA corrections were:

Tau(450μm) = 26.2 * (Tau(225GHz) - 0.014)

and

Tau(850μm) = 4.02 * (Tau(225GHz) - 0.001)

The JCMT water vapour radiometer (WVM) is now calibrated to provide a higher-frequency opacity value that has been scaled to the CSO (225GHz) tau. The WVM data (not the CSO 225GHz tipper) were used for this analysis.

The new filter opacities as determined by the skydip data are as follows:


Tau(450μm) = 19.04 * (Tau(225GHz) - 0.018)

and

Tau(850μm) = 5.36 * (Tau(225GHz) - 0.006)
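To see how much the update matters, the old and new 850μm relations can be compared directly. A sketch of the arithmetic only; the tau_225 value below is arbitrary:

```python
# Old (SCUBA, Archibald et al. 2002) vs new (SCUBA-2 skydip) 850um relations.
def tau_850_scuba(tau_225):
    return 4.02 * (tau_225 - 0.001)

def tau_850_scuba2(tau_225):
    return 5.36 * (tau_225 - 0.006)

# At tau_225 = 0.08 the new relation gives a noticeably larger opacity:
old = tau_850_scuba(0.08)    # ~0.318
new = tau_850_scuba2(0.08)   # ~0.397
```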


A follow-up post to this will show analysis of the difference applying the new corrections can make to data combined from multiple observations taken in differing extinction conditions.

It is worth noting that if an individual science map and its corresponding calibrator observation have already been reduced with the old factors (and your source and calibrator are at about the same airmass, and the tau did not change appreciably), any errors in the extinction correction should cancel out in the calibration.


Applying FCFs to calibrate your data

Calculating SCUBA-2 Flux Conversion Factors (FCFs)

Currently the SCUBA-2 reduction software (the pipeline and the PICARD recipes) produces three separate FCF values. Details of the PICARD recipes can be found on Andy's PICARD page.

For calibration from point sources, the FCFs and NEFDs have been calculated as follows:
  1. The PICARD recipe SCUBA2_FCFNEFD takes the reduced map, crops it, and runs background removal (the surface-fitting parameters are changeable in the parameter file).
  2. It then runs the KAPPA beamfit program on the specified point source. Calibrators such as CRL618, HLTAU, Uranus and Mars are already hard-coded into the recipe. If your source is not, you can add a line to your parameter file with the known flux, for example FLUX_450 = 0.050 or FLUX_850 = 0.005. Beamfit will calculate the peak flux, the integrated flux over a requested aperture (30 arcsec radius by default), the FWHM, etc.
  3. It then uses the above to calculate three FCF terms described below.
FCF (arcsec)

FCF(arcsec) = Total known flux (Jy) / [Measured integrated flux (pW) * pixsize**2]

which will produce an FCF in Jy/arcsec**2/pW.

This FCF(arcsec) is the number to multiply your map by when you wish to use the calibrated map to do aperture photometry.  
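In code, the definition above is a one-liner. The flux and signal numbers below are made up for illustration; 1-arcsec pixels are used, as in the calibration reductions described elsewhere on this blog:

```python
def fcf_arcsec(total_flux_jy, integrated_flux_pw, pixsize_arcsec):
    """FCF(arcsec) = known flux / (integrated signal * pixel area), Jy/arcsec**2/pW."""
    return total_flux_jy / (integrated_flux_pw * pixsize_arcsec ** 2)

# e.g. a 5 Jy calibrator whose aperture sum is 2 pW on a 1-arcsec pixel grid:
fcf = fcf_arcsec(5.0, 2.0, 1.0)   # 2.5 Jy/arcsec**2/pW
```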

FCF(beam)  

FCF(beam) = Peak flux (Jy/beam) / [Measured peak flux (pW)] 

producing an FCF in units of Jy/beam/pW.

The Measured peak flux here is derived from the Gaussian fit applied by beamfit. The peak value is susceptible to pointing and focus errors, and we have found this number to be somewhat unreliable, particularly at 450μm. This FCF(beam) is the number to multiply your map by when you wish to measure absolute peak fluxes of discrete sources.

To overcome the problems caused by these peak errors, a third FCF method has been derived, in which the FCF(arcsec) is combined with a model Gaussian beam whose FWHM matches that of the JCMT beam at each wavelength. The resulting FCF is an 'equivalent peak' FCF, calculated from the integrated value on the assumption that the point source is a perfect Gaussian.

FCF (beamequiv)  

FCF(beamequiv) = Total flux (Jy) x 1.133 x FWHM**2 / [Measured integrated flux (pW) * pixsize**2]

or more conveniently:

FCF(beamequiv) = FCF(arcsec) x 1.133 x FWHM**2

where FWHM is 7.5'' and 14.0'' for 450μm and 850μm respectively. This produces an FCF in units of Jy/beam/pW.
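The factor 1.133 is pi/(4 ln 2), the area of a unit-height gaussian of the given FWHM, so the relation can be sketched as follows (arithmetic only):

```python
import math

# Area factor for a gaussian beam: the integral of a unit-height gaussian
# of a given FWHM is (pi / (4 ln 2)) * FWHM**2 ~= 1.133 * FWHM**2.
GAUSSIAN_AREA_FACTOR = math.pi / (4.0 * math.log(2.0))

def fcf_beamequiv(fcf_arcsec, fwhm_arcsec):
    """FCF(beamequiv) = FCF(arcsec) * 1.133 * FWHM**2, in Jy/beam/pW."""
    return fcf_arcsec * GAUSSIAN_AREA_FACTOR * fwhm_arcsec ** 2

beam_area_850 = GAUSSIAN_AREA_FACTOR * 14.0 ** 2   # ~222 arcsec**2 for a 14'' beam
```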

FCF(beamequiv) and FCF(beam) should agree with each other; however, this is often not the case when the source is distorted, for the reasons mentioned above. FCF(beamequiv) has been found to provide more consistent results, and it is advised to use this value when possible, in the same way as FCF(beam).

Methodology for calibrating your data: 

So you have a reduced map for a given date. Each night of S2SRO should have at least one calibrator observation, if not more, taken during the night. A website with this list is currently in the works, and I'll add it to the blog when it is completed. For now it is relatively easy to search the night's observations for these. The primary calibrators were Uranus and Mars, and the secondary calibrators are listed on the SCUBA secondary calibrators page. In addition to these, Arp 220, V883 Ori, Alpha Ori and TW Hydrae were tested as calibrators. Their flux properties were investigated with SCUBA (see SCUBA potential calibrators).

  1. Run your selected calibration observation through the mapmaker using the same dimmconfig as your science data used.
  2. Use PICARD's recipe SCUBA2_FCFNEFD on your reduced calibration observation. This will print information to the screen and write a logfile, log.fcfnefd, with the three FCFs mentioned above and an NEFD for the observation. By default PICARD uses fixed FCFs to calculate the NEFD (450um: 250; 850um: 750). If you wish to get an NEFD using the FCF calculated for the calibrator, add USEFCF=1 to your parameter file.
  3. Take your selected FCF and multiply your map by it using KAPPA cmult.


Things become slightly more complicated if you wish to use PICARD's matched filter recipe to enhance faint point sources. Again see Andy's PICARD page (link above) for details on the matched filter recipe. If you are normalising the matched filter peak, you will need to run this filter over your calibrator image with the same parameters you used for your science map.

Note: You cannot, at this point, use the FCF(beamequiv) number to calibrate your match-filtered data. This number will usually be disproportionate and simply wrong. The FCF(beam) value, however, should be preserved by this method.

The peak is truly preserved by this method, so two numbers, the FCF(beamequiv) pre-match-filter and the FCF(beam) post-match-filter, should be close to the same, and either of these values can be used to calibrate your match-filtered science map.

It is also worth noting (though perhaps obvious) that after running the match-filter script in peak-normalisation mode, only the peak flux values (and not the integrated sum over an aperture) will be correct. The reverse is true if using sum-normalisation.
 

2010-03-17

Flatfielding updates

Summary: Flatfield ramps work really well and SMURF can now automatically handle them in the map-maker.

SCUBA-2 bolometers need to be calibrated to understand how they respond to varying signal coming from the sky and astronomical object. The original plan was to calibrate in the dark (shutter closed). The sequence goes something like:

  1. Select a reference heater value, take a dark frame
  2. Choose a new heater setting, take a dark frame
  3. Take a dark frame at the reference heater value
  4. Choose a different heater setting, take a dark frame
  5. Take a dark frame at the reference heater value
and continue until you have covered a reasonable range of heater settings.

As the heater is changed the bolometers read out a different current. Any drifts in the instrument are compensated for by averaging the surrounding reference frames and subtracting. This means that you end up with a curve that goes through zero power at the reference heater value. To convert this to a flatfield you either fit a polynomial as a function of measured current (so that you can look up the power), or else use "TABLE" mode and do a linear interpolation between the measurements on either side of the measured current. The gradient of the curve (how the bolometer responds to changes in power) is the "responsivity" and is measured in amps per watt. The responsivity image can be calculated using the SMURF calcflat command.
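The responsivity measurement therefore boils down to taking the gradient of the drift-corrected current-vs-power curve. A toy illustration with synthetic, perfectly linear data (the real measurement is per-bolometer and is done by calcflat, typically with polynomial fits):

```python
def slope(xs, ys):
    """Least-squares gradient of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Heater power steps relative to the reference value (in watts), and the
# fake drift-corrected bolometer currents (in amps) they produce:
powers = [p * 1e-12 for p in (-20, -10, 0, 10, 20)]
currents = [2.0e5 * p for p in powers]

responsivity = slope(powers, currents)   # amps per watt; here 2.0e5 A/W
```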

When you open the shutter the idea is that you "heater track" to the sky. This involves adjusting the power to the heater such that the sky power detected by the bolometer results in the same current being measured as was measured in the dark. We do this by looking at the signal from a set of tracking bolometers and assuming that those bolometers are representative of the others on the array. In reality, about 80% of the bolometers do read more or less the same signal before and after opening the shutter, but the other 20% end up in a completely different place. This would not be a problem if the responsivity didn't change for those 20%, but unfortunately it does. We have verified this by doing finely spaced pong maps on Mars covering a 6x6 arcmin area. This takes about 15 minutes but gives us a beam map of every single bolometer. Analysing the Mars images showed that the bolometers with the lowest responsivity also measure a very low integrated flux for Mars, and so the calibration does change when the shutter is opened.

The solution for this was to change flatfielding to work on the sky rather than in the dark. This works in just the same way as previously, using reference sky measurements to compensate for drift, and the top plot in the figure shows a sky flatfield that is working pretty much perfectly. Finely-spaced maps of Mars confirm that all the bolometers are calibrated to within 10% with no drop off for the low responsivity bolometers.

At this point things were looking good, but we still had the issue that the sky flat takes a few minutes and really has to be done every time you do a new setup, and probably at least once an hour. Sky flats are also very dependent on observing conditions, as could be seen on 20100310 and a few days beforehand, when the sky was terribly unstable despite brilliantly low opacity (CSO tau of 0.03). The middle plot below shows a sky flat from 20100310, and it is immediately obvious that the sky is varying very fast, over a much larger range than the heater is adjusting for. This flatfield failed to calibrate any bolometers at all and we had to resort to dark flatfields to get a baseline calibration (with the associated worries described above).

We had known this was going to be an issue, so in the early part of the year we had been modifying the acquisition system to do fast flatfield ramps. Rather than setting the heater, doing an observation, changing the heater, and doing another observation, we can now change the heater value at 200 Hz (currently we take 3 measurements at each setting). On 20100223 we enabled sky flatfield ramps at the start and end of every single mapping observation, and a few days later we added them to focus, pointing and sky-noise observations. The bottom plot shows the flatfield ramp for the observation that immediately followed the discrete sky flatfield shown in the middle plot. There is an issue with the very last ramp, but the flatfielding software in SMURF had no problem calculating a flatfield for 850 bolometers (SMURF does compensate for drift in the reference heater values). The flatfield ramps are going to help enormously with calibration.

Actually using these flatfields in the map-maker took some work but yesterday I committed changes to SMURF so that flatfield ramps will be calculated and used when flatfielding data in the map-maker (and other SMURF commands). All you need to do is give all the files from an observation to SMURF and it will sort everything out.

I have updated the /stardev and /star rsync server in Hilo (64-bit and 32-bit). There is also a new nightly build available for OSX Snow Leopard 64bit in the usual place.

One final caveat, we have not yet calibrated the resistance of each bolometer relative to the nominal 2 ohms. We have taken data by looking at a blackbody source which should give us a way of tweaking the resistances. When this happens the flatfielding will change slightly and maps will need to be remade (although how critical that is will depend on how much we tweak the bolometers).