
2014-01-16

Updates to SCUBA2_CHECK_CAL: 2-component fitting

Happy New Year to everyone reading this (or Hau‛oli Makahiki Hou as we say in Hawai‛i), and welcome to 2014.

The PICARD recipe SCUBA2_CHECK_CAL has been updated to use a two-component Gaussian profile to fit the beam when determining an FCF from a calibration observation, provided the signal-to-noise ratio exceeds 100. Previously a single-component fit was used, and it was not constrained to be Gaussian. (If the signal-to-noise ratio is not high enough, the recipe falls back on the old behaviour.) The two-component fit is based on the FWHMs and relative amplitudes of the two components derived in Dempsey et al. (2013).
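As a rough illustration of what such a fit involves (this is a minimal sketch, not the recipe's actual implementation, and the FWHMs and relative amplitudes below are placeholders standing in for the Dempsey et al. 2013 values): once the shape of the two components is held fixed, fitting the overall amplitude reduces to a single linear least-squares step.

```python
import numpy as np

# Hypothetical two-component beam parameters (placeholders, NOT the
# recipe's actual values): main beam plus a broad, faint error beam.
FWHM_MAIN, FWHM_ERR = 13.0, 48.0          # arcsec
REL_AMP_MAIN, REL_AMP_ERR = 0.98, 0.02    # relative amplitudes

def beam_model(r):
    """Unit-amplitude two-component Gaussian radial profile."""
    s1 = FWHM_MAIN / (2 * np.sqrt(2 * np.log(2)))
    s2 = FWHM_ERR / (2 * np.sqrt(2 * np.log(2)))
    return (REL_AMP_MAIN * np.exp(-r**2 / (2 * s1**2))
            + REL_AMP_ERR * np.exp(-r**2 / (2 * s2**2)))

# With the shape fixed, the amplitude is a linear least-squares solution.
r = np.linspace(0, 60, 121)               # radial offsets in arcsec
data = 5.0 * beam_model(r) + np.random.default_rng(1).normal(0, 0.01, r.size)
model = beam_model(r)
amplitude = (model @ data) / (model @ model)
print(round(amplitude, 2))                # ≈ 5.0
```

In the real recipe the fit is of course done on the 2-D map, but the principle (fixed component shapes, fitted amplitude) is the same.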

This change has no effect on the ARCSEC FCFs, but results in a small (~1%) but consistent reduction in the BEAM FCFs. The effect should be small enough to be negligible, and with this change we now have slightly higher confidence in the resulting FCFs than with the old one-component fits. BEAMMATCH FCFs are likewise unaffected by the change, as testing showed they were best fit with profiles not forced to be Gaussian.

You can set the behavior of SCUBA2_CHECK_CAL manually using the FIT_GAUSSIAN parameter in your config file. The default value of 2 uses a two-component Gaussian fit (when S/N > 100), a value of 1 uses a one-component Gaussian fit, and a value of 0 recovers the old behavior of a one-component fit not constrained to be a Gaussian.
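For example, assuming the usual PICARD recipe-parameter file syntax (passed to PICARD with -recpars), the setting would look something like:

```
[SCUBA2_CHECK_CAL]
FIT_GAUSSIAN = 2
```

Here 2 is the default two-component fit; set 1 or 0 as described above.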

At the moment, to use this feature you will need to update your Starlink to the current development version. Linux users can simply rsync Starlink from JAC following the instructions at http://starlink.jach.hawaii.edu/starlink/rsyncStarlink

2013-07-18

Inclusion of CSO Tau Fits for Determining FCFs

During the first few months of 2013 the WVM at JCMT had several periods of instability where it was unreliable for determining the value of tau. Because of this, members of the science team collaborated to produce a collection of smooth polynomial fits to the tau values from the 225-GHz tau meter at the CSO for the affected nights, which can be used to perform extinction correction in place of the WVM tau values.

In the latest version of Starlink (which you can get by rsyncing from /stardev at the JAC), makemap now first checks whether the date of the observation falls on one of the (somewhat sporadic) dates when the WVM was unstable. This happens as long as the ext.tausrc parameter is set to "auto", which it is by default. If the date is one of the affected dates, makemap looks for an available fit to the CSO data in the file specified by the ext.csofit parameter, which by default refers to the collection of CSO fits produced at the JAC. If makemap cannot find a fit for an observation in the specified path, it will print a warning that neither a fit nor WVM data is available and refuse to reduce the observation, though this should never happen in normal operation.
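In configuration terms, the relevant defaults look roughly like this (an illustrative fragment; the parameter names are as described above, but the path shown is a placeholder, not the real default):

```
ext.tausrc = auto                  # use WVM tau, falling back to CSO fits on affected dates
ext.csofit = <path/to/csofit/file> # defaults to the collection of fits shipped with Starlink
```

You should not normally need to change either value.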

If you have observations taken between January 19 and May 14 of this year, using the latest version of Starlink rsynced from /stardev at the JAC will help ensure that you get the best extinction data available.

The graph below shows a comparison between the FCFs derived from WVM data and those derived using the new CSO tau fits over the period of January-May 2013. The blue diamonds represent FCFs from the WVM, and the tan circles are FCFs from the CSO tau fits.


2012-08-16

Updates to FCFs and extinction correction

Over the past couple of months the flux conversion factors and extinction corrections have been updated, and we are now very happy with the answers. The main issue from the user perspective is that the FCF you apply depends critically on which version of SMURF was used to generate the map. At the time of writing, data products downloaded from CADC use an older extinction correction than the one in place for the current SMURF. We plan to update CADC processing shortly, but reprocessing the historical archive will take some time.

We have set up a web page at JAC listing the parameters that should be used and instructions on how to determine which version of the software was used to generate your map:

http://www.jach.hawaii.edu/JCMT/continuum/scuba2/scuba2_relations.html

2012-02-03

SCUBA-2 Calibration: REDUX.

The short story:

  • The heater coupling factors have been adjusted to more realistic values. In practice this does not change the performance of the instrument; however, it does change the absolute value of the FCFs. These values were adjusted in the software in mid-December.
  • The WVM tau algorithm has been fixed and improved. This will not affect you directly, though the nightly plots now look extremely good and are officially used for weather-band determination.
  • This has allowed a new and better calculation of the relations between the 225-GHz tau derived from the WVM and the opacities in the two SCUBA-2 filter bands. They are now as follows:
TAU_[850] = 4.6 * (TAU_[225] - 0.0043)
TAU_[450] = 26.0 * (TAU_[225] - 0.019)
  • The FCFs (flux conversion factors) have been derived for both wavelengths from an extensive reduction of calibrator sources observed over eight months of SCUBA-2 commissioning and science verification observations. They are as follows:
850um:
FCF_[arcsec] = 2.42 +/- 0.15 Jy/pW/arcsec**2
FCF_[peak] = 556 +/- 45 Jy/pW/beam
Beam area = 229 arcsec**2

450um:
FCF_[arcsec] = 6.06 +/- 0.32 Jy/pW/arcsec**2
FCF_[peak] = 606 +/- 55 Jy/pW/beam
Beam area = 97 arcsec**2


  • Reminder on how to calibrate your data:

Other posts discuss how best to reduce your data (and which recipes are needed). The latest software releases (since January 2012) all include extinction correction (with the relations above) and the changed coupling factors. If you reduced your data prior to this, you will need to reduce them again to account for these changes; applying the FCFs reported here to old reductions of your data will give incorrect results.
  • The arcsec FCF: (when you want integrated fluxes)

The arcsec FCF is the factor by which you should multiply your map if you wish to use the calibrated map to do aperture photometry.

  • The peak FCF: (the FCF-formerly-known-as-beam):

This FCF is the number by which to multiply your map when you wish to measure absolute peak fluxes of discrete sources.
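Putting the numbers above together, here is a minimal, purely illustrative sketch of the tau relations and of applying an FCF to a map value in pW (the function and constant names are mine, not part of any Starlink tool):

```python
# Tau relations quoted above: SCUBA-2 filter-band opacity from 225-GHz tau.
def tau_850(tau_225):
    return 4.6 * (tau_225 - 0.0043)

def tau_450(tau_225):
    return 26.0 * (tau_225 - 0.019)

# 850um FCFs quoted above.
FCF_ARCSEC_850 = 2.42   # Jy/pW/arcsec**2 -> use for aperture photometry
FCF_PEAK_850 = 556.0    # Jy/pW/beam     -> use for peak fluxes

def calibrate_peak_850(map_pw):
    """Convert a map (or pixel value) in pW to Jy/beam."""
    return map_pw * FCF_PEAK_850

print(tau_850(0.08))             # a 225-GHz tau of 0.080 -> ~0.348 at 850um
print(calibrate_peak_850(0.01))  # a 0.01 pW peak -> ~5.56 Jy/beam
```

Calibration really is just this multiplication; the subtlety is using the FCF that matches the SMURF version used to make your map, as discussed elsewhere on this blog.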


The whys and the wherefores (the gory details):

  • Reductions of the calibration observations:

Uranus and Mars were used as the primary calibrators for these results, with CRL 618 and CRL 2688 as the predominant secondary calibrators. All of the calibrators were reduced in January 2012 using the updated dimmconfig_bright_compact.lis. The improvements in the recipe (mostly in how it decides when to stop iterating) have resulted in extremely flat maps with almost no evidence of the 'bowling' seen around strong sources in S2SRO reductions. To improve the accuracy of the peak fitting and aperture photometry, the maps were reduced with 1-arcsecond pixels at both wavelengths.

Once the maps were reduced they were analysed using the PICARD script SCUBA2_FCFNEFD. There have been some changes to this script: a few bugs have been fixed, and the average FCFs, the (now empirically derived) beam area, and a few reference fluxes have been adjusted. FCF_beamequiv has been removed entirely, and all calculations of integrated values are now done using AUTOPHOTOM, with a defined aperture and an annulus for background subtraction.

Following extensive analysis to determine the optimal parameters, all calibrators were reduced using a 60" diameter aperture (at both wavelengths) with a background annulus between 90" and 120" from the source position.
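The measurement scheme can be sketched as follows. This is a toy version on synthetic data, assuming the quoted 60", 90" and 120" figures are diameters (so radii of 30", 45" and 60") on a 1" pixel grid; the real analysis is done by AUTOPHOTOM via PICARD, not by code like this.

```python
import numpy as np

def aperture_flux(image, x0, y0, r_ap=30.0, r_in=45.0, r_out=60.0):
    """Background-subtracted flux in a circular aperture (1" pixels assumed)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)
    ap = r <= r_ap                       # source aperture
    ann = (r >= r_in) & (r <= r_out)     # background annulus
    background = np.median(image[ann])   # per-pixel sky level
    return image[ap].sum() - background * ap.sum()

# Fake calibrator: Gaussian source (FWHM ~14") on a flat background of 5.
n = 241
y, x = np.indices((n, n))
sigma = 14.0 / 2.3548
source = 100.0 * np.exp(-((x - 120) ** 2 + (y - 120) ** 2) / (2 * sigma ** 2))
image = source + 5.0

flux = aperture_flux(image, 120, 120)
print(flux, source.sum())  # aperture flux recovers the total source flux
```

The point of the annulus is that any constant (or slowly varying) background cancels out of the aperture sum, which is why the integrated FCF is more robust than the peak FCF.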


  • Questions? Let's provide answers to a few we've already seen:


  • "These FCFs are very different to the old numbers quoted!"

Yes they are. The heater coupling factor change and the new tau relations play a significant part in this. But in addition, the large sample of observations has allowed a much more accurate determination of the beam area. At 450um in particular, optical effects of the telescope mean that the error beam is large and the beam is not Gaussian. This results in an effective FWHM much broader than the 7.5" quoted previously (though that remains the approximate FWHM of a fit to the centre of the beam): taking the error beam into account, it is more like 9.5". The measured (and fitted) peak is therefore relatively lower, requiring a higher FCF to calibrate the peak flux in your data.


  • "There is a lot of scatter when I calculate the 'beam' or peak FCFs for my calibrators (particularly at 450um)"

No kidding. Peak values are obtained either by reading off the peak of the map (in GAIA or by another method) or by fitting the peak using beamfit (as is done in PICARD). The beam shape (particularly at 450um) can be extremely susceptible to changes in focus and atmospheric instability, amongst other things. The integrated value (FCF_arcsec) is more robust against such changes. If you are measuring a peak fit from a calibrator and see a strong deviation from the expected value, things to check are:

- how 'focussed' does the image look? If you see distortion in the shape of a source that should be point-like, or 'shoulders' in the beam, then it is likely that the peak value will be unreliable.

- was the observation taken early in the evening? Focus and atmospheric effects are known to be worst in the early evening hours and sometimes in the morning after sunrise. If you are looking at calibrators, try looking at ones taken later in the night and see if there is improvement.

A 'trap' has been set in PICARD to warn you if the attempted fit to the peak misses the actual peak value by more than 10%. Looking at the fit to the shape also helps in this instance. In any case, the quoted peak FCF value at the top of the post is derived from the arcsec FCF and the empirical beam area derived from nearly 500 observations at both wavelengths and this number has been shown to be robust.
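As a quick sanity check on that derivation, the peak FCF should be approximately the arcsec FCF multiplied by the empirical beam area. Using the rounded values quoted at the top of this post (the published numbers were derived from more precise inputs, so they differ slightly, but well within the stated uncertainties):

```python
fcf_arcsec = {850: 2.42, 450: 6.06}      # Jy/pW/arcsec**2
beam_area = {850: 229.0, 450: 97.0}      # arcsec**2
quoted_peak = {850: 556.0, 450: 606.0}   # Jy/pW/beam, as quoted above

derived_peak = {wl: fcf_arcsec[wl] * beam_area[wl] for wl in (850, 450)}
print(derived_peak)   # ≈ 554 at 850um and ≈ 588 at 450um
```

Both derived values sit comfortably inside the quoted ±45 and ±55 Jy/pW/beam uncertainties.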


  • "How stable are these FCFs? (read: do I need to reduce my own calibrators?)"

Very. The absolute errors at both wavelengths are within 5%, and no significant trends have been seen in the last six months. Instrument performance is being monitored very closely, and any deviations are likely to be noted specifically. However, we do not discourage you from taking calibrators from the nights your data were taken and reducing them yourself; we appreciate the sanity checks! Another handy rule: if you do it to your data, do it to your calibrator. If you have specific methods you plan to use on your data, apply the same methods to your calibrator in order to ensure your calibration is correct. We are now happy to say, though, that these FCFs look stable and correct, so using these numbers should provide you with well-calibrated data.


  • "What happened to FCF_beamequiv?"

FCF_beamequiv is a seductive, evil little value that tempted us to stray to the dark side, albeit temporarily. In essence it was created for comparison with SCUBA performance, but it should never have been used to actively calibrate SCUBA-2 data, as it assumed a perfect Gaussian beam. The statements above explain why this is patently untrue, especially at 450um. The beamequiv number was quoted previously, and incorrectly, as the true FCF, and it is largely the reason that the new (and correct) numbers seem so much larger. We have banished it from PICARD and it shall now be known as the FCF-that-shall-not-be-named.