
2014-07-25

A new Starlink release contains notable updates to the SCUBA-2 configuration files.


The latest Starlink release, 2014A, has been made public. For details, please read the release notes provided at: http://starlink.jach.hawaii.edu/starlink/2014A

As part of this new release we want to highlight one significant update and a couple of new additions to the arsenal of SCUBA-2 reduction config files.

Updates to the bright_extended config file

The config file 'dimmconfig_bright_extended.lis' has always been intended for reducing data containing bright extended sources. It had remained untouched for a couple of years, despite advances in our understanding of SCUBA-2 reduction of bright regions. The config file now contains the following parameters/links:

   ^$STARLINK_DIR/share/smurf/dimmconfig.lis

   numiter=-40
   flt.filt_edge_largescale=480
   ast.zero_snr = 3
   ast.zero_snrlo = 2

   ast.skip = 5
   flt.zero_snr = 5
   flt.zero_snrlo = 3
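
Once chosen, a config file is passed to makemap in the usual way; a minimal sketch (the input list and output names here are illustrative):

```
% makemap in='^infiles.lis' out=map \
          config='^$STARLINK_DIR/share/smurf/dimmconfig_bright_extended.lis'
```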


In previous Starlink releases (e.g. Hikianalia), the bright_extended configuration file contained only the following:

   numiter = -40
   ast.zero_snr = 5

   flt.filt_edge_largescale = 600


New 'FIX' config files

Two new config parameter files have been added. These are intended to be used with one of the existing dimmconfig files. They provide new values for selected parameters, aimed at solving a particular problem ("blobs" in the final map, or very slow convergence).

  • dimmconfig_fix_blobs.lis 
    • These parameters attempt to prevent smooth bright blobs of emission appearing in the final map. They do this by 1) identifying and flagging samples that appear to suffer from ringing, 2) using a soft-edged Butterworth filter in place of the normal hard-edged filter, and 3) rejecting samples for which the separate sub-arrays see a markedly different common-mode signal.
  • dimmconfig_fix_convergence.lis
    • The parameters defined by this file attempt to aid the convergence process, and should be used for maps that will not converge within a reasonable number of iterations.
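
Since these FIX files are meant to be layered on an existing dimmconfig, one way to do that is to create a small config of your own that chains them; a sketch ('myconf.lis' and the particular pairing of files are illustrative):

```shell
# Build a config that layers the convergence fixes on top of
# the bright_extended defaults ("myconf.lis" is an illustrative name).
cat > myconf.lis <<'EOF'
^$STARLINK_DIR/share/smurf/dimmconfig_bright_extended.lis
^$STARLINK_DIR/share/smurf/dimmconfig_fix_convergence.lis
EOF
# Pass it to makemap with:  config='^myconf.lis'
```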

2014-01-16

Updates to SCUBA2_CHECK_CAL: 2-component fitting

Happy New Year to everyone reading this (or Hau‛oli Makahiki Hou as we say in Hawai‛i), and welcome to 2014.

The PICARD recipe SCUBA2_CHECK_CAL has been updated to use a two-component gaussian profile to fit the beam when determining an FCF from a calibration observation, as long as the signal-to-noise ratio exceeds 100. Previously, a single-component fit was used which was not constrained to be a gaussian. (If the signal-to-noise ratio is not high enough it will fall back on the old behavior.) This two-component fit is based on the FWHM and relative amplitudes of the two components derived in Dempsey et al. (2013).

This change has no effect on the ARCSEC FCFs, but results in a small (~1%) but consistent reduction in the BEAM FCFs. The effect should be small enough to be negligible, and with this change we now have slightly higher confidence in the resulting FCFs than with the old one-component fits. BEAMMATCH FCFs are likewise unaffected by the change, as testing showed they were best fit with profiles not forced to be gaussian.

You can set the behavior of SCUBA2_CHECK_CAL manually using the FIT_GAUSSIAN parameter in your recipe parameter file. The default value of 2 uses a two-component gaussian fit (when S/N > 100), a value of 1 uses a one-component gaussian fit, and a value of 0 recovers the old behavior of a one-component fit not constrained to be a gaussian.
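
For example, to force the one-component gaussian fit, a recipe parameter file and picard call might look like this ('mypar.lis' and the data file names are illustrative):

```shell
# Write a recipe parameter file selecting a one-component gaussian fit.
cat > mypar.lis <<'EOF'
[SCUBA2_CHECK_CAL]
FIT_GAUSSIAN = 1
EOF
# Then run (requires a Starlink/ORAC-DR setup):
#   picard -log s -recpars mypar.lis SCUBA2_CHECK_CAL mycal*.sdf
```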

At the moment, to use this feature you will need to update your Starlink to the current development version. Linux users can simply rsync Starlink from JAC following the instructions at http://starlink.jach.hawaii.edu/starlink/rsyncStarlink

2012-09-12

FIT1D: A new SMURF command for ACSIS data

More time ago than I am willing to admit, I started coding a Starlink routine to fit spectral lines in ACSIS cubes. I got a long way before SCUBA-2 commissioning and calibration put a halt to it, but I have finally managed to finish the program, technically a beta version, as part of the upcoming Kapuahi release of Starlink.

Usage:
% fit1d  in  out  rms  [config]  [userval]  [pardir]  [parndf]  [parcomp]

What distinguishes FIT1D from other profile-fitting routines is that it specifically attempts to deal with two issues: non-gaussian profile shapes, and the fact that ACSIS data-cubes contain many, potentially very different, profiles to fit. Regarding the latter: many fitting routines produce fitted profiles for data-cubes, but FIT1D also produces cubes with the fitted parameters themselves and can use such files as input, giving control over the fit of, in principle, each individual profile in the cube. It is thus possible, for example, to fit broad lines on the nucleus of a galaxy and narrow lines everywhere else. More about that below.

A section has been added to the SMURF documentation in SUN/258 about FIT1D, which I will try to summarize here. FIT1D is generic in that it can fit profiles along any axis of an up to 7-dim hyper-cube, but will be discussed here in the context of a default RA-Dec-Vel ACSIS cube. Note that the routine assumes that data have been baseline-subtracted, using e.g. MFITTREND, i.e. that the profiles have a zero-level at 0. 
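
As a minimal sketch of a run on an ACSIS cube (the file names and rms value are illustrative; MFITTREND is used first to subtract the baselines):

```
% mfittrend in=cube out=cube_bl axis=3 order=1 auto subtract
% fit1d in=cube_bl out=cube_fit rms=0.2
```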


Fit1d - Gauss-Hermite shapes as a function of the 3rd-order
      skewness coefficient 'h3' and the 4th-order kurtosis (peakiness)
      coefficient 'h4'. The red box indicates the limits on acceptable
      values for h3 and h4 as defined in the default configuration file. Note
      that the fitted profile is by default restricted to positive values
      and will omit the negative features shown.

Non-gaussian Profiles.

FIT1D essentially re-implements the fitting code for non-gaussian profiles from the GIPSY package (Kapteyn Institute, Groningen, The Netherlands). Function types that can be fitted are Gaussian, Gauss-Hermite, and Voigt profiles. In particular, Gauss-Hermite functions are a powerful extension when fitting profiles that are skewed, peaky, or only approximately gaussian. The figure above shows Gauss-Hermite profiles as a function of the skewness coefficient h3 and the kurtosis (peakiness) coefficient h4. See SUN/258 for further details, but note that the default setting in the configuration file is for FIT1D to suppress the negative features in the fitted profiles and to leave only the positive part of Gauss-Hermites.

Because of their ability to fit distorted shapes, Gauss-Hermites are particularly well suited to "capture" the maximum amount of emission from a cube. The fits can be remarkably accurate, as shown in the figure below of a 3-component fit (i.e. up to 3 spectral lines) using gausshermite2 functions (i.e. fitting both h3 and h4). Collapsing the resulting cube with fitted profiles can thus result in an accurate and almost noise-free white-light or total-emission map.
Fit1d - Black: original profiles; Red: results of a
    3-component Gauss-Hermite2 fit (fitting both h3 and h4)


FIT1D derives its ability to fit a complex line-shape both from the Gauss-Hermite function and from the fact that it can fit multiple (sub-)components to get the best match possible. However, that can make the interpretation of the fits in terms of physical characteristics and quantities difficult, so for those you may also want to fit the line-shape with a single standard Gaussian function.

Component Parameter files

Besides a data-cube with the fitted profiles FIT1D also outputs so-called Component parameter files as NDF extensions in the header of the output file. These can also be copied out as independent data-cubes. There is a file for each component (i.e. line) that was fitted along the profile up to the number of components requested by the user. Each plane of a Component parameter file has an image of the value of a fitted parameter across the field-of-view. For instance, the one resulting from a gaussian fit has images respectively showing the fitted Amplitude, Position (velocity), and FWHM as well as a plane with an id-number of the function used.

Much of the (anticipated) use of FIT1D derives from the fact that Component parameter files can be used as input as well: either to provide initial estimates or fixed values to the fitting routine. The difference between values specified in a Component parameter file and ones declared in a User parameter values file is that the former can vary across the field-of-view, whereas the latter result in the same value being used for all profiles. For spectral-line surveys, for example, the User parameter values file can be used to provide initial estimates of the frequencies or velocities at which lines are expected, or to fix fits at those frequencies.

By manipulating Component parameter files, e.g. ones resulting from an initial fit, the user can customize or correct subsequent fits. In the extreme, a Component parameter file could be made from scratch based on a model and be used to create a spectral-line data-cube with that model (config option: model_only=1) or be used as initial estimates for a fit. Of more practical use, Component parameter files can be used to correct problems associated with a fit, since the art of fitting is not in the fitting algorithm but in providing accurate initial estimates. For instance, the left image below shows a section of an Amplitude plane of a fit where there are problems in a few locations. Setting these locations to bad values and using FILLBAD to interpolate over them, the corrected Component parameter file was used as the initial estimate for a subsequent fit, resulting in the image on the right.

Fit1d - Left: Section of a parameter file showing
      originally fitted amplitudes; Right: Amplitudes after using a
      corrected parameter file from the original fit as initial estimates
      for a subsequent fit.
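
As a sketch of that repair step (names are illustrative, and the bad regions are assumed to have been defined in an ARD file for ARDMASK):

```
% ardmask in=amp1 ardfile=badpix.ard out=amp1_masked
% fillbad in=amp1_masked out=amp1_fixed niter=4
% fit1d in=cube out=cube_fit2 rms=0.2 parndf=amp1_fixed
```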

More creative options are possible: after an initial fit with a gaussian, the function id can be changed to a gausshermite1 in part of the field and the resulting file used as initial estimates for a subsequent fit, to account for skewed profiles there. Similarly, the initial guess of the FWHM can be made wide on e.g. the nucleus of a galaxy while leaving it narrower outside. As another example, the fit of multiple components can be limited to part of the field by setting the parameter file for the second and higher components to bad values outside the relevant region (multiple Component parameter files can be used as input: one for each component to be fitted).

In conclusion: please remember that this is a beta release and that you may run into unanticipated issues. Also, the chosen limits in the configuration file may need tweaking. If an initial fit looks poor, try adjusting minamp (in units of rms!) or, in particular, minfwhm (in units of pixels!) in the configuration file (see: $SMURF_DIR/smurf_fit1d.def). Also use range to limit the fit to a relevant region of the spectrum.

The released implementation of FIT1D can fit up to 7 components per profile per run, but the output of multiple runs, each covering a range in velocities or frequencies, can be combined. The fit itself is fully multi-threaded and will be much faster on a modern multi-core computer: a 3-component gausshermite2 fit of 1.1 million spectra (a 2 GB input file) took 15 minutes on a dual-core machine with 16 GB of memory, versus 4 minutes on one with 12 cores and 75 GB.

Happy fitting!

Remo



2012-06-17

Displaying an outline of the mask used to create a map

The smurf:makemap command allows a mask to be specified that defines background areas on the sky. These background areas are forced to zero on all but the last iteration in order to suppress spurious large-scale structures. The user may supply an external NDF to be used as the mask, or alternatively makemap can be left to generate its own mask based (for instance) on the evolving SNR at each pixel. Either way, it is often useful to visualise the mask that was used to create a given map. This can be done using KAPPA commands as follows (assuming the map created by makemap is in file "map.sdf"):

% kappa
% lutgrey
% display map
% contour clear=no mode=free heights=0.5 comp=quality map style='colour=red'


This will display the final map as a greyscale image with the mask outlined in red. Other properties of the mask outline (colour, line thickness, etc.) can be specified by including other options in the "style" parameter value when running contour.


2010-05-17

PICARD web page

I've created a new static location for all things PICARD:
http://www.oracdr.org/oracdr/PICARD

The page includes an introduction to PICARD, a list of all available recipes (with more detailed documentation) and a few hints, tips and potential gotchas. I'll keep this page up-to-date, adding new blog entries and/or sending messages to the scuba2dr mailing list as necessary.

2010-05-11

Post-processing SCUBA-2 data with PICARD

Processing raw SCUBA-2 data is done with SMURF or the pipeline (ORAC-DR). What happens after that depends on the user and their level of familiarity with particular software packages. Fortunately, the SCUBA-2 software team is here to help and has come up with a series of standardized tools for performing a number of basic post-processing tasks.

Introduction to PICARD

Our tool of choice is PICARD which makes use of the existing ORAC-DR infrastructure as well as our existing knowledge of writing primitives and recipes for the SCUBA-2 pipeline. PICARD is run from the command line as follows (I'll use % as the prompt):

% picard [options] [RECIPE_NAME] [list of files to process]

For example,

% picard -log sf -recpars mypar.lis CROP_JCMT_IMAGES myfiles*.sdf

The most commonly used options are -log and -recpars (the full list of available options can be seen by running picard -h). Both of these options take additional arguments.

The "-log" option controls where the messages from PICARD are printed: "-log sf" will write messages to the terminal window and to a file called .picard_PID.log (where PID is the process ID for picard) in the output directory. To avoid creating the .picard_PID.log files, just specify "-log s".

The "-recpars" option allows the user to pass in a text file containing parameters which can be used in the given recipe. The permitted parameters are listed with the various recipes below. The format of this text file is a list of `parameter = value' entries, with the recipe name given in square brackets:

[RECIPE_NAME]
PARAM1 = VALUE1
PARAM2 = VALUE2

PICARD writes its output files to the current directory (unless the environment variable ORAC_DATA_OUT is defined in which case that location will be used).

There are currently four recipes which may be of interest:
  • MOSAIC_JCMT_IMAGES
  • CROP_JCMT_IMAGES
  • REMOVE_BACKGROUND
  • SCUBA2_MATCHED_FILTER
These recipes and their parameters are described in more detail below. More recipes will be added as the need arises and as we gain more experience in analyzing SCUBA-2 data. Interested users should update their Starlink installations to get access to these recipes.

MOSAIC_JCMT_IMAGES

Coadd the given files into a single map, taking into account the EXP_TIME and WEIGHTS NDF components. The images are combined using variance weighting and the output variance is derived from the input variances. Currently the recipe uses the KAPPA wcsmosaic task for coadding the images.

The same pixel-spreading method (and any associated parameters) is used for the data and the EXP_TIME and WEIGHTS.

Creates a single output file based on the name of the last file in the list, and with a suffix "_mos" (e.g. mylastfile_mos.sdf).

Available recipe parameters:
[MOSAIC_JCMT_IMAGES]
WCSMOSAIC_METHOD = wcsmosaic pixel-spreading method: see wcsmosaic documentation for available options (default is "nearest")
WCSMOSAIC_PARAMS = additional parameters which may be required for the chosen method
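
A concrete recipe parameter file for this recipe might look like the following (the method and params values are only an example; see the wcsmosaic documentation for valid combinations):

```
% cat mosaic.lis
[MOSAIC_JCMT_IMAGES]
WCSMOSAIC_METHOD = sincsinc
WCSMOSAIC_PARAMS = 0 2
% picard -log s -recpars mosaic.lis MOSAIC_JCMT_IMAGES myfiles*.sdf
```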

CROP_JCMT_IMAGES

Crop images to the map size in the data header (as specified in the Observing Tool), though this size can be overridden using the recipe parameters below.

Creates an output file for each input file, with the suffix "_crop".

Available recipe parameters:
[CROP_JCMT_IMAGES]
MAP_WIDTH = map width in arcsec
MAP_HEIGHT = map height in arcsec

REMOVE_BACKGROUND

Fit and remove large-scale background variations from images using either KAPPA fitsurface or CUPID findback. See the Starlink documentation on both tasks for more information on the parameters shown below. The option exists to mask out a circular region centred on the source before removing the background.

Be aware that the subtraction of the background will add noise proportional to the RMS deviation between the image and the background fit.

Creates an output file for each input file with the suffix "_back".

Available recipe parameters:
[REMOVE_BACKGROUND]
MASK_SOURCE = flag to mask out a circular region on the source before fitting a background (1 = mask out source; 0 = do not mask out source - the default)
APERTURE_RADIUS = radius of aperture (in arcsec) for masking out source (otherwise 30 arcsec)
BACKGROUND_FITMETHOD = the method for fitting the background, either fitsurface (default) or findback
FITSURFACE_FITTYPE = fittype parameter for fitsurface: polynomial (default) or spline
FITSURFACE_FITPAR = parameters for fit, up to 2 numbers corresponding to NXPAR/NYPAR for fitsurface or KNOTS for spline fit
FINDBACK_BOX = size of box in pixels used by findback for smoothing the image

SCUBA2_MATCHED_FILTER

Apply a matched filter to the data to improve point-source detectability. The images and PSFs are smoothed with a broad Gaussian (default is 30 arcsec but can be varied using the recipe parameter below) and subtracted from the originals. The images are convolved with the modified PSFs. The PSF created by the recipe is a Gaussian with FWHM equal to the JCMT beamsize at the appropriate wavelength (i.e. 7.5 or 14 arcsec).

Creates an output file for each input file with the suffix "_mf", and a PSF file "_psf" if the PSF was not specified as a recipe parameter.

Available recipe parameters:
[SCUBA2_MATCHED_FILTER]
PSF_MATCHFILTER = name of a PSF file (NDF format, will be used for all images)
PSF_NORM = switch to determine whether a PSF is normalized to a peak of unity ("peak" - the default) or a sum of unity ("sum")
SMOOTH_FWHM = FWHM in arcsec of Gaussian to smooth image (and PSF)

2010-04-29

How Should I Mosaic My SCUBA-2 Data (Redux)?

Mosaicking your SCUBA-2 data with wcsmosaic or makemos is fine if all you're interested in is the data. But what if you want to know the exposure time per pixel in your map, to determine whether the noise is reasonable? That's easy for each map written by makemap: the exposure-time image is stored in the .MORE.SMURF.EXP_TIME NDF extension (which you can view in Gaia). But wcsmosaic doesn't know about this extension, so the EXP_TIME data in a mosaic is only that of the first file.
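
Incidentally, you can inspect the exposure-time image directly from the command line by giving KAPPA tasks the HDS path to the extension, e.g.:

```
% stats map.more.smurf.exp_time
```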

There are two ways to get properly mosaicked data:
  • Use the ORAC-DR pipeline and process from raw;
  • Use the PICARD recipe MOSAIC_JCMT_IMAGES on processed data.

I'll write a post about using the pipeline in the near future but for now I'll highlight the second method.

PICARD comes for free with ORAC-DR (which comes for free with Starlink). It's basically a tool for processing and analyzing reduced data which takes advantage of the same infrastructure used by the pipeline.

At the command line type:

picard -log s MOSAIC_JCMT_IMAGES myfiles*.sdf

It'll tell you what it's doing and exit. This will create a file called "mylastfile_mos.sdf" which has the correct EXP_TIME extension (where "mylastfile" is the name of the last file in the list given to MOSAIC_JCMT_IMAGES). Try it and see.

Note that all the files must be of the same source.

At the moment, it only supports wcsmosaic as the mosaicking task but will support makemos in the future. Keep rsync'ing and one day it'll just be there...

[This post was updated on 20100510 to reflect the change in the recipe name and the file-naming behaviour]

How Should I Mosaic My SCUBA-2 Data?

I've been asked this a number of times so I thought I'd write something down. Currently the SMURF iterative map-maker calculates models for each chunk of data independently (a chunk is a continuous sequence, so in many cases it will be a single observation, but it may be smaller than an observation if a flatfield is inserted mid-way through). It then combines them using a weighted coadd to give you the final map. This means there is nothing to be gained by throwing all your observations at MAKEMAP, apart from it taking much longer to make a map before you can see the result.

In terms of flexibility it is better to make a map from each observation separately. You can then inspect each map before combining them. You may also want to remove any low-frequency structure at this point. To combine these maps into a single map you have two choices using Starlink software:

  1. Use KAPPA WCSMOSAIC with an interpolation or rebinning scheme of your choice. Make sure that you set the VARIANCE parameter to true to enable variance weighting of your mosaic.
  2. If you have already ensured that the maps are made on the same grid (you can use the REF argument in MAKEMAP to ensure this) then you can try CCDPACK MAKEMOS. This can be used to do median or clipped mean stacking and does no interpolation or smoothing. If your images are not aligned on the same grid it is probably better to remake them so that they are, but if that is difficult you can use the KAPPA WCSALIGN command to match up the grids first. Use the GENVAR and USEVAR parameters in MAKEMOS.
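
Hedged sketches of both options ('maps.lis' is an illustrative text file listing the input maps, one per line):

```
% wcsmosaic in=^maps.lis out=mosaic ref=! variance=true
% makemos in=^maps.lis out=mosaic_mm method=median genvar=true usevar=true
```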
In the future the map-maker may be able to do a better job by handling all the data at once itself, instead of relying on external mosaicking, but at the moment this is not the case.

If you want to keep a track of exposure times you need to track that separately. We will cover that in a follow up post.

2010-02-26

Monitoring changes to the DR software

Yesterday I gave instructions on how to retrieve the newest version of the Starlink software but it may not always be clear to people what changes are being made to the software to decide whether you want to get an update. The source code repositories for the Starlink software and ORAC-DR have RSS feeds that you can monitor in your standard news feed reader (e.g. Google Reader). Click on the RSS icon in the URL bar for the Starlink repository and the ORAC-DR repository.

2010-01-20

Starlink release: Hawaiki (Deneb)

Hawaiki has shipped!

http://starlink.jach.hawaii.edu/starlink/Hawaiki

Check out the first (it surely won't be the last) version of the SCUBA-2 data reduction cookbook:

http://starlink.jach.hawaii.edu/docs/sc19.htx/sc19.html

2009-09-04

ORAC-DR and Starlink on Twitter!

Much to Tim and Frossie's chagrin, I've created two Twitter accounts for ORAC-DR and Starlink. I haven't completely sorted out what will be done with them at this time, but for now I'll probably use them to disseminate short tips and tricks for using ORAC-DR and Starlink software.

Follow them at http://twitter.com/oracdr and http://twitter.com/starlinksoft!

2009-09-02

JLS DR telecon - 1st meeting

Attendance: A. Chrysostomou, R. Tilanus, T. Jenness, R. Plume, M. van der Wiel, J. Di Francesco, G. Fuller, B. Cavanagh, H. Thomas, D. Johnstone, H. Roberts, D. Nutter, J. Hatchell, F. Economou

- initial discussion on whether we will have a SCUBA-2 pipeline ready. There will be something in place for shared risks but basic. More development will have to wait until we have all arrays in place as it is not worth sinking any effort into this at this time.

- some people are having issues getting the pipeline installed, and there is a lack of documentation. If people/institutes are having issues installing (any) Starlink software, then please inform the JAC (stardev@jach.hawaii.edu), providing the relevant details.

ACTION 1: JAC will provide information on how to rsync the starlink releases to get latest patches/fixes. Information will also include for which operating system these patches/fixes are available.

DONE(!): Instructions are available on the starlink web site (http://starlink.jach.hawaii.edu/).
To download the most recent release go to: http://starlink.jach.hawaii.edu/starlink/Releases
To keep up to date with the latest fixes and patches go to:
http://starlink.jach.hawaii.edu/starlink/rsyncStarlink


- GAF requested for more statistics to be made available from the QA. GAF will follow up with specific request to Brad (see Action 3 below)

- it was clarified that the summit pipeline (during normal night-time observing) only runs basic QA on calibrations. After the end of observing, all data taken that night is re-reduced by a “nightly pipeline” which executes the full QA and advanced processing. The reduced data products which result from this are shipped to CADC and can be downloaded with (or without) the raw data in the normal way.

ACTION 2a: JAC to make QA log available to observers/co-Is following nightly reduction via the OMP (as a downloadable file).
ACTION 2b: JAC to make a more compact and readable QA report format.

ACTION 3: For SLS to provide JAC (ie Brad) with list of statistics and requirements for their QA, and also what they want for their reduction recipes to do.

- JH raised some existing issues from the GBS: flatfielding (striping) of early HARP data; some bad baselines are not being picked up by QA; although not as prevalent as in older data, spikes are not trapped by the QA; an investigation is needed on how the gridding should best be done

+++ the flatfielding problem is on Brad’s worklist

+++ we need more feedback from the teams on which bad baselines are not being filtered out

+++ de-spiking data is not a problem that JAC has been able to tackle as yet. Part of the issue is that these do not seem to be as prevalent in data any more and observers (PI as well as JLS) are not reporting the issue any longer. GAF reported that spikes are still present but at a small level, which is an issue for SLS who are looking for weak, narrow lines.

ACTION 4: For JLS teams to provide JAC with images/data/log of spikes when they come across them in their data.

- RPT raised a few issues from the NGLS:

+++ need ability to baseline fit both wide and narrow lines in same data set

+++ need ability to restrict e.g. moments analysis to known velocity range.

+++ QA generally fails for (at least) early NGLS data. Will need to investigate this more, but need an easy means to switch it off in recipes. This is easy in the main recipe, but less so in the advanced iterative part.

- there is a blog available for data reduction and pipeline activities (you're probably looking at it right now!): http://pipelinesandarchives.blogspot.com/

- the issue of making the pipeline more controllable through a config file to set parameters was discussed. TJ announced that he is developing infrastructure so that the pipeline can be parameterised

- ACC received several emails prior to the meeting. A common theme was the lack of documentation explaining what the pipeline does to data, and how to use the pipeline. JH repeated this concern at the meeting.

ACTION 5: ACC took an action following the close of the meeting to organise the production of pipeline documentation. These will probably take the form of a detailed account of what the pipeline does, and a separate cookbook which explains how to run the pipeline with the different options available.

ACTION 6: ACC to poll for a date and time for next telecon and make these meeting notes available.

2009-07-27

Starlink Software Collection - Nanahope (Pollux) version released

The Nanahope version of the Starlink Software Collection was just released and can be downloaded from here. Highlights include:

  • GAIA can now visualise 2-D and 3-D clumps created by the CUPID package.
  • GAIA now has full support for the Virtual Observatory and has been modified to support the SAMP protocol to enable it to communicate with other VO tools.
  • Automated provenance propagation can now track HISTORY information in addition to provenance. The PROVSHOW command can now list the history of all of the parents in the processing history. HISLIST (and NDF history propagation) has not been changed and still only examines the history of a single path through the processing.
  • The software can now be built with gfortran 4.4.

2009-07-09

Hierarchical history for NDFs

The recording of processing history has been part of the NDF library for many years. When an application uses one or more input NDFs to create an output NDF, the NDF library creates a record of the application and its parameter values, and stores this record in the output NDF. It also copies all the history information from the "primary" (usually the first) input NDF into the output NDF.

Whilst it was recognised at the time that it would be nice to copy history from all input NDFs, the exponential growth of history information this could cause was seen to be prohibitive. But 16 years is a long time and we typically now have far greater computing resources. So we've taken the plunge and changed things so that history from all input NDFs is copied into the output NDF. However, to preserve backward compatibility, the new facilities are provided by the provenance routines in the NDG library - the NDF library itself remains unchanged.

This means that applications such as KAPPA:HISLIST, etc, that use the NDF library directly to manipulate history information are unchanged. Instead, the extended history information is stored in the PROVENANCE extension of each NDF, and can be examined using the KAPPA PROVSHOW command. Since there can be quite a lot of history information, it is not shown by default - set the new HISTORY parameter to "YES" when running PROVSHOW to change this default behaviour. Needless to say, NDFs created before these changes were made will not contain any extended history.

A common use for this extended history will be finding the value used for a particular parameter when a selected ancestor was created. We're toying with the idea of a GUI that would make this sort of thing easier by allowing an NDF's "family tree" to be navigated and searched, but for the moment the best thing is probably to use grep on the output of PROVSHOW.
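
For example, to find the parameters used by a makemap ancestor (the NDF name is illustrative):

```
% provshow map history=yes | grep -i -A 4 makemap
```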

2009-07-08

GAIA goes all clumpy

In the next release GAIA will display CUPID catalogues and masks so you can inspect your clumps in all their detail. This all works in 2-D and 3-D; you can see more detail on the GAIA support site.

2008-02-04

Processing 3D cubes with FFCLEAN

I've just modified kappa:ffclean so that it can:
1) process 3D cubes. It will do this either by processing the cubes as a set of independent 1D spectra, or as a set of independent 2D images (see new parameter AXES)
2) store the calculated noise level in the output variance array (see new parameter GENVAR)
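
An illustrative call using the new parameters (the particular values are made up for the example):

```
% ffclean in=cube out=cube_clean axes=3 clip='[3,3]' box=51 genvar
```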

This was motivated by my experiments with the new smurf:unmakecube command as a means of getting an estimate of the noise level in each residual spectrum.