2012-12-04
A significant bug affecting multi-chunked data
Change to the way config files work
For example, if your config file contained:
450.flt.filt_edge_largescale=600
flt.filt_edge_largescale=500
then previously the value 600 would have been adopted when processing 450 um data. With the new changes, the value 500 will be adopted instead. That is, unqualified values now take priority over qualified values.
This will be most relevant if your config file inherits from another config file (as almost all do), in which parameters are specified with qualifiers. Previously, in order to override such a value in your own config file, you needed to include the wavelength qualifier. Now you no longer need to include it.
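For instance, a config file that inherits from a qualified parent can now override a value for both wavelengths with a single unqualified line. A sketch (the parent file and its contents are illustrative):

^$STARLINK_DIR/share/smurf/dimmconfig.lis   # parent might set 450.flt.filt_edge_largescale=600
flt.filt_edge_largescale=500                # unqualified, so it now wins at both wavelengths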
These changes have been installed at Hilo. To use them locally, you will need to rsync from Hilo.
2012-10-26
Inside-out map-making
The importsky configuration parameter allows the output map from a previous invocation of MAKEMAP to be supplied (via the REF parameter) as the initial sky estimate for a new invocation. This makes it possible to turn map-making "inside out": instead of a single invocation of MAKEMAP performing many iterations, several invocations each perform one iteration, with the map passed from each invocation to the next.
Speeding things up:
- Caching the cleaned bolometer values. If "exportclean=1" is included in the config file on the first invocation of MAKEMAP, the cleaned time-series data will be written to disk in NDFs that end with the suffix "_cln.sdf". These NDFs can be used in place of the raw data as input for all subsequent invocations of MAKEMAP. In this case you should create a new config file for the second and subsequent invocations and add "doclean=0" to it, so that the cleaned input data will not be re-cleaned.
- Caching the EXT model values. Calculation of the EXT model values happens only once, before the first iteration starts. So if you add "exportndf=ext" to the config file for the first invocation, a set of NDFs with suffix "_ext.sdf" will be created holding the EXT model values. These can be supplied as input to the second and subsequent invocations by adding "ext.import=1" to the config file. Note, MAKEMAP uses fixed pre-defined names for the NDFs when writing and reading EXT model values, so the NDFs created on the first invocation should not be moved or renamed. We also add "noexportsetbad=1" to prevent the EXT values for flagged bolometers from being set bad in the exported NDFs (since a different set of bolometers may be flagged as bad on subsequent invocations).
Other parameter settings:
- Setting "numiter=1" is required to ensure only one iteration is performed by each invocation of MAKEMAP.
- The NOI model, which contains estimates of the noise in each bolometer time stream, is normally calculated at the end of the first iteration, once the final residuals are known. This is no good for us here since each invocation only performs one iteration. The simplest solution is to add "noi.calcfirst=1" to the configuration for every invocation. This forces the noise estimates to be made before the start of the first iteration.
- Care needs to be taken if any AST masking is used. Firstly, since we have set "numiter=1", the first iteration is also the last iteration. So we need to add "ast.zero_notlast=0" to the configuration for every invocation. Without this, no masking would be performed on any of the invocations of MAKEMAP. However, we do want to suppress masking on the final invocation, and so we should revert to the default value of "ast.zero_notlast=1" for the final invocation.
- If an external mask is to be used, it should be supplied as normal as the REF parameter on the first invocation. The output map from the first invocation will have this same mask and so will mask the AST model correctly when supplied as the REF parameter on the second and subsequent invocations.
An example:
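Suppose the map is built using four invocations of MAKEMAP, with the config file for the second invocation saved as "conf2" (as referenced below). The config lines sketched for each invocation are assembled from the settings described above; details not discussed in this post (such as the exact importsky usage) are illustrative rather than definitive - see SUN/258 for the full story.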
First invocation:
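numiter=1            # Perform just one iteration per invocation
noi.calcfirst=1      # Estimate the bolometer noise before the first iteration
ast.zero_notlast=0   # Allow AST masking even though this iteration is "the last"
exportclean=1        # Save the cleaned time-series data (_cln.sdf files)
exportndf=ext        # Save the EXT model values (_ext.sdf files)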
noexportsetbad=1 # Export good EXT values for bad bolometers
Second invocation:
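numiter=1            # Still one iteration per invocation
noi.calcfirst=1
ast.zero_notlast=0
doclean=0            # The _cln.sdf input files are already cleaned
ext.import=1         # Re-use the EXT model values saved by the first invocation
importsky=ref        # Start from the map made by the previous invocation (supplied via REF)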
Third invocation:
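^conf2               # Inherit all the settings in "conf2"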
Last (=fourth) invocation:
^conf2 # Inherit all the settings in "conf2"
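ast.zero_notlast=1   # Revert to the default, so no masking is done on this final iteration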
2012-10-19
Using Multiple Masks
Previously, only a single mask could be used with each model. For example, if the configuration contained:
ast.zero_mask = 1
ast.zero_lowhits = 0.1
the AST.ZERO_MASK setting would take precedence over the AST.ZERO_LOWHITS setting, resulting in the external mask supplied via the REF parameter being used and the AST.ZERO_LOWHITS setting being ignored.
The current behaviour is now to combine such masks together. So in the above example the external mask and the lowhits mask would be combined to create a single mask. The combination is done in a manner determined by the AST.ZERO_UNION parameter. If this parameter is true (i.e. non-zero), a pixel in the combined mask is considered a "source" pixel if it is flagged as a source pixel in any of the individual masks (i.e. the combined source area is the union of the individual source areas). If AST.ZERO_UNION is false (i.e. zero), a pixel in the combined mask is considered a "source" pixel if it is flagged as a source pixel in all of the individual masks (i.e. the combined source area is the intersection of the individual source areas). The default for AST.ZERO_UNION is 1 - that is, the union of the masks is used by default.
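For example, to mask only the area common to the external mask and the lowhits mask (i.e. their intersection), you could use something like the following sketch (the lowhits threshold is illustrative):

ast.zero_mask = 1
ast.zero_lowhits = 0.1
ast.zero_union = 0   # intersect the masks rather than taking their union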
In addition, a different external mask can now be used with each model. This is achieved by using a new syntax for the XXX.ZERO_MASK configuration parameters. Each of these configuration parameters can now be given the name of the ADAM parameter which is to be used to get the corresponding mask. [Just to remind you, "ADAM" parameters are different to configuration parameters - all the configuration parameters are obtained using a single ADAM parameter called CONFIG.] External masks can be specified using any of the ADAM parameters "REF", "MASK2" and "MASK3" ("REF" serves the purpose of "MASK1"). The REF parameter has been around for a long time, but the MASK2 and MASK3 parameters are new.
So for instance, to use different external masks for the AST and FLT models, you could do:
ast.zero_mask = REF
flt.zero_mask = MASK2
and then run makemap as
% makemap ref=ast_mask.sdf mask2=flt_mask.sdf
Or, to use the same mask for both models, you could do:
ast.zero_mask = REF
flt.zero_mask = REF
% makemap ref=mask.sdf
The old XXX.ZERO_MASK syntax is still supported. Supplying a positive integer value causes the mask to be accessed using the REF parameter, as before. The default for each XXX.ZERO_MASK parameter is still zero, meaning that no external mask is used.
One twist to beware of with the new scheme is that the REF parameter is treated somewhat differently to the MASK2 and MASK3 parameters. The REF parameter is still used, as it always has been, to define the pixel grid of the output map. This means that any mask supplied via the REF parameter will, by definition, be aligned with the output map. The same is not true of the MASK2 and MASK3 parameters. If you use these parameters, you must ensure that the mask NDFs are aligned in pixel coordinates with the output map. The easiest way to do this is to create your mask from the output map of a previous, similar run of makemap.
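Since a mask made from a previous map is automatically on the correct pixel grid, one simple way to create one is to threshold that map. A sketch using KAPPA (the 3-sigma cut and the file names are illustrative):

% makesnr in=map out=snr
% thresh in=snr out=flt_mask thrlo=3 newlo=0 thrhi=3 newhi=1

This sets pixels with a signal-to-noise ratio below 3 to zero and those above 3 to one, giving a mask aligned pixel-for-pixel with the map it was made from.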
2012-09-21
SCUBA-2 Data Reduction Manual
http://www.starlink.ac.uk/docs/sc21.htx/sc21.html
An explanation of how the map-maker works is given, including details of the default parameters which control each stage. Specialised configuration files which alter these parameters to suit different science goals are also introduced.
The cookbook guides users through all the post-processing options, from cropping and co-adding maps using PICARD, to applying an FCF and calculating the noise. The science pipeline is also discussed, with instructions for running it on a local machine.
Chapter List:
1. Introduction
2. SCUBA-2 Overview
3. Raw SCUBA-2 Data
4. The Dynamic Iterative Map-Maker
5. Reducing your Data
6. Tweaking the Configuration File
7. Examples of Different Reductions
8. The SCUBA-2 Pipeline
9. SCUBA-2 Data Calibration
Processing CLS data with ORAC-DR
How do I use it?
To use this recipe, initialize the pipeline, create a text file with the names of all the files to process and, if necessary, a file with recipe parameters. Then add the recipe name REDUCE_CLS to the usual command line:
% oracdr -loop file -files filenames.lis -recpars params.ini
-log sf -nodisplay REDUCE_CLS
By default the recipe uses the "blank field" makemap config file (though this may be overridden with the MAKEMAP_CONFIG recipe parameter; see below).
The recipe creates a slew of output files with the following suffixes:

_fmos - signal map (one for each observation)
_mappsf - map-filtered PSF (one for each observation). This is the same as the above signal map but with an artificial gaussian added to the time-series (located at the map centre).
_wmos - coadded signal map
_wpsf - coadded map-filtered PSF
_jkmap - jack-knife map
_whiten - whitened signal map
_whitepsf - whitened, map-filtered PSF map
_cal - calibrated, whitened signal map
_mf - above map processed with the matched-filter (using the _whitepsf map as the PSF)
_snr - signal-to-noise map created from the above map (_mf)

The files for individual observations begin with "s", and the coadds (and subsequent products) begin with "gs". The ones you are probably most interested in are the _mf and _snr files.
What does it do?
The recipe works as follows.
- Each observation is processed separately to produce a signal map (_fmos). In addition, each observation is re-processed with an artificial gaussian source added at the map centre (this will be used to create the "map-filtered PSF" image, _mappsf).
- The signal maps are combined using inverse-variance weighting to create a coadded signal map (_wmos). The images with the artificial gaussians added are also coadded to produce the "map-filtered PSF" (_wpsf).
- The data are split into two groups made from alternating observations. Each group is coadded, and the two coadds are subtracted to produce the jack-knife map (_jkmap).
- The SMURF command sc2filtermap is used to estimate the radial angular power spectrum (circular symmetry is assumed) within a region defined by twice the minimum noise in the coadded signal map. (The size of this region is written to the FITS header of the output map under the keyword WHITEBOX.)
- The inverse of this angular power spectrum is applied to the coadded signal and PSF images (_whiten and _whitepsf) to remove residual low spatial-frequency noise.
- The amplitude of the fake source in _whitepsf is compared with the input value to derive a corrected FCF. This new FCF is used to calibrate the whitened signal map, creating the _cal map.
- A matched filter is applied to _cal, using _whitepsf as the PSF, to create _mf.
- Finally, a signal-to-noise ratio image is created (_snr).
What parameters are available?
The following recipe parameters can be used to control the processing:
- FAKEMAP_SCALE - Amplitude of the fake source (in Jy) added to the timeseries to assess the map-making response to a point source.
- MAKEMAP_CONFIG - Name of a config file for use with the SMURF makemap task. The file must exist in the current working directory, $MAKEMAP_CONFIG_DIR, $ORAC_DATA_OUT, $ORAC_DATA_CAL or $STARLINK_DIR/share/smurf.
- MAKEMAP_PIXSIZE - Pixel size in arcsec for the output map. Default is wavelength dependent (4 arcsec at 850 um, 2 arcsec at 450 um).
- WHITEN_BOX - Size of the region used to calculate the angular power spectrum for removing residual low-frequency noise in the data. Default is a square region bounded by the noise being less than twice the minimum value.
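A recipe parameter file for REDUCE_CLS might look like the following sketch; the values are purely illustrative, and the section-plus-keyword layout is the standard ORAC-DR recipe-parameter format:

[REDUCE_CLS]
FAKEMAP_SCALE = 10
MAKEMAP_CONFIG = my_cls_config.lis
MAKEMAP_PIXSIZE = 4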
What if I want to run it again?
In addition to the full pipeline recipe, there is a PICARD recipe called SCUBA2_JACKKNIFE which performs all of the post-map-making steps. This can be used to examine the influence of including or omitting individual observations (say, ones with visible artefacts that the pipeline is not able to trap), to investigate the effect of varying the size of the whitening region, or to trim the images to a specific size before re-running the jack-knife steps. Note that the pipeline must have been run once to produce all the necessary files which go into this recipe, and that the full pipeline recipe must be run again if a different config file or input gaussian amplitude is to be used. The majority of the flexibility in controlling the processing comes after the individual signal maps have been created. As a refresher, running this recipe would mean typing:
% picard -log sf -recpars params.ini SCUBA2_JACKKNIFE myfiles*.sdf
All of the control is through the recipe parameters in the file params.ini. The most important item to note is that you should provide a map-filtered PSF (otherwise a default will be used). However, you already have one of these: the file ending _mappsf.sdf listed above.
- PSF_MATCHFILTER - the name of the map-filtered PSF file
- WHITEN_BOX - Size of the region used to calculate the angular power spectrum for removing residual low-frequency noise in the data. Default is a square region bounded by the noise being less than twice the minimum value.
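So a minimal params.ini for re-running the post-processing might contain (the PSF file name is illustrative):

[SCUBA2_JACKKNIFE]
PSF_MATCHFILTER = s20120501_00012_850_mappsf.sdf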
2012-09-12
FIT1D: A new SMURF command for ACSIS data
A section has been added to the SMURF documentation in SUN/258 about FIT1D, which I will try to summarize here. FIT1D is generic in that it can fit profiles along any axis of a hyper-cube of up to 7 dimensions, but it will be discussed here in the context of a default RA-Dec-Vel ACSIS cube. Note that the routine assumes that the data have been baseline-subtracted (using e.g. MFITTREND), i.e. that the profiles have a zero-level at 0.
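A typical invocation might look like the following sketch; the file names are illustrative, and the exact parameter and config keyword names should be checked against SUN/258:

% fit1d in=acsis_cube out=acsis_cube_fit rms=0.25 config=^myfit.lis

where myfit.lis might contain:

function=gausshermite2   # fit the h3 and h4 terms as well as the gaussian ones
ncomp=3                  # fit up to three components (spectral lines) per profile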
Non-gaussian Profiles.
Because of their ability to fit distorted shapes, Gauss-Hermites are particularly well suited to "capture" the maximum amount of emission from a cube. The fits can be remarkably accurate, as is shown in the figure below, which shows a 3-component fit (i.e. up to 3 spectral lines) using gausshermite2 functions (i.e. fitting both h3 and h4). Collapsing the resulting cube of fitted profiles can thus give an accurate and almost noise-free white-light or total-emission map.
Fit1d - Black: original profiles; Red: results of a 3-component Gauss-Hermite2 fit (fitting both h3 and h4)
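For reference, the gausshermite2 shape fitted here is the standard Gauss-Hermite series (the usual van der Marel & Franx parametrisation; FIT1D's exact normalisation may differ):

F(x) = A exp(-y^2/2) [ 1 + h3 H3(y) + h4 H4(y) ],  with  y = (x - x0)/s

where H3 and H4 are the third- and fourth-order Hermite polynomials, x0 is the line centre and s its width. h3 measures asymmetric (skew-like) departures from a gaussian, while h4 measures symmetric (peaky or flat-topped) departures; a pure gaussian has h3 = h4 = 0.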
FIT1D derives its ability to fit complex line-shapes both from the Gauss-Hermite functions themselves and from the fact that it can fit multiple (sub-)components to get the best match possible. However, this can make interpreting the fits in terms of physical characteristics and quantities difficult, so for those you may also want to fit the line-shape with a single standard Gaussian function.
Component Parameter files
Much of the (anticipated) use of FIT1D derives from the fact that Component parameter files can be used as input as well, either to provide initial estimates or fixed values to the fitting routine. The difference between values specified in a Component parameter file and ones declared in a User parameter values file is that the former can vary across the field-of-view, whereas the latter result in the same value being used for all profiles. For use with spectral-line surveys, for example, the User parameter values file can be used to provide initial estimates of the frequencies or velocities at which lines are expected, or to fix the fits at those frequencies.
By manipulating Component parameter files, e.g. ones resulting from an initial fit, the user can customize or correct subsequent fits. In extremis, a Component parameter file could be made from scratch based on a model, and used either to create a spectral-line data-cube of that model (config option: model_only=1) or as initial estimates for a fit. Of more practical use, Component parameter files can be used to correct problems associated with a fit, since the art of fitting lies not in the fitting algorithm but in providing accurate initial estimates. For instance, the left image below shows a section of an amplitude plane of a fit with problems in a few locations. After setting these locations to bad values and using FILLBAD to interpolate over them, the corrected Component parameter file was used as the initial estimate for a subsequent fit, resulting in the image on the right.
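The repair step might look like this in KAPPA (a sketch: the ARD file describing the bad regions, and the file names, are illustrative):

% ardmask in=amp_plane ardfile=bad_regions.ard out=amp_masked   # flag the suspect fits as bad
% fillbad in=amp_masked out=amp_fixed niter=10                  # interpolate over the bad values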
2012-08-23
SCUBA-2 reference publications webpage
http://www.jach.hawaii.edu/JCMT/continuum/scuba2/scuba2_references.html
This includes an arXiv link to the Dempsey et al. 2012 SPIE paper on SCUBA-2 commissioning, which can be used as the reference for SCUBA-2 calibration.
http://arxiv.org/abs/1208.4622
A trilogy of up-to-date SCUBA-2 papers on the instrument, data reduction and calibration is currently ready for submission. Please check back to the above link periodically; the new links will be added when the papers are in press.
2012-08-16
Updates to FCFs and extinction correction
We have set up a web page at JAC listing the parameters that should be used and instructions on how to determine which version of the software was used to generate your map:
http://www.jach.hawaii.edu/JCMT/continuum/scuba2/scuba2_relations.html
2012-06-17
Displaying an outline of the mask used to create a map
% kappa
% lutgrey
% display map
% contour clear=no mode=free heights=0.5 comp=quality map style='colour=red'
This will display the final map as a greyscale image with the mask outlined in red. Other properties of the mask outline (colour, line thickness, etc.) can be specified by including other options in the "style" parameter value when running contour.
2012-05-31
Preventing self-masks from diverging
2012-05-25
Better core utilisation
2012-05-17
Changes to the common-mode estimation
For extended sources, MAKEMAP usually uses the mean normalised change in map pixel values between iterations as a measure of convergence. This value drops rapidly over the first few iterations, and eventually levels out, typically somewhere between 0.01 and 0.05. However, if MAKEMAP is allowed to continue iterating beyond this point, the mean normalised change between iterations often starts to increase and decrease by large amounts in an erratic manner. Examining the maps shows that these changes reflect the creation of nasty artifacts in the map, which often take the form of individual bolometer tracks. As an example, Dave Nutter saw the following variations in mean normalised change with iterations for one particular observation:
The cause of this wild behaviour seems to be the way the common-mode signal is estimated and used. On each iteration, the common-mode value at each time slice is taken to be the mean of the bolometer values at that time slice, averaged over all remaining bolometers (i.e. bolometers that have not previously been flagged as unusable for one reason or another). Each bolometer time series is then split into blocks of 30 seconds and compared to the corresponding block of the common-mode signal. This comparison takes the form of a least-squares linear fit between the bolometer values and the common-mode values, and generates a gain, offset and correlation coefficient for each bolometer block. Bolometer blocks that look "unusual" - either because they have a low correlation coefficient (i.e. do not look like the common mode) or an unusually low or high gain - are flagged, and all the bolometer values within such blocks are rejected from all further processing.
So on the next iteration, the common mode within each block is estimated from only the subset of bolometers that passed this check on all previous iterations. As iterations pass, more and more bolometer blocks are rejected, often resulting in the common mode in a block being estimated from a completely different subset of bolometers to those of the neighbouring blocks. This produces discontinuities in the common-mode signal at the 30-second block boundaries, which get larger as more iterations are performed. When the common mode is subtracted from the data, these discontinuities are transferred into the residuals, which are then filtered within the FLT model. The sharp edges at the discontinuities cause the FFT filtering to introduce ringing (strong oscillations on the scale length of the filter that extend over long times). The following plot shows an example of these discontinuities in the common-mode estimate:
The horizontal axis is sample index. The red curve is the common mode at iteration 85 and the white curve is the common mode at iteration 99. Due to down-sampling, 30 seconds corresponds to 2400 samples, and discontinuities at intervals of 2400 samples are clearly visible. In addition, extensive oscillations are visible, particularly between samples 48000 and 50000.
In order to avoid this problem, changes to the way the common-mode is estimated and used have been made. The heart of the change is that bolometer blocks that are flagged as unusual are no longer excluded from the estimation of the common-mode on the next iteration - all such flags are cleared before the new COM estimate is made. So the common mode is always based on data from all bolometers, thus avoiding the discontinuities at the block boundaries. Other more minor changes in the code include:
- The common-mode value at each time slice can be estimated using a sigma-clipped mean, rather than a simple mean. The new configuration parameters COM.NITER (the number of iterations) and COM.NSIGMA (the number of standard deviations at which to clip on each iteration) control this algorithm. In practice, sigma-clipping does not seem to add much, and may slow down convergence. At the moment, both of these parameters default to 3. Changing COM.NITER to 1 results in a single simple mean being calculated with no clipping. This may become the default in the near future.
- No attempts are now made to refine the common mode. Previously, once the original COM signal was estimated and unusual bolo-blocks were flagged, a refined COM signal was formed by excluding the flagged bolo-blocks. Further unusual bolo-blocks were then flagged by comparing them with the refined COM signal. This refinement process was repeated until no further bolo-blocks were rejected. In the new code, no attempts are made to refine the original COM signal resulting in the new algorithm being significantly faster than the old algorithm.
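For example, to revert to a single plain mean with no sigma-clipping (as described in the first item above), you could add:

com.niter = 1   # one pass: a simple mean, no clipping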
The new algorithm seems effective at overcoming the problem of divergence illustrated at the start of this post. For instance, with the new code the variation of mean normalised change between iterations for the same data looks as follows:
Note that:
- The wild variations above iteration 80 have gone.
- The curve drops somewhat faster at the start (i.e. fewer iterations are needed to achieve a given maptol).
- Each estimation of the common-mode signal is faster than before due to the lack of the refining process.
- Far fewer samples are rejected from the final map (2.38% of the data is now flagged by the COM model, as opposed to 17.37% previously).
- The resulting maps at iteration 50 and 200 are shown below. There is very little difference between them, and they both look visually like the iteration 50 map from the old algorithm shown above.
2012-05-01
Interrupting makemap using control-C
Pressing control-C once causes makemap to stop iterating when the next iteration completes, and then to create the output map from the results so far. If you want to abort NOW! without a map, rather than waiting for the next iteration to complete, press control-C a second time.
Note, if you are running makemap within a shell script, then the shell may handle the control-C signal itself, leading to some potentially odd behaviour. For instance, the script may appear to terminate immediately, but in fact may leave the makemap process running in the background until the next iteration has completed. Most shells have ways of controlling what happens when an interrupt signal is detected. For instance, the (t)csh has the "onintr" command.
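If you are writing such a script in (t)csh, a minimal sketch of trapping the interrupt yourself might be (file names illustrative):

#!/bin/csh
onintr tidy          # jump to the "tidy" label when control-C is pressed
makemap in=^files.lis out=map config=^conf.lis
exit
tidy:
echo "makemap was interrupted - check whether a map was written"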
2012-04-17
Masking of FLT and COM models
2012-02-03
SCUBA-2 Calibration: REDUX.
- The heater coupling factors have been adjusted to more realistic values. In practice, this does not change the performance of the instrument - however it does change the absolute value of the FCFs. These values were adjusted in the software in mid-December.
- The WVM tau algorithm has been fixed and improved. This will not affect you directly, though the nightly plots now look extremely good and are officially used for weather band determination.
- This has allowed a new and better calculation of the relation between the 225 GHz tau derived from the WVM and the opacities at the two SCUBA-2 filter bands. They are now as follows:
TAU_[450] = 26.0 * (TAU_[225] - 0.019)
- The FCFs (flux conversion factors) have been derived for both wavelengths from an extensive reduction of calibrator sources observed over eight months of SCUBA-2 commissioning and science verification observations. They are as follows:
850um:
FCF_[arcsec] = 2.42 +/- 0.15 Jy/pW/arcsec**2
FCF_[peak] = 556 +/- 45 Jy/pW/beam
Beam area = 229 arcsec**2
450um:
FCF_[arcsec] = 6.06 +/- 0.32 Jy/pW/arcsec**2
FCF_[peak] = 606 +/- 55 Jy/pW/beam
Beam area = 97 arcsec**2
- Reminder on how to calibrate your data:
Other posts discuss how best to reduce your data (and what recipes are needed). The latest software releases (since January 2012) all include extinction correction (with the relations above) and the changed coupling factors. If you reduced your data prior to this, you will need to reduce them again to account for these changes. Applying the FCFs reported here to old reductions of your data will be wrong.
- The arcsec FCF: (when you want integrated fluxes)
The arcsec FCF is the factor by which you should multiply your map if you wish to use the calibrated map to do aperture photometry.
- The peak FCF: (the FCF-formerly-known-as-beam):
This FCF is the number by which to multiply your map when you wish to measure absolute peak fluxes of discrete sources.
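For instance, calibrating an 850um map in Jy/beam with the peak FCF above might look like the following KAPPA sketch (file names illustrative; remember to update the units stored in the NDF as well):

% cmult in=map scalar=556 out=map_cal
% setunits map_cal units='Jy/beam'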
The whys and the wherefores (the gory details):
- Reductions of the calibration observations:
Uranus and Mars were used as the primary calibrators for these results. In addition, CRL 618, CRL 2688 were also predominant secondary calibrators. All of the calibrators were reduced in January 2012 using the updated dimmconfig_bright_compact.lis. The improvements in the recipe (mostly in how it chooses to stop iterating) have resulted in extremely flat maps with nearly no evidence of the 'bowling' seen around strong sources during S2SRO reductions. To improve the accuracy of the peak-fitting and aperture photometry, the maps were reduced with 1 arcsecond pixels at both wavelengths.
Once the maps were reduced, they were analysed using the PICARD script SCUBA2_FCFNEFD. There have been some changes to this script: a few bugs have been fixed, the average FCFs have been adjusted, as has the (now empirically derived) beam area, and a few reference fluxes have been adjusted. FCF_beamequiv has been removed entirely, and all calculations of integrated values are now done using AUTOPHOTOM, with a defined aperture and an annulus for background subtraction.
Following extensive analysis to determine the optimal parameters, all calibration observations were reduced using a 60" diameter aperture (at both wavelengths), with an annulus between 90" and 120" from the source position.
- Questions? Let's provide answers to a few we've already seen:
- "These FCFs are very different to the old numbers quoted!"
Yes they are. The heater coupling factor change and the new tau relations play a significant part in this. But in addition, the large sample of observations has allowed for a much more accurate determination of the beam area. At 450um in particular, optical effects of the telescope show that the error beam is large and the beam is not gaussian. This results in an effective FWHM that is much broader than the 7.5" quoted previously (though that is the approximate FWHM of the fit to the centre of the beam) - it is more like 9.5", taking into account the error beam. Therefore, the measured (and fitted) peak is relatively lower, requiring a higher FCF to calibrate the peak flux in your data.
- "There is a lot of scatter when I calculate the 'beam' or peak FCFs for my calibrators (particularly at 450um)"
No kidding. Peak values are obtained either from reading off the peak of the map (in gaia or by another method) or by fitting to the peak using beamfit (as is done in PICARD). The beam shape (particularly at 450um) can be extremely susceptible to changes in focus and atmospheric instability, amongst other things. The integrated value (FCF_arcsec) is more robust against such changes. If you are measuring a peak fit from a calibrator and see a strong deviation from the expected value things to check are:
- how 'focussed' does the image look? If you see distortion in the shape of a source that should be point-like, or distortion or 'shoulders' in the beam then it is likely that the peak value will be unreliable.
- was the observation taken early in the evening? Focus and atmospheric effects are known to be worst in the early evening hours and sometimes in the morning after sunrise. If you are looking at calibrators, try and look at ones taken later in the night and see if there is improvement.
A 'trap' has been set in PICARD to warn you if the attempted fit to the peak misses the actual peak value by more than 10%. Looking at the fit to the shape also helps in this instance. In any case, the quoted peak FCF value at the top of the post is derived from the arcsec FCF and the empirical beam area derived from nearly 500 observations at both wavelengths and this number has been shown to be robust.
- "How stable are these FCFs? (read: do I need to reduce my own calibrators?)"
Very. The absolute errors at both wavelengths are within 5% and no significant trends have been seen in the last six months. Instrument performance is being monitored very closely and any deviations are likely to be noted specifically. However, we do not discourage you from taking calibrators from the nights your data were taken and reducing them yourself - we appreciate the sanity checks! Another handy rule: if you do it to your data, do it to your calibrator. If you have specific methods you plan to use on your data, apply the same methods to your calibrator in order to ensure your calibration is correct. We are now happy to say, though, that these FCFs look stable and correct, so using these numbers should provide you with well-calibrated data.
- "What happened to FCF_beamequiv?"
FCF_beamequiv is a seductive, evil little value that tempted us to stray to the dark side, albeit temporarily. In essence it was created to use as a comparison to SCUBA performance, but should never have been used to actively calibrate SCUBA-2 data as it assumed a perfect gaussian beam. The statements above explain that this is patently untrue, especially at 450um. The beamequiv number was quoted previously, and incorrectly, as the true FCF, and it is largely the reason that the new (and correct) numbers seem so much larger. We have banished it from PICARD and it shall now be known as the FCF-that-shall-not-be-named.