## Introduction

## Samplesheet input

You will need to create a samplesheet with information about the samples you would like to analyse before running the pipeline. Use this parameter to specify its location. It has to be a comma-separated file with 3 columns and a header row, as shown in the examples below.

```bash
--input '[path to samplesheet file]'
```

Samplesheet header:

```csv title="samplesheet.csv"
sample_id,img_directory,parameter_file
```

| Column | Description |
| ------ | ----------- |
| `sample_id` | Custom sample name. |
| `img_directory` | Full path to the image directory for the sample. |
| `parameter_file` | Full path to the corresponding parameter file for the analysis. |

### Single or Multiple samples

The pipeline always takes a samplesheet as input. To process a single sample, include just that one sample in the samplesheet, as in the minimal sketch below (paths are placeholders):
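```csv title="samplesheet.csv"
sample_id,img_directory,parameter_file
TEST1,/path/to/TEST1/,/path/to/params_TEST1.csv
```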

The samplesheet below shows an example for processing multiple samples with the pipeline.

```csv title="samplesheet.csv"
sample_id,img_directory,parameter_file
TEST1,/path/to/TEST1/,/path/to/params_TEST1.csv
TEST2,/path/to/TEST2/,/path/to/params_TEST2.csv
TEST3,/path/to/TEST3/,/path/to/params_TEST3.csv
```

If different samples should be processed with the same parameter set, you can reference the same params.csv for each of them in the samplesheet.
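For example, a minimal sketch in which two samples share one parameter file (paths and file names are placeholders):

```csv title="samplesheet.csv"
sample_id,img_directory,parameter_file
TEST1,/path/to/TEST1/,/path/to/params_shared.csv
TEST2,/path/to/TEST2/,/path/to/params_shared.csv
```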

## Parameter file

In the parameter.csv file you specify the processing parameters for your data and pipeline run. The CSV contains specific fields that are needed for the processes to run, and only the value column should be modified. You can download a template parameter file here. An example row is displayed below:

```csv title="params.csv"
Parameter,Value
z_window,5
```

The individual parameters are explained in the following section.

### Analysis-specific parameters

This section describes every parameter that can be set in the parameter.csv. For the pipeline to run correctly, all named parameters need to be present in the parameter file, and it is recommended to use the provided parameter file (link). Every parameter has a default value that is used if it is not otherwise defined in the parameter.csv.

| Parameter | Description |
| --------- | ----------- |
| `darkfield_intensity` | 1 x n_channels; Constant darkfield intensity value (i.e. the average intensity of an image with nothing present). Default: 101 |
| `img_directory` | |
| `single_sheet` | true, false; Whether a single sheet was used for acquisition. |
| `ls_width` | 1 x n_channels integer; Light sheet width setting for the UltraMicroscope II, as a percentage. Default: 50 |
| `laser_y_displacement` | [-0.5, 0.5]; Displacement of the light sheet along the y axis. A value of 0.5 means the light-sheet center is positioned at the top of the image. Default: 0 |
| `sampling_frequency` | [0,1]; Fraction of images to read and sample from. Setting to 1 means use all images. Default: 0.2 |
| `shading_correction_tiles` | Integer vector; Subset of tile positions for calculating shading correction (row-major order). It is recommended to avoid bright regions. |
| `shading_smoothness` | numeric >= 1; Factor for adjusting the smoothness of shading correction. Greater values lead to a smoother flatfield image. Default: 2 |
| `shading_intensity` | numeric >= 1; Factor for adjusting the total effect of shading correction. Greater values lead to a smaller overall adjustment. Default: 1 |
| `update_z_adjustment` | true, false; Update z adjustment steps with new parameters. Otherwise the pipeline will search for previously calculated parameters. Default: false |
| `z_positions` | integer or numeric; Sampling positions along adjacent image stacks to determine z displacement. If <1, uses a fraction of all images. Set to 0 for no adjustment, only if you're confident tiles are aligned along the z dimension. Default: 0.01 |
| `z_window` | integer; Search window for finding corresponding tiles (i.e. +/- n z positions). Default: 5 |
| `z_initial` | 1 x (n_channels-1) integer; Predicted initial z displacement between the reference channel and a secondary channel. Default: 0 |
| `align_method` | elastix, translation; Channel alignment by rigid 2D translation or by non-rigid B-splines using elastix. Default: translation |
| `align_tiles` | Option to align only certain stacks and not all stacks. Row-major order. Default: '' |
| `align_channels` | Option to align only certain channels (set to >1). Default: '' |
| `align_slices` | Option to align only certain slice ranges. Set as a cell array for non-continuous ranges (i.e. {1:100,200:300}). Default: '' |
| `align_stepsize` | integer; Only for alignment by translation. Number of images sampled for determining translations. Images in between are interpolated. Default: 5 |
| `only_pc` | true, false; Use only phase correlation for registration. This gives only a quick estimate of channel alignment. Default: false |
| `align_chunks` | Only for alignment by elastix. Option to align only certain chunks. Default: '' |
| `elastix_params` | 1 x (n_channels-1) string; Name of folders containing elastix registration parameters. Place in /supplementary_data/elastix_parameter_files/channel_alignment. Default: 32_bins |
| `pre_align` | true, false; (Experimental) Option to pre-align using the translation method prior to non-linear registration. Default: false |
| `max_chunk_size` | integer; Chunk size for elastix alignment. Decreasing may improve precision but can give spurious results. Default: 300 |
| `chunk_pad` | integer; Padding around chunks. Should be set to a value greater than the maximum expected translation in z. Default: 30 |
| `mask_int_threshold` | numeric; Mask intensity threshold for choosing signal pixels in elastix channel alignment. Leave empty to calculate automatically. Default: '' |
| `resample_s` | 1 x 3 integer; Amount of downsampling along each axis. Some downsampling, ideally close to isotropic resolution, is recommended. Default: 3;3;1 |
| `hist_match` | 1 x (n_channels-1) integer; Match histogram bins to the reference channel? If so, specify the number of bins; otherwise leave empty or set to 0. This can be useful for low-contrast images. Default: 64 |
| `sift_refinement` | true, false; Refine stitching using the SIFT algorithm (requires the VLFeat toolbox). Default: true |
| `load_alignment_params` | true, false; Apply channel alignment translations during stitching. Default: true |
| `overlap` | [0,1]; Overlap between tiles as a fraction. Default: 0.20 |
| `stitch_sub_stack` | z positions; For stitching only a certain z range from all the images. Default: '' |
| `stitch_sub_channel` | channel index; For stitching only certain channels. Default: '' |
| `stitch_start_slice` | z index; Start stitching from a specific position. Otherwise this will be optimized. Default: '' |
| `blending_method` | sigmoid, linear, max. Default: sigmoid |
| `sd` | [0,1]; Steepness of sigmoid-based blending. Larger values give more block-like blending. Recommended: ~0.05. Default: 0.05 |
| `border_pad` | integer >= 0; Crops borders during stitching. Increase if images shift significantly between channels, to prevent zero values from entering the stitched image. Default: 25 |
| `rescale_intensities` | true, false; Rescale intensities and apply gamma. Default: false |
| `lowerThresh` | 1 x n_channels numeric; Lower intensity for rescaling. Default: '' |
| `signalThresh` | 1 x n_channels numeric; Rough estimate of the minimal intensity for features of interest. Default: '' |
| `upperThresh` | 1 x n_channels numeric; Upper intensity for rescaling. Default: '' |
| `Gamma` | 1 x n_channels numeric; Gamma intensity adjustment. Default: '' |
| `subtract_background` | true, false; Subtract background (similar to Fiji's rolling-ball background subtraction). Default: false |
| `nuc_radius` | numeric >= 1; Max radius of cell nuclei along x/y in pixels. Also required for DoG filtering. Default: 13 |
| `DoG_img` | true, false; Apply difference-of-Gaussian enhancement of blobs. Default: false |
| `DoG_minmax` | 1 x 2 numeric; Min/max sigma values to take the difference from. Default: 0.8;2 |
| `DoG_factor` | [0,1]; Factor controlling the amount of adjustment to apply. Set to 1 for absolute DoG. Default: 1 |
| `smooth_img` | 1 x n_channels; "gaussian", "median", "guided"; Apply a smoothing filter. Default: false |
| `smooth_sigma` | 1 x n_channels numeric; Size of the smoothing kernel. For median and guided filters, this is the dimension of the kernel. Default: '' |
| `flip_axis` | "none", "horizontal", "vertical", "both"; Flip the image along the horizontal or vertical axis. Default: none |
| `rotate_axis` | 0, 90 or -90; Rotate the image. Default: 0 |
| `group` | Group name/id. Default: TEST;WT;R1 |
| `channel_num` | Channel id. Default: C01;C00 |
| `markers` | Names of the markers present. Default: topro;ctip2 |
| `position_exp` | 1 x 3 string of regular expressions specifying image row (y), column (x), and slice (z). Default: [\d;\d];Z\d** |
| `resolution` | Image resolution in um/voxel. Default: '' |
| `orientation` | 1 x 3 string specifying sample orientation. Default: ail |
| `hemisphere` | "left", "right", "both", "none". Default: left |
| `channel_alignment` | true, update, false; Channel alignment. Default: true |
| `adjust_intensity` | true, update, false; Whether to calculate and apply any of the following intensity adjustments. Intensity adjustment measurements should typically be performed on raw images. Default: update |
| `stitch_images` | true, update, false; 2D iterative stitching. Default: true |
| `use_processed_images` | false or the name of a sub-directory in the output directory (i.e. aligned, stitched, ...); Load previously processed images from the output directory as input images. Default: false |
| `ignore_markers` | Completely ignore a marker during processing steps. Default: Auto |
| `save_images` | true, false; Save images during processing. Otherwise only parameters will be calculated and saved. Default: true |
| `save_samples` | true, false; Save sample results for each major step. Default: true |
| `adjust_tile_shading` | basic, manual, false; Can be 1 x n_channels. Perform shading correction using the BaSiC algorithm or using manual measurements from the UMII microscope. Default: basic |
| `adjust_tile_position` | true, false; Can be 1 x n_channels. Normalize tile intensities by position using overlapping regions. Default: true |
| `resample_images` | true, update, false; Perform image resampling. Default: true |
| `register_images` | true, update, false; Register images to the reference atlas. Default: true |
| `count_nuclei` | true, update, false; Count cell nuclei or other blob objects. Default: true |
| `classify_cells` | true, update, false; Classify cell types for detected nuclei centroids. Default: false |
| `resample_resolution` | Isotropic resample resolution. This is also the resolution at which registration is performed. Default: 25 |
| `resample_channels` | Resample specific channels. If empty, only registration channels will be resampled. Default: '' |
| `use_annotation_mask` | true, false; Use an annotation mask for cell counting. Default: false |
| `annotation_mapping` | atlas, image; Specify whether the annotation file is mapped to the atlas or to the light-sheet image. Default: atlas |
| `annotation_file` | File for storing structure annotation data. Default: '' |
| `annotation_resolution` | Isotropic resolution of the annotation file. Only needed when mapping to the image. Default: 25 |
| `registration_direction` | atlas_to_image, image_to_atlas; Direction in which to perform registration. Default: atlas_to_image |
| `registration_parameters` | default, points, or the name of a folder containing elastix registration parameters in /data/elastix_parameter_files/atlas_registration. Default: default |
| `registration_channels` | integer; Which light-sheet channels to register. More than one can be selected. Default: 1 |
| `registration_prealignment` | image; Pre-align multiple light-sheet images by rigid transformation prior to registration. Default: image |
| `atlas_file` | ara_nissl_25.nii and/or average_template_25.nii and/or a specific atlas .nii file in /data/atlas. Default: 3Drecon-ADMBA-P4_atlasVolume.nii |
| `use_points` | Use points during registration. Default: false |
| `prealign_annotation_index` | Not used. Default: '' |
| `points_file` | Name of a points file to guide registration. Default: '' |
| `save_registered_images` | Whether to save registered images. Default: true |
| `mask_cerebellum_olfactory` | Remove olfactory bulbs and cerebellum from the atlas ROI. Default: true |
| `count_method` | Cell counting method. Default: 3dunet |
| `int_threshold` | Minimum intensity of positive cells. Default: 200 |
| `model_file` | Model file name. Default: '' |
| `gpu` | CUDA visible device index. Default: 0 |
| `chunk_size` | Chunk size in voxels. Default: [112, 112, 32] |
| `chunk_overlap` | Overlap between chunks in voxels. Default: [16, 16, 8] |
| `pred_threshold` | Prediction threshold. Default: 0.5 |
| `normalize_intensity` | Whether to normalize intensities using min/max. Default: true |
| `resample_chunks` | Whether to resample images to match the resolution the model was trained at. Note: this increases computation time. Default: false |
| `tree_radius` | Pixel radius for removing centroids near each other. Default: 2 |
| `acquired_img_resolution` | Resolution of the acquired images. Default: [0.75, 0.75, 4] |
| `trained_img_resolution` | Resolution of the images the model was trained on. Default: [0.75, 0.75, 2.5] |
| `measure_coloc` | Measure intensity of co-localized channels. Default: false |
| `n_channels` | Number of channels. Default: '' |
| `use_mask` | Use a mask. Default: false |
| `mask_file` | Mask file. Default: '' |
| `resample_resolution` | Resolution of resampled images. Default: 25 |
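For instance, a short illustrative excerpt of a params.csv that sets a few of the parameters above to their documented defaults (the full template file contains every parameter listed):

```csv title="params.csv"
Parameter,Value
z_window,5
overlap,0.20
blending_method,sigmoid
nuc_radius,13
count_method,3dunet
```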

## Running the pipeline

The typical command for running the pipeline is as follows:

```bash
nextflow run nf-core/lsmquant --input ./samplesheet.csv --outdir ./results -profile docker
```

This will launch the pipeline with the docker configuration profile. See below for more information about profiles.

Note that the pipeline will create the following files in your working directory:

```
work                # Directory containing the Nextflow working files
<OUTDIR>            # Finished results in the specified location (defined with --outdir)
.nextflow.log       # Log file from Nextflow
# Other Nextflow hidden files, e.g. history of pipeline runs and old logs.
```

If you wish to repeatedly use the same parameters for multiple runs, rather than specifying each flag in the command, you can specify these in a params file.

Pipeline settings can be provided in a YAML or JSON file via `-params-file <file>`.

> **Warning**
> Do not use `-c <file>` to specify parameters as this will result in errors. Custom config files specified with `-c` must only be used for tuning process resource specifications, other infrastructural tweaks (such as output directories), or module arguments (`args`).

The above pipeline run, specified with a params file in YAML format:

```bash
nextflow run nf-core/lsmquant -profile docker -params-file params.yaml
```

with:

```yaml title="params.yaml"
input: './samplesheet.csv'
outdir: './results/'
<...>
```

You can also generate such YAML/JSON files via nf-core/launch.

### Updating the pipeline

When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you’re running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:

```bash
nextflow pull nf-core/lsmquant
```

### Reproducibility

It is a good idea to specify the pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.

First, go to the nf-core/lsmquant releases page and find the latest pipeline version - numeric only (e.g. 1.3.1). Then specify this when running the pipeline with -r (one hyphen), e.g. -r 1.3.1. Of course, you can switch to another version by changing the number after the -r flag.
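For example, to pin the run to that release (reusing the command from above):

```bash
nextflow run nf-core/lsmquant -r 1.3.1 --input ./samplesheet.csv --outdir ./results -profile docker
```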

This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future. For example, at the bottom of the MultiQC reports.

To further assist in reproducibility, you can share and reuse parameter files to repeat pipeline runs with the same settings without having to write out a command with every single parameter.

> **Tip**
> If you wish to share such a params file (e.g. to upload as supplementary material for an academic publication), make sure NOT to include cluster-specific paths to files or institution-specific profiles.

## Core Nextflow arguments

> **Note**
> These options are part of Nextflow and use a single hyphen (pipeline parameters use a double hyphen).

### `-profile`

Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.

Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Podman, Shifter, Charliecloud, Apptainer, Conda) - see below.

> **Important**
> We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility; however, when this is not possible, Conda is also supported.

The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to check if your system is supported, please see the nf-core/configs documentation.

Note that multiple profiles can be loaded, for example: -profile test,docker - the order of arguments is important! They are loaded in sequence, so later profiles can overwrite earlier profiles.

If -profile is not specified, the pipeline will run locally and expect all software to be installed and available on the PATH. This is not recommended, since it can lead to different results on different machines dependent on the computer environment.

- `test`
  - A profile with a complete configuration for automated testing
  - Includes links to test data so needs no other parameters
- `docker`
  - A generic configuration profile to be used with Docker
- `singularity`
  - A generic configuration profile to be used with Singularity
- `podman`
  - A generic configuration profile to be used with Podman
- `shifter`
  - A generic configuration profile to be used with Shifter
- `charliecloud`
  - A generic configuration profile to be used with Charliecloud
- `apptainer`
  - A generic configuration profile to be used with Apptainer
- `wave`
  - A generic configuration profile to enable Wave containers. Use together with one of the above (requires Nextflow 24.03.0-edge or later).
- `conda`
  - A generic configuration profile to be used with Conda. Please only use Conda as a last resort, i.e. when it's not possible to run the pipeline with Docker, Singularity, Podman, Shifter, Charliecloud, or Apptainer.

### `-resume`

Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously. For input to be considered the same, not only the names must be identical but the files’ contents as well. For more info about this parameter, see this blog post.

You can also supply a run name to resume a specific run: -resume [run-name]. Use the nextflow log command to show previous run names.
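A minimal sketch:

```bash
# list previous runs and their names
nextflow log
# resume a specific run by name
nextflow run nf-core/lsmquant -profile docker -resume <run-name>
```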

### `-c`

Specify the path to a specific config file (this is a core Nextflow command). See the nf-core website documentation for more information.

## Custom configuration

### Resource requests

Whilst the default requirements set within the pipeline will hopefully work for most people and with most input data, you may find that you want to customise the compute resources that the pipeline requests. Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the pipeline steps, if the job exits with any of the error codes specified here it will automatically be resubmitted with a higher resource request (2 x original, then 3 x original). If it still fails after the third attempt then the pipeline execution is stopped.

To change the resource requests, please see the max resources and tuning workflow resources section of the nf-core website.
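As an illustration only (the process name COUNT_NUCLEI is hypothetical; check the pipeline source for the real process names), a resource override in a custom config supplied with -c could look like:

```groovy
// custom.config - a sketch of per-process resource tuning
process {
    withName: 'COUNT_NUCLEI' {
        cpus   = 8
        memory = 64.GB
        time   = 12.h
    }
}
```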

### Custom Containers

In some cases, you may wish to change the container or conda environment used by a pipeline step for a particular tool. By default, nf-core pipelines use containers and software from the biocontainers or bioconda projects. However, in some cases the pipeline-specified version may be out of date.

To use a different container from the default container or conda environment specified in a pipeline, please see the updating tool versions section of the nf-core website.
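For illustration, a sketch of such an override in a custom config (the process name and container tag are hypothetical):

```groovy
// custom.config - swap the container used by one process
process {
    withName: 'COUNT_NUCLEI' {
        container = 'docker.io/yourorg/yourtool:1.2.3'
    }
}
```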

### Custom Tool Arguments

A pipeline might not always support every possible argument or option of a particular tool used in the pipeline. Fortunately, nf-core pipelines give users some freedom to insert additional parameters that the pipeline does not include by default.

To learn how to provide additional arguments to a particular tool of the pipeline, please see the customising tool arguments section of the nf-core website.
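As a sketch (again with a hypothetical process name), nf-core modules typically forward extra arguments set via ext.args in a custom config, so an override could look like:

```groovy
// custom.config - pass an extra command-line flag to one tool
process {
    withName: 'COUNT_NUCLEI' {
        ext.args = '--your-extra-flag'
    }
}
```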

### nf-core/configs

In most cases, you will only need to create a custom config as a one-off. However, if you and others within your organisation are likely to be running nf-core pipelines regularly and need to use the same settings regularly, it may be a good idea to request that your custom config file is uploaded to the nf-core/configs git repository. Before you do this, please test that the config file works with your pipeline of choice using the -c parameter. You can then create a pull request to the nf-core/configs repository with the addition of your config file, the associated documentation file (see examples in nf-core/configs/docs), and an amendment to nfcore_custom.config to include your custom profile.

See the main Nextflow documentation for more information about creating your own configuration files.

If you have any questions or issues please send us a message on Slack on the #configs channel.

## Running in the background

Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.

The Nextflow -bg flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
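For example (the log file name is arbitrary):

```bash
nextflow run nf-core/lsmquant --input ./samplesheet.csv --outdir ./results -profile docker -bg > lsmquant.log
```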

Alternatively, you can use screen / tmux or a similar tool to create a detached session which you can log back into at a later time. Some HPC setups also allow you to run Nextflow within a cluster job submitted to your job scheduler (from where it submits more jobs).

## Nextflow memory requirements

In some cases, the Nextflow Java virtual machine can start to request a large amount of memory. We recommend adding the following line to your environment to limit this (typically in ~/.bashrc or ~/.bash_profile):

```bash
NXF_OPTS='-Xms1g -Xmx4g'
```
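For example, to make the setting persistent across sessions (a sketch; adjust the heap sizes to your machine):

```bash
echo "export NXF_OPTS='-Xms1g -Xmx4g'" >> ~/.bashrc
```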