Installation

To start using the nf-core/slamseq pipeline, follow the steps below:

  1. Install Nextflow
  2. Install the pipeline
  3. Pipeline configuration
  4. Reference genomes
  5. Appendices

1) Install Nextflow

Nextflow runs on most POSIX systems (Linux, macOS, etc.). It can be installed by running the following commands:

# Make sure that Java v8+ is installed:
java -version

# Install Nextflow
curl -fsSL get.nextflow.io | bash

# Add Nextflow binary to your PATH:
mv nextflow ~/bin/
# OR system-wide installation:
# sudo mv nextflow /usr/local/bin
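
If ~/bin is not already on your PATH, the moved binary won't be found. A minimal sketch, assuming a bash shell (adjust for your shell, and add the line to ~/.bashrc to make it permanent):

# Only needed if ~/bin is not already on your PATH (bash syntax):
export PATH="$PATH:$HOME/bin"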

See nextflow.io for further instructions on how to install and configure Nextflow.

2) Install the pipeline

2.1) Automatic

This pipeline itself needs no installation: Nextflow will automatically fetch it from GitHub if nf-core/slamseq is specified as the pipeline name.
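
For example (a sketch; --help prints the pipeline's parameter documentation, as in other nf-core pipelines):

# Nextflow fetches nf-core/slamseq from GitHub on first use and caches it locally
nextflow run nf-core/slamseq --help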

2.2) Offline

The above method requires an internet connection so that Nextflow can download the pipeline files. If you're running on a system that has no internet connection, you'll need to download and transfer the pipeline files manually:

# Download the pipeline source and unpack it where you keep your pipelines:
wget https://github.com/nf-core/slamseq/archive/master.zip
mkdir -p ~/my-pipelines/nf-core/
unzip master.zip -d ~/my-pipelines/nf-core/
# Launch the pipeline from your data directory, pointing at the local copy:
cd ~/my_data/
nextflow run ~/my-pipelines/nf-core/slamseq-master

To stop Nextflow from looking for updates online, you can tell it to run in offline mode by setting the following environment variable in your ~/.bashrc file:

export NXF_OFFLINE='TRUE'

2.3) Development

If you would like to make changes to the pipeline, it's best to make a fork on GitHub and then clone the files. Once cloned you can run the pipeline directly as above.
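
As a sketch of that workflow (replace <your-username> with your GitHub username):

# Fork nf-core/slamseq on GitHub, then clone your fork:
git clone https://github.com/<your-username>/slamseq.git
# Run your local copy directly by path:
nextflow run ./slamseq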

3) Pipeline configuration

By default, the pipeline runs with the standard configuration profile. This uses a number of sensible defaults for process resource requirements and is suitable for running on a single (if reasonably powerful) server. You can see this configuration in conf/base.config.

Be warned of two important points about this default configuration:

  1. The default profile uses the local executor
    • All jobs are run in the login session. If you're using a simple server, this may be fine. If you're using a compute cluster, this is bad as all jobs will run on the head node.
    • See the Nextflow documentation for information about running with other compute backends; most job schedulers are natively supported (see the config sketch after this list for an example).
  2. Nextflow will expect all software to be installed and available on the PATH
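
As an illustration of the executor point above, a custom config file passed with Nextflow's -c option can switch jobs to a scheduler. The SLURM settings below are an illustrative sketch, not pipeline defaults, and the queue name is a placeholder:

// my_cluster.config -- illustrative only
process {
  executor = 'slurm'
  queue = 'my-queue'   // replace with a queue/partition on your cluster
}

nextflow run nf-core/slamseq -c my_cluster.config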

3.1) Software deps: Docker

First, install Docker on your system (see the official Docker installation instructions).

Then, running the pipeline with the option -profile standard,docker tells Nextflow to enable Docker for this run. An image containing all of the software requirements will be automatically fetched and used from Docker Hub (https://hub.docker.com/r/nfcore/slamseq).
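
For example (assuming Docker is installed and its daemon is running):

nextflow run nf-core/slamseq -profile standard,docker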

3.2) Software deps: Singularity

If you're not able to use Docker then Singularity is a great alternative. The process is very similar: running the pipeline with the option -profile standard,singularity tells Nextflow to enable Singularity for this run. An image containing all of the software requirements will be automatically fetched and used from Singularity Hub.

If running offline with Singularity, you'll need to download and transfer the Singularity image first:

singularity pull --name nf-core-slamseq.simg shub://nf-core/slamseq

Once transferred, use -with-singularity and specify the path to the image file:

nextflow run /path/to/nf-core-slamseq -with-singularity nf-core-slamseq.simg

Remember to pull an updated version of the Singularity image if you update the pipeline.

3.3) Software deps: Conda

If you're not able to use Docker or Singularity, you can instead use Conda to manage the software requirements. This is slower and less reproducible than the above, but is still better than having to install all requirements yourself! The pipeline ships with a Conda environment file and Nextflow has built-in support for this. To use it, first ensure that you have Conda installed (we recommend Miniconda), then follow the same pattern as above and use the flag -profile standard,conda.
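
A minimal sketch, assuming a Linux x86_64 machine (the Miniconda installer URL and filename may change over time):

# Install Miniconda (if Conda is not already available):
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
# Then run the pipeline with the Conda profile:
nextflow run nf-core/slamseq -profile standard,conda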

Appendices

Running on UPPMAX

To run the pipeline on the Swedish UPPMAX clusters (rackham, irma, bianca, etc.), use the command line flag -profile uppmax. This tells Nextflow to submit jobs using the SLURM job executor with Singularity for software dependencies.

Note that you will need to specify your UPPMAX project ID when running the pipeline. To do this, use the command line flag --project <project_ID>. The pipeline will exit with an error message if you try to run it with the default UPPMAX config profile without a project.
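
For example (b2017123 is the placeholder project ID used below, not a real allocation):

nextflow run nf-core/slamseq -profile uppmax --project b2017123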

Optional Extra: To avoid having to specify your project every time you run Nextflow, you can add it to your personal Nextflow config file instead. Add this line to ~/.nextflow/config:

params.project = 'project_ID' // eg. b2017123