nf-core is a community effort to collect a curated set of analysis pipelines built using Nextflow.
nf-core has three target audiences: facilities, single users and developers. For facilities, it provides highly automated and optimized pipelines that guarantee reproducibility of results for their users. Single users benefit from portable, well-documented and easy-to-use workflows. You can also become a developer and write your own pipeline in Nextflow using the available templates and helper tools.
Nextflow is a workflow manager. It has been developed specifically to ease the creation and execution of bioinformatics pipelines. The benefits of having your pipeline in Nextflow include:
- Built-in GitHub support.
- Compatibility with virtually all computational infrastructures, including all major cluster job schedulers.
- Integrated software dependency management (Docker, Singularity, Conda).
- Portability so you can run your pipeline anywhere: laptop, cluster or cloud.
- Reproducibility of analyses independent of time and computing platform.
Whether your pipeline is a simple BLAST execution or a complex genome annotation pipeline, you can build it with Nextflow.
Nextflow works best when you have an active internet connection, as it is able to fetch all pipeline requirements. If you need to run offline, please see Running offline.
First, make sure that you have all required software installed (Nextflow + Docker / Singularity / Conda). See the installation docs for more information.
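If Nextflow itself is still missing, the standard installer from the Nextflow documentation is a one-liner (it requires Java to be available):

```bash
# Download the Nextflow launcher into the current directory
curl -s https://get.nextflow.io | bash
# Move it onto your PATH, e.g. (directory is just an example):
mv nextflow ~/bin/
```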
Try running the Nextflow "hello world" example to make sure that the tools are working properly:
nextflow run hello
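If everything is working, the run should finish with greetings printed by a few small local tasks, something like the following (the exact order varies between runs, and the surrounding log output depends on your Nextflow version):

```
Hello world!
Bonjour world!
Ciao world!
Hola world!
```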
Configure Nextflow to run on your system.
The simplest way to run is with `-profile docker` (or `singularity`), which will tell Nextflow to execute jobs locally, using Docker (or Singularity) containers to fulfil the software requirements.
Conda is also supported with `-profile conda`. However, this option is not recommended, as reproducibility of the results can't be guaranteed without containerization.
For more complex configuration of Nextflow for your system, please see the Nextflow configuration documentation.
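As a minimal sketch, a personal `~/.nextflow/config` might set a default email address and point jobs at a cluster scheduler (the executor and queue below are assumptions; adjust them for your system):

```groovy
// ~/.nextflow/config — minimal personal configuration sketch
params.email = 'firstname.lastname@example.org'

process {
    executor = 'slurm'  // submit jobs via SLURM; use 'local' to run on the current machine
    queue    = 'short'  // hypothetical queue name on your cluster
}
```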
To test that everything is working properly, try running the tests for your pipeline of interest in the terminal:
nextflow run nf-core/<pipeline_name> -profile test,docker
Replace `<pipeline_name>` with the name of an nf-core pipeline.
If you don't have Docker installed, replace `docker` in the command with either `singularity` or `conda`.
There is no need to download anything first: Nextflow will pull the pipeline code from the GitHub repository automatically and fetch the software requirements too.
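For instance, a test run of a specific pipeline using Singularity instead of Docker could look like this (the pipeline name here is just an example):

```bash
# Run the bundled test data through an nf-core pipeline with Singularity
nextflow run nf-core/rnaseq -profile test,singularity
```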
Read the pipeline documentation to see which command-line parameters are required. These will be specific to your data type and usage.
Launch the pipeline with some real data by omitting the `test` config profile and providing the required pipeline-specific parameters. For example, to run the `methylseq` pipeline you might use the following command:
nextflow run nf-core/methylseq -profile docker --reads 'input_data/*.fastq.gz' --outdir myproj/results --genome GRCh38
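If you prefer to keep long parameter lists out of your shell history, Nextflow can also read pipeline parameters from a YAML or JSON file via its `-params-file` option. A sketch equivalent to the command above (the file name is arbitrary):

```yaml
# params.yaml — parameters for the methylseq example above
reads: 'input_data/*.fastq.gz'
outdir: 'myproj/results'
genome: 'GRCh38'
```

This would then be launched with `nextflow run nf-core/methylseq -profile docker -params-file params.yaml`.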
Once complete, check the pipeline execution and quality control reports. Each pipeline comes with documentation describing the different outputs.
- Hyphens matter! Core Nextflow command-line options use one (`-`) whereas pipeline-specific parameters use two (`--`).
- Use `--email firstname.lastname@example.org` to receive emails when your pipeline run completes.
- Always specify `-r <version-number>` when running, to explicitly use a specific pipeline release. An identical command can then be used in the future to give identical results.
- Use `-resume` to restart pipelines that did not complete. This ensures that successful tasks from the previous run won't be re-executed.
- Use `nextflow log` to find the names of all previous runs in your directory. These can be used with `-resume` to restart specific runs (see the example after this list).
- Be clever with multiple Nextflow configuration locations. For example, use `-profile` for your cluster configuration, `~/.nextflow/config` for your personal configuration such as `params.email`, and a working-directory `nextflow.config` file for reproducible run-specific configuration.
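Putting a few of these tips together, a pinned and resumable run might look like this (the release number and run name are illustrative):

```bash
# Pin the pipeline release so the same command gives the same results later
nextflow run nf-core/methylseq -r 1.6.1 -profile docker --reads 'input_data/*.fastq.gz' --outdir myproj/results --genome GRCh38

# If the run was interrupted, list previous runs in this directory...
nextflow log

# ...and resume a specific one by its run name
nextflow run nf-core/methylseq -r 1.6.1 -profile docker --reads 'input_data/*.fastq.gz' --outdir myproj/results --genome GRCh38 -resume boring_curie
```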
To help you manage your nf-core pipelines and discover updates, we have written some command-line helper tools. These allow you to list all available pipelines and versions, with information about which versions you're running locally. There are also commands to help download pipelines for use offline.
To find out more about these tools, read the Tools page.
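As a quick sketch of what those helper tools look like in practice (subcommand names may differ between tool versions):

```bash
pip install nf-core          # install the nf-core helper tools (also available via Bioconda)
nf-core list                 # list available pipelines, with local version information
nf-core download methylseq   # download a pipeline and its requirements for offline use
```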