nf-core/fetchngs
Pipeline to fetch metadata and raw FastQ files from public databases
These docs are for version 1.10.1. The latest stable release is 1.12.0.
Introduction
The pipeline has been set up to automatically download and process raw FastQ files from both public and private repositories. Identifiers can be provided in a file, one per line, via the `--input` parameter. Currently, the following types of example identifiers are supported:
| SRA | ENA | DDBJ | GEO | Synapse |
|---|---|---|---|---|
| SRR11605097 | ERR4007730 | DRR171822 | GSM4432381 | syn26240435 |
| SRX8171613 | ERX4009132 | DRX162434 | GSE147507 | |
| SRS6531847 | ERS4399630 | DRS090921 | | |
| SAMN14689442 | SAMEA6638373 | SAMD00114846 | | |
| SRP256957 | ERP120836 | DRP004793 | | |
| SRA1068758 | ERA2420837 | DRA008156 | | |
| PRJNA625551 | PRJEB37513 | PRJDB4176 | | |
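As a minimal sketch, an input file simply lists one identifier per line (the identifiers below are taken from the table above; the file name `ids.csv` is just an example):

```bash
# Create an example input file with one identifier per line
cat > ids.csv << EOF
SRR11605097
GSE147507
PRJNA625551
EOF
```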
SRR / ERR / DRR ids
If `SRR`/`ERR`/`DRR` run ids are provided then these will be resolved back to their appropriate `SRX`/`ERX`/`DRX` ids to be able to merge multiple runs from the same experiment. This is conceptually the same as merging multiple libraries sequenced from the same sample.
The final sample information for all identifiers is obtained from the ENA which provides direct download links for FastQ files as well as their associated md5 sums. If download links exist, the files will be downloaded in parallel by FTP. Otherwise they are downloaded using sra-tools.
All of the sample metadata obtained from the ENA will be appended as additional columns to help you manually curate the generated samplesheet before you run the pipeline. You can customise the metadata fields that are appended to the samplesheet via the `--ena_metadata_fields` parameter. The default list of fields used by the pipeline can be found at the top of the `bin/sra_ids_to_runinfo.py` script within the pipeline repo. However, this pipeline requires a minimal set of fields to download FastQ files i.e. `run_accession,experiment_accession,library_layout,fastq_ftp,fastq_md5`. A comprehensive list of accepted metadata fields can be obtained from the ENA API.
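For example, the required minimal fields can be extended with extra ENA fields of interest; the command below is only a sketch, and `sample_title` is an illustrative extra field (check the ENA API for the full list of valid field names):

```bash
nextflow run nf-core/fetchngs \
    --input ids.csv \
    --outdir results \
    --ena_metadata_fields 'run_accession,experiment_accession,library_layout,fastq_ftp,fastq_md5,sample_title' \
    -profile docker
```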
If you have a GEO accession (found in the data availability section of published papers) you can directly download a text file containing the appropriate SRA ids to pass to the pipeline:
- Search for your GEO accession on GEO
- Click `SRA Run Selector` at the bottom of the GEO accession page
- Select the desired samples in the `SRA Run Selector` and then download the `Accession List`

This downloads a text file called `SRR_Acc_List.txt` that can be directly provided to the pipeline once renamed with a .csv extension e.g. `--input SRR_Acc_List.csv`.
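For example, after downloading the accession list (the output directory name here is only illustrative):

```bash
# Rename the downloaded accession list with a .csv extension and pass it to the pipeline
mv SRR_Acc_List.txt SRR_Acc_List.csv
nextflow run nf-core/fetchngs --input SRR_Acc_List.csv --outdir results -profile docker
```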
Synapse ids
Synapse is a collaborative research platform created by Sage Bionetworks. Its aim is to promote reproducible research and responsible data sharing throughout the biomedical community. To download data from Synapse, the Synapse id of the directory containing all files to be downloaded should be provided. The Synapse id is an eleven-character identifier beginning with `syn`.
This Synapse id will then be resolved to the Synapse ids of the corresponding FastQ files contained within the directory. The individual FastQ files are then downloaded in parallel using the `synapse get` command. All Synapse metadata, annotations and data provenance are also downloaded using the `synapse show` command, and are output to a separate metadata file. By default, only the md5sums, file sizes, etags, Synapse ids, file names, and file versions are shown.
In order to download data from Synapse, an account must be created and a user configuration file provided via the `--synapse_config` parameter. For more information about Synapse configuration, please see the Synapse client configuration documentation.
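A minimal sketch of a Synapse download, assuming your Synapse client configuration file lives at the default `~/.synapseConfig` location (adjust the path as needed):

```bash
# ids.csv contains one Synapse directory id per line, e.g. syn26240435
nextflow run nf-core/fetchngs \
    --input ids.csv \
    --outdir results \
    --synapse_config ~/.synapseConfig \
    -profile docker
```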
The final sample information for the FastQ files used for samplesheet generation is obtained from the file name itself. The file names are parsed according to the glob pattern `*{1,2}*`, which returns the sample name, presumed to be the longest possible string matching the glob pattern, with the fewest number of wildcard insertions.
Supported File Names
- Files named `SRR493366_1.fastq` and `SRR493366_2.fastq` will have a sample name of `SRR493366`
- Files named `SRR_493_367_1.fastq` and `SRR_493_367_2.fastq` will have a sample name of `SRR_493_367`
- Files named `filename12_1.fastq` and `filename12_2.fastq` will have a sample name of `filename12`
Samplesheet format
As a bonus, the columns in the auto-created samplesheet can be tailored to be accepted out-of-the-box by selected nf-core pipelines. These currently include:
- nf-core/rnaseq
- nf-core/atacseq
- Illumina processing mode of nf-core/viralrecon
- nf-core/taxprofiler
You can use the `--nf_core_pipeline` parameter to customise this behaviour e.g. `--nf_core_pipeline rnaseq`. More pipelines will be supported in due course as we adopt and standardise samplesheet input across nf-core. It is highly recommended that you double-check that all of the identifiers required by the downstream nf-core pipeline are accurately represented in the samplesheet. For example, the nf-core/atacseq pipeline requires a `replicate` column to be provided in its input samplesheet. However, public databases do not reliably hold information regarding replicates, so you may need to amend these entries if your samplesheet was created by providing `--nf_core_pipeline atacseq`.
From v1.9 of this pipeline the default `strandedness` in the output samplesheet will be set to `auto` when using `--nf_core_pipeline rnaseq`. This will only work with v3.10 onwards of nf-core/rnaseq, which permits the auto-detection of strandedness during pipeline execution. You can change this behaviour with the `--nf_core_rnaseq_strandedness` parameter, which is set to `auto` by default.
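For example, to generate a samplesheet ready for nf-core/rnaseq with strandedness left for the downstream pipeline to detect (the values shown are the defaults described above; input and output paths are placeholders):

```bash
nextflow run nf-core/fetchngs \
    --input ids.csv \
    --outdir results \
    --nf_core_pipeline rnaseq \
    --nf_core_rnaseq_strandedness auto \
    -profile docker
```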
Bypass FTP data download
If FTP connections are blocked on your network, use the `--force_sratools_download` parameter to force the pipeline to download data using sra-tools instead of the ENA FTP.
Downloading dbGAP data with JWT
As of v1.10.0, the SRA Toolkit used in this pipeline can be configured to access protected data from dbGAP using a JWT cart file on a supported cloud computing environment (Amazon Web Services or Google Cloud Platform). The JWT cart file can be specified with `--dbgap_key /path/to/cart.jwt`.
Note that due to the way the pipeline resolves SRA ids down to the experiment level in order to merge multiple runs, your JWT cart file must be generated for all runs in an experiment. Otherwise, when running `prefetch` and `fasterq-dump`, the pipeline will return a `403 Error` when trying to download data for runs under an experiment that are not covered by the provided JWT cart file.
Users can log into the SRA Run Selector, search for the dbGAP study they have been granted access to using its phs identifier, and select all available runs to activate the `JWT Cart` button to download the file.
To test this functionality in your cloud computing environment, you can use the protected dbGAP cloud testing study with experiment accession `SRX512039`:
- On the SRA Run Selector page for `SRX512039`, select the two available runs (`SRR1219865` and `SRR1219902`) and click on `JWT Cart` to download a key file called `cart.jwt` that can be directly provided to the pipeline with `--dbgap_key cart.jwt`
- Click on `Accession List` to download a text file called `SRR_Acc_List.txt` with the SRR ids that can be directly provided to the pipeline with `--input SRR_Acc_List.txt`
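Putting the two files together, a test run against this study could look like the sketch below (run from a supported AWS or GCP environment; the output directory is a placeholder):

```bash
nextflow run nf-core/fetchngs \
    --input SRR_Acc_List.txt \
    --dbgap_key cart.jwt \
    --outdir results \
    -profile docker
```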
Running the pipeline
The typical command for running the pipeline is as follows:
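```bash
# The input file and output directory shown here are placeholders
nextflow run nf-core/fetchngs --input ./ids.csv --outdir ./results -profile docker
```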
This will launch the pipeline with the `docker` configuration profile. See below for more information about profiles.
Note that the pipeline will create the following files in your working directory:
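```
work                # Directory containing the Nextflow working files
<OUTDIR>            # Finished results in specified location (defined with --outdir)
.nextflow.log       # Log file from Nextflow
# Other Nextflow hidden files, e.g. history of pipeline runs and old logs.
```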
If you wish to repeatedly use the same parameters for multiple runs, rather than specifying each flag in the command, you can specify these in a params file.
Pipeline settings can be provided in a `yaml` or `json` file via `-params-file <file>`.
Do not use `-c <file>` to specify parameters as this will result in errors. Custom config files specified with `-c` must only be used for tuning process resource specifications, other infrastructural tweaks (such as output directories), or module arguments (args).
The above pipeline run specified with a params file in yaml format:
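```bash
nextflow run nf-core/fetchngs -profile docker -params-file params.yaml
```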
with `params.yaml` containing:
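```yaml
# Example values; any other pipeline parameters can be added here
input: './ids.csv'
outdir: './results/'
```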
You can also generate such `YAML`/`JSON` files via nf-core/launch.
Updating the pipeline
When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you’re running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:
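```bash
nextflow pull nf-core/fetchngs
```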
Reproducibility
It is a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.
First, go to the nf-core/fetchngs releases page and find the latest pipeline version - numeric only (eg. `1.3.1`). Then specify this when running the pipeline with `-r` (one hyphen) - eg. `-r 1.3.1`. Of course, you can switch to another version by changing the number after the `-r` flag.
This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future. For example, at the bottom of the MultiQC reports.
To further assist in reproducibility, you can share and re-use parameter files to repeat pipeline runs with the same settings without having to write out a command with every single parameter.
If you wish to share such a parameter file (for example, to upload as supplementary material for academic publications), make sure to NOT include cluster-specific paths to files, nor institution-specific profiles.
Core Nextflow arguments
These options are part of Nextflow and use a single hyphen (pipeline parameters use a double-hyphen).
-profile
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.
Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Podman, Shifter, Charliecloud, Apptainer, Conda) - see below.
We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility, however when this is not possible, Conda is also supported.
The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the nf-core/configs documentation.
Note that multiple profiles can be loaded, for example: `-profile test,docker` - the order of arguments is important! They are loaded in sequence, so later profiles can overwrite earlier profiles.
If `-profile` is not specified, the pipeline will run locally and expect all software to be installed and available on the `PATH`. This is not recommended, since it can lead to different results on different machines dependent on the computer environment.
- `test`
  - A profile with a complete configuration for automated testing
  - Includes links to test data so needs no other parameters
- `docker`
  - A generic configuration profile to be used with Docker
- `singularity`
  - A generic configuration profile to be used with Singularity
- `podman`
  - A generic configuration profile to be used with Podman
- `shifter`
  - A generic configuration profile to be used with Shifter
- `charliecloud`
  - A generic configuration profile to be used with Charliecloud
- `apptainer`
  - A generic configuration profile to be used with Apptainer
- `conda`
  - A generic configuration profile to be used with Conda. Please only use Conda as a last resort, i.e. when it's not possible to run the pipeline with Docker, Singularity, Podman, Shifter, Charliecloud, or Apptainer.
-resume
Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously. For input to be considered the same, not only the names must be identical but the files’ contents as well. For more info about this parameter, see this blog post.
You can also supply a run name to resume a specific run: `-resume [run-name]`. Use the `nextflow log` command to show previous run names.
-c
Specify the path to a specific config file (this is a core Nextflow command). See the nf-core website documentation for more information.
Custom configuration
Resource requests
Whilst the default requirements set within the pipeline will hopefully work for most people and with most input data, you may find that you want to customise the compute resources that the pipeline requests. Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with any of the error codes specified here it will automatically be resubmitted with higher requests (2 x original, then 3 x original). If it still fails after the third attempt then the pipeline execution is stopped.
To change the resource requests, please see the max resources and tuning workflow resources section of the nf-core website.
Custom Containers
In some cases you may wish to change which container or conda environment a step of the pipeline uses for a particular tool. By default nf-core pipelines use containers and software from the biocontainers or bioconda projects. However, in some cases the pipeline-specified version may be out of date.
To use a different container from the default container or conda environment specified in a pipeline, please see the updating tool versions section of the nf-core website.
Custom Tool Arguments
A pipeline might not always support every possible argument or option of a particular tool used in the pipeline. Fortunately, nf-core pipelines provide some freedom for users to insert additional parameters that the pipeline does not include by default.
To learn how to provide additional arguments to a particular tool of the pipeline, please see the customising tool arguments section of the nf-core website.
nf-core/configs
In most cases, you will only need to create a custom config as a one-off, but if you and others within your organisation are likely to be running nf-core pipelines regularly and need to use the same settings regularly, it may be a good idea to request that your custom config file is uploaded to the `nf-core/configs` git repository. Before you do this, please test that the config file works with your pipeline of choice using the `-c` parameter. You can then create a pull request to the `nf-core/configs` repository with the addition of your config file, associated documentation file (see examples in `nf-core/configs/docs`), and amending `nfcore_custom.config` to include your custom profile.
See the main Nextflow documentation for more information about creating your own configuration files.
If you have any questions or issues please send us a message on Slack on the `#configs` channel.
Azure Resource Requests
Use the `azurebatch` profile by specifying `-profile azurebatch`.
We recommend providing a compute `params.vm_type` of `Standard_D16_v3` VMs by default, but these options can be changed if required.
Note that the choice of VM size depends on your quota and the overall workload during the analysis. For a thorough list, please refer to the Sizes for virtual machines in Azure documentation.
Running in the background
Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.
The Nextflow `-bg` flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
Alternatively, you can use `screen` / `tmux` or a similar tool to create a detached session which you can log back into at a later time.
Some HPC setups also allow you to run Nextflow within a cluster job submitted to your job scheduler (from where it submits more jobs).
Nextflow memory requirements
In some cases, the Nextflow Java virtual machines can start to request a large amount of memory.
We recommend adding the following line to your environment to limit this (typically in `~/.bashrc` or `~/.bash_profile`):