# nf-core/rnafusion

RNA-seq analysis pipeline for detection of gene fusions
## Pipeline summary
The pipeline is divided into two parts:

- Download and build references
  - specified with the `--references_only` parameter
  - required only once before running the pipeline
  - **Important**: has to be run with each new release
- Detecting fusions
  - Supported tools: `Arriba`, `FusionCatcher`, `STAR-Fusion`, `StringTie`, and `CTAT-SPLICING`
  - QC: `FastQC`, `MultiQC`, `Picard CollectInsertSizeMetrics`, `Picard CollectWgsMetrics`, and `Picard MarkDuplicates`
  - Fusion visualisation: `Arriba`, `fusion-report`, `FusionInspector`, and `vcf_collect`
## Download and build references
The rnafusion pipeline needs references for the fusion detection tools, so downloading these is a requirement.
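A typical reference-building run looks like the sketch below. All paths, the profile, and the COSMIC credential parameters (`--cosmic_username`, `--cosmic_passwd`) are placeholders; verify the exact parameter names against `nextflow run nf-core/rnafusion --help` for your version:

```bash
nextflow run nf-core/rnafusion \
  --references_only \
  --all \
  --cosmic_username <EMAIL> \
  --cosmic_passwd <PASSWORD> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <PATH/TO/REFERENCES> \
  -profile <PROFILE>
```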
> **IMPORTANT**
>
> - This step takes about 24 hours to complete on HPC.
> - Do not provide a samplesheet via the `input` parameter, otherwise the pipeline will run the analysis directly after downloading the references (unless that is what you want).
References for each tool can also be downloaded separately with:
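For example, to build only the STAR-Fusion references (a sketch; the per-tool flags mirror the detection flags described under Starting commands below):

```bash
nextflow run nf-core/rnafusion \
  --references_only \
  --starfusion \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <PATH/TO/REFERENCES> \
  -profile <PROFILE>
```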
If you are not covered by the research COSMIC license and want to avoid using COSMIC, you can provide the additional option `--no_cosmic`.
### Downloading the COSMIC database with SANGER or QIAGEN
#### For academic users
First, register for a free account at COSMIC (https://cancer.sanger.ac.uk/cosmic/register) using a university email address. The account is only activated upon clicking the link in the registration email.
#### For non-academic users
Use credentials from QIAGEN and add the `--qiagen` flag.
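For example (a sketch; the credential parameters are assumed to be the same as for SANGER accounts):

```bash
nextflow run nf-core/rnafusion \
  --references_only \
  --all \
  --cosmic_username <QIAGEN_EMAIL> \
  --cosmic_passwd <QIAGEN_PASSWORD> \
  --qiagen \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <PATH/TO/REFERENCES>
```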
### STAR-Fusion references downloaded vs built
By default, STAR-Fusion references are built. You can also download them from CTAT by using the flag `--starfusion_build FALSE` for both reference building and fusion detection. This allows more flexibility for different organisms, but be aware that downloading the STAR-Fusion references is not recommended, as it is not fully tested.
### Issues with building references
If the process `FUSIONREPORT_DOWNLOAD` times out, it could be due to network restrictions (for example when running on an HPC). As this process is lightweight in CPU, memory, and time, running it on a local machine with the following options might solve the issue:
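A sketch of such a local run (assumes Docker is available locally; the COSMIC credential parameters and paths are placeholders, and the references can afterwards be transferred to the HPC as described below):

```bash
nextflow run nf-core/rnafusion \
  --references_only \
  --cosmic_username <EMAIL> \
  --cosmic_passwd <PASSWORD> \
  --genomes_base <PATH/TO/LOCAL/REFERENCES> \
  --outdir <PATH/TO/LOCAL/REFERENCES> \
  -profile docker
```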
Adjustments to CPU and memory requirements can be made by supplying a custom configuration with `-c /PATH/TO/CUSTOM/CONFIG`. The custom configuration could look like the following (adapt to your local machine):
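A minimal sketch (the process selector and resource values are illustrative; match the selector to the full process name reported in your run):

```groovy
// custom.config: raise resources for the fusion-report download step
process {
    withName: 'FUSIONREPORT_DOWNLOAD' {
        cpus   = 2
        memory = '8.GB'
        time   = '6.h'
    }
}
```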
The `fusion-report` database files (`cosmic.db`, `fusiongdb2.db`, `mitelman.db`) should then be copied into `<REFERENCE_PATH>/references/fusion_report_db` on the HPC.
### Note about FusionCatcher references
The references are only built based on Ensembl version 102. It is currently not possible to use any other version or source.
## Running the pipeline
### Samplesheet input
You will need to create a samplesheet with information about the samples you would like to analyse before running the pipeline. The pipeline will detect whether a sample is single- or paired-end from the samplesheet: the `fastq_2` column is empty for single-end samples. The samplesheet has to be a comma-separated file (`.csv`) but can have as many columns as you desire. There is a strict requirement for the first 4 columns to match those defined in the table below, header row included.
A final samplesheet file consisting of both single- and paired-end data may look something like the one below. This is for 6 samples, where `TREATMENT_REP3` has been sequenced twice.
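A sketch of such a samplesheet (file names are illustrative):

```csv
sample,fastq_1,fastq_2,strandedness
CONTROL_REP1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,forward
CONTROL_REP2,AEG588A2_S2_L002_R1_001.fastq.gz,AEG588A2_S2_L002_R2_001.fastq.gz,forward
CONTROL_REP3,AEG588A3_S3_L002_R1_001.fastq.gz,AEG588A3_S3_L002_R2_001.fastq.gz,forward
TREATMENT_REP1,AEG588A4_S4_L003_R1_001.fastq.gz,,reverse
TREATMENT_REP2,AEG588A5_S5_L003_R1_001.fastq.gz,,reverse
TREATMENT_REP3,AEG588A6_S6_L003_R1_001.fastq.gz,,reverse
TREATMENT_REP3,AEG588A6_S6_L004_R1_001.fastq.gz,,reverse
```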
As you can see above, the `sample` name has to be the same when you have re-sequenced the same sample more than once, e.g. to increase sequencing depth. The pipeline will concatenate the raw reads before performing any downstream analysis.
| Column         | Description                                                                                                                                                                            |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `sample`       | Custom sample name. This entry will be identical for multiple sequencing libraries/runs from the same sample. Spaces in sample names are automatically converted to underscores (`_`). |
| `fastq_1`      | Full path to FastQ file for Illumina short reads 1. File has to be gzipped and have the extension ".fastq.gz" or ".fq.gz".                                                             |
| `fastq_2`      | Full path to FastQ file for Illumina short reads 2. File has to be gzipped and have the extension ".fastq.gz" or ".fq.gz".                                                             |
| `strandedness` | Strandedness: `forward` or `reverse`.                                                                                                                                                  |
### Starting commands
The pipeline can either be run using all fusion detection tools or with individual tools specified. Visualisation tools will be run on all fusions detected. To run all tools (`arriba`, `fusioncatcher`, `starfusion`, `stringtie`, `ctat-splicing`) use the `--all` parameter:
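A sketch (paths and profile are placeholders):

```bash
nextflow run nf-core/rnafusion \
  --all \
  --input <SAMPLESHEET.CSV> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <OUTPUT/PATH> \
  -profile <PROFILE>
```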
To run only a specific detection tool, use `--<tool>`:
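For example, to run only Arriba (a sketch):

```bash
nextflow run nf-core/rnafusion \
  --arriba \
  --input <SAMPLESHEET.CSV> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <OUTPUT/PATH> \
  -profile <PROFILE>
```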
If you are not covered by the research COSMIC license and want to avoid using COSMIC, you can provide the additional option `--no_cosmic`.
> **IMPORTANT**: Either `--all` or `--<tool>` is necessary to run detection tools.
`--genomes_base` should be the path to the directory containing the folder `references/` that was built with `--references_only`.
Note that the pipeline will create the following files in your working directory:
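As with other nf-core pipelines, these are typically:

```
work                # Directory containing the Nextflow working files
<OUTDIR>            # Finished results in the specified location (defined with --outdir)
.nextflow_log       # Log file from Nextflow
# Other Nextflow hidden files, e.g. history of pipeline runs and old logs.
```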
If you wish to repeatedly use the same parameters for multiple runs, rather than specifying each flag in the command, you can specify these in a params file.

Pipeline settings can be provided in a `yaml` or `json` file via `-params-file <file>`.
Do not use `-c <file>` to specify parameters, as this will result in errors. Custom config files specified with `-c` must only be used for tuning process resource specifications, other infrastructural tweaks (such as output directories), or module arguments (args).
The above pipeline run specified with a params file in yaml format:
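For example (the file name `params.yaml` is a user-chosen placeholder):

```bash
nextflow run nf-core/rnafusion -profile docker -params-file params.yaml
```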
with:
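where `params.yaml` might contain (values are placeholders):

```yaml
input: './samplesheet.csv'
outdir: './results/'
genomes_base: '/path/to/references'
all: true
```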
You can also generate such `YAML`/`JSON` files via nf-core/launch.
Conda is not currently supported. The only genome currently supported is GRCh38.
## Options
### Trimming
When the flag `--fastp_trim` is used, `fastp` is used to provide all tools with trimmed reads. Quality and adapter trimming are performed by default. In addition, tail trimming and adapter FASTA specification are possible. Example usage:
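A sketch:

```bash
nextflow run nf-core/rnafusion \
  --all \
  --input <SAMPLESHEET.CSV> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <OUTPUT/PATH> \
  --fastp_trim \
  -profile <PROFILE>
```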
### Filter fusions detected by 2 or more tools
`--tools_cutoff INT` will discard fusions detected by fewer than INT tools, both for display in the fusion-report HTML index and for consideration in FusionInspector. Default = 1, i.e. no filtering.
### Adding custom fusions to consider as well as the detected set: whitelist
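A sketch of usage (assuming the whitelist file is passed via a `--whitelist` parameter; verify the exact name with `--help`):

```bash
nextflow run nf-core/rnafusion \
  --all \
  --input <SAMPLESHEET.CSV> \
  --whitelist <PATH/TO/CUSTOM/FUSION/FILE> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <OUTPUT/PATH> \
  -profile <PROFILE>
```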
The custom fusion file should have the following format:
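i.e. one fusion per line, with gene symbols joined by a double dash (a sketch of the expected format):

```
GENE1--GENE2
GENE3--GENE4
```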
### Running FusionInspector only
FusionInspector can be run as a standalone with:
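A sketch (assuming the `--fusioninspector_only` and `--fusioninspector_fusions` parameters; verify with `--help`):

```bash
nextflow run nf-core/rnafusion \
  --fusioninspector_only \
  --fusioninspector_fusions <PATH/TO/CUSTOM/FUSION/FILE> \
  --input <SAMPLESHEET.CSV> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <OUTPUT/PATH> \
  -profile <PROFILE>
```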
The custom fusion file should have the following format:
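as above, one fusion per line:

```
GENE1--GENE2
GENE3--GENE4
```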
### Skipping QC
This will skip all QC-related processes (Picard metrics collection).
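A sketch of such a run, assuming a `--skip_qc` flag (verify with `--help`):

```bash
nextflow run nf-core/rnafusion \
  --all \
  --input <SAMPLESHEET.CSV> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <OUTPUT/PATH> \
  --skip_qc \
  -profile <PROFILE>
```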
### Skipping visualisation

This will skip all visualisation processes, including `fusion-report`, `FusionInspector` and `Arriba` visualisation.
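A sketch, assuming a `--skip_vis` flag (verify with `--help`):

```bash
nextflow run nf-core/rnafusion \
  --all \
  --input <SAMPLESHEET.CSV> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <OUTPUT/PATH> \
  --skip_vis \
  -profile <PROFILE>
```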
### Optional manual feed-in of fusion files
It is possible to give the output of each tool manually using the argument `--<tool>_fusions PATH/TO/FUSION/FILE`. This feature needs more testing; don't hesitate to open an issue if you encounter problems.
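For example, to feed in an Arriba output file (a sketch following the `--<tool>_fusions` pattern above):

```bash
nextflow run nf-core/rnafusion \
  --all \
  --input <SAMPLESHEET.CSV> \
  --arriba_fusions <PATH/TO/ARRIBA/FUSION/FILE> \
  --genomes_base <PATH/TO/REFERENCES> \
  --outdir <OUTPUT/PATH> \
  -profile <PROFILE>
```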
### Set a different `--limitSjdbInsertNsj` parameter

There are two parameters to increase the `--limitSjdbInsertNsj` parameter if necessary:

- `--fusioncatcher_limitSjdbInsertNsj`, default: 2000000
- `--fusioninspector_limitSjdbInsertNsj`, default: 1000000
### Compress BAM files to CRAM

Use the parameter `--cram` to compress the BAM files to CRAM for specific tools. Options: `arriba`, `starfusion`. Leave no space between options:

- `--cram arriba,starfusion`, default: `[]`
- `--cram arriba`
## Troubleshooting
### GstrandBit issues
An error mentioning `GstrandBit` sometimes occurs. As the message suggests, this is a STAR-related error, and your best bet for solving it is the STAR support forum.
## Updating the pipeline
When you run the pipeline with `nextflow run nf-core/rnafusion`, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you're running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:
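```bash
nextflow pull nf-core/rnafusion
```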
## Reproducibility
It is a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.
First, go to the nf-core/rnafusion releases page and find the latest pipeline version - numeric only (eg. `1.3.1`). Then specify this when running the pipeline with `-r` (one hyphen), eg. `-r 1.3.1`. Of course, you can switch to another version by changing the number after the `-r` flag.
This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future. For example, at the bottom of the MultiQC reports.
To further assist in reproducibility, you can share and reuse parameter files to repeat pipeline runs with the same settings without having to write out a command with every single parameter.

If you wish to share such a params file (for example, to upload as supplementary material for academic publications), make sure NOT to include cluster-specific paths to files or institution-specific profiles.
## Core Nextflow arguments
These options are part of Nextflow and use a single hyphen (pipeline parameters use a double-hyphen).
### `-profile`
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.
Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Podman, Shifter, Charliecloud, Apptainer, Conda) - see below.
We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility, however when this is not possible, Conda is also supported.
The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the nf-core/configs documentation.
Note that multiple profiles can be loaded, for example: `-profile test,docker` - the order of arguments is important! They are loaded in sequence, so later profiles can overwrite earlier profiles.
If `-profile` is not specified, the pipeline will run locally and expect all software to be installed and available on the `PATH`. This is not recommended, since it can lead to different results on different machines dependent on the computer environment.
- `test`
  - A profile with a complete configuration for automated testing
  - Includes links to test data so needs no other parameters
  - **Run with `-stub`**, as all references would need to be downloaded otherwise
- `docker`
  - A generic configuration profile to be used with Docker
- `singularity`
  - A generic configuration profile to be used with Singularity
- `podman`
  - A generic configuration profile to be used with Podman
- `shifter`
  - A generic configuration profile to be used with Shifter
- `charliecloud`
  - A generic configuration profile to be used with Charliecloud
- `apptainer`
  - A generic configuration profile to be used with Apptainer
- `wave`
  - A generic configuration profile to enable Wave containers. Use together with one of the above (requires Nextflow `24.03.0-edge` or later).
- `conda`
  - A generic configuration profile to be used with Conda. Please only use Conda as a last resort, i.e. when it's not possible to run the pipeline with Docker, Singularity, Podman, Shifter, Charliecloud, or Apptainer.
### `-resume`
Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously. For input to be considered the same, not only the names must be identical but the files’ contents as well. For more info about this parameter, see this blog post.
You can also supply a run name to resume a specific run: `-resume [run-name]`. Use the `nextflow log` command to show previous run names.
### `-c`
Specify the path to a specific config file (this is a core Nextflow command). See the nf-core website documentation for more information.
## Custom configuration
### Resource requests
Whilst the default requirements set within the pipeline will hopefully work for most people and with most input data, you may find that you want to customise the compute resources that the pipeline requests. Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with any of the error codes specified here it will automatically be resubmitted with higher requests (2 x original, then 3 x original). If it still fails after the third attempt then the pipeline execution is stopped.
To change the resource requests, please see the max resources and tuning workflow resources section of the nf-core website.
### Custom Containers
In some cases, you may wish to change which container or conda environment a step of the pipeline uses for a particular tool. By default, nf-core pipelines use containers and software from the biocontainers or bioconda projects. However, in some cases the pipeline-specified version may be out of date.
To use a different container from the default container or conda environment specified in a pipeline, please see the updating tool versions section of the nf-core website.
### Custom Tool Arguments
A pipeline might not always support every possible argument or option of a particular tool used in the pipeline. Fortunately, nf-core pipelines provide some freedom to users to insert additional parameters that the pipeline does not include by default.
To learn how to provide additional arguments to a particular tool of the pipeline, please see the customising tool arguments section of the nf-core website.
### nf-core/configs
In most cases, you will only need to create a custom config as a one-off, but if you and others within your organisation are likely to be running nf-core pipelines regularly and need to use the same settings regularly, it may be a good idea to request that your custom config file is uploaded to the `nf-core/configs` git repository. Before you do this, please test that the config file works with your pipeline of choice using the `-c` parameter. You can then create a pull request to the `nf-core/configs` repository with the addition of your config file, the associated documentation file (see examples in `nf-core/configs/docs`), and an amendment to `nfcore_custom.config` to include your custom profile.
See the main Nextflow documentation for more information about creating your own configuration files.
If you have any questions or issues, please send us a message on Slack on the `#configs` channel.
## Running in the background
Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.
The Nextflow `-bg` flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
Alternatively, you can use `screen` / `tmux` or a similar tool to create a detached session which you can log back into at a later time.
Some HPC setups also allow you to run nextflow within a cluster job submitted to your job scheduler (from where it submits more jobs).
## Nextflow memory requirements
In some cases, the Nextflow Java virtual machines can start to request a large amount of memory.
We recommend adding the following line to your environment to limit this (typically in `~/.bashrc` or `~/.bash_profile`):
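```bash
# Limit the Nextflow JVM to 1 GB initial / 4 GB maximum heap
NXF_OPTS='-Xms1g -Xmx4g'
```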