Introduction

nf-core/asenext is a bioinformatics pipeline that performs allele-specific expression (ASE) analysis using:

  • STAR-WASP for allele-aware alignment
  • UMI-tools for molecular deduplication
  • Beagle for haplotype phasing
  • phaser for ASE quantification

The pipeline is designed for paired-end RNA-seq data with UMI barcodes and requires corresponding VCF files containing genetic variants for each sample.

Pipeline summary

The ASENext pipeline performs the following main steps:

  1. Quality Control (FastQC)
  2. VCF Preparation - Process VCF files for STAR and Beagle compatibility
  3. Alignment (STAR with WASP mode)
  4. WASP Filtering - Remove reads with allelic mapping bias
  5. UMI Deduplication (UMI-tools)
  6. BAM Processing (SAMtools)
  7. Variant Phasing (Beagle)
  8. ASE Analysis (phaser)
  9. Report Generation (MultiQC)

Quick start

  1. Install Nextflow (>=21.10.3)

  2. Install any of Docker, Singularity (you can follow this tutorial), Podman, Shifter or Charliecloud for full pipeline reproducibility (you can use Conda both to install Nextflow itself and also to manage software within pipelines. Please only use it within pipelines as a last resort; see docs).

  3. Download the pipeline and test it on a minimal dataset with a single command:

    nextflow run nf-core/asenext -profile test,docker --outdir <OUTDIR>

    Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (test,docker in the example command above). You can chain multiple config profiles in a comma-separated string.

    • The pipeline comes with config profiles called docker, singularity, podman, shifter, charliecloud and conda which instruct the pipeline to use the named tool for software management. For example, -profile test,docker.
    • Please check nf-core/configs to see if a custom config file to run nf-core pipelines on your institution’s infrastructure already exists before creating your own!
    • If you are using singularity, please use the nf-core download command to download images first, before running the pipeline. Set the NXF_SINGULARITY_CACHEDIR or singularity.cacheDir Nextflow options to be able to store and re-use the images from a central location for future pipeline runs.
    • If you are using conda, it is highly recommended to use the NXF_CONDA_CACHEDIR or conda.cacheDir settings to store the environments in a central location for future pipeline runs. An example of setting both cache locations is shown after these steps.
  4. Start running your own analysis!

    nextflow run nf-core/asenext --input samplesheet.csv --outdir <OUTDIR> --genome GRCh38 --chromosome chr11 -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>
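
As mentioned in the notes under step 3, you can keep Singularity images and Conda environments in central cache locations that are re-used across runs. For example (the paths below are illustrative), you could add the following to your shell startup file:

export NXF_SINGULARITY_CACHEDIR=/path/to/singularity_cache
export NXF_CONDA_CACHEDIR=/path/to/conda_cache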

Pipeline parameters

Input/output options

Define where the pipeline should find input data and save output data.

| Parameter | Description | Type | Default | Required | Hidden |
|-----------|-------------|------|---------|----------|--------|
| input | Path to comma-separated file containing information about the samples in the experiment. | string | | | |
| outdir | The output directory where the results will be saved. You have to use absolute paths to storage on Cloud infrastructure. | string | | | |
| email | Email address for completion summary. | string | | | |
| multiqc_title | MultiQC report title. Written as “title” in the MultiQC config file. | string | | | |

Reference genome options

Reference genome related files and options required for the workflow.

| Parameter | Description | Type | Default | Required | Hidden |
|-----------|-------------|------|---------|----------|--------|
| genome | Name of iGenomes reference. | string | | | |
| fasta | Path to FASTA genome file. | string | | | |
| gtf | Path to GTF annotation file. | string | | | |
| star_index | Path to directory containing STAR indices. | string | | | |
| gene_features | Path to BED file with gene features for phaser_gene_ae. | string | | | |
| igenomes_base | Directory / URL base for iGenomes references. | string | s3://ngi-igenomes/igenomes/ | | |
| igenomes_ignore | Do not load the iGenomes reference config. | boolean | | | |

Chromosome and phasing options

Options for chromosome selection and variant phasing.

| Parameter | Description | Type | Default | Required | Hidden |
|-----------|-------------|------|---------|----------|--------|
| chromosome | Chromosome to analyze (e.g. ‘chr11’, ‘chr1’). | string | chr11 | | |
| beagle_ref | Path to Beagle reference panel VCF file for phasing. | string | | | |
| beagle_map | Path to Beagle genetic map file for phasing. | string | | | |
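
For example, to restrict the analysis to chromosome 11 and phase against a Beagle reference panel and genetic map (the file names below are illustrative), the phasing parameters can be combined as follows:

nextflow run nf-core/asenext --input samplesheet.csv --outdir results --genome GRCh38 \
    --chromosome chr11 \
    --beagle_ref reference_panel.chr11.vcf.gz \
    --beagle_map genetic_map.chr11.map \
    -profile docker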

UMI options

Options for UMI processing.

| Parameter | Description | Type | Default | Required | Hidden |
|-----------|-------------|------|---------|----------|--------|
| umi_separator | UMI separator character in read IDs. | string | : | | |
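
With the default separator ':', UMI-tools expects the UMI to be the final separator-delimited field of the read name. An illustrative read ID (the UMI sequence here is hypothetical):

@A01234:123:ABCDEFGHI:1:1101:1000:2000:ACGTACGT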

Institutional config options

Parameters used to describe centralised config profiles. These should not be edited.

| Parameter | Description | Type | Default | Required | Hidden |
|-----------|-------------|------|---------|----------|--------|
| custom_config_version | Git commit id for Institutional configs. | string | master | | |
| custom_config_base | Base directory for Institutional configs. | string | https://raw.githubusercontent.com/nf-core/configs/master | | |
| config_profile_name | Institutional config name. | string | | | |
| config_profile_description | Institutional config description. | string | | | |
| config_profile_contact | Institutional config contact information. | string | | | |
| config_profile_url | Institutional config URL link. | string | | | |

Max job request options

Set the top limit for requested resources for any single job.

| Parameter | Description | Type | Default | Required | Hidden |
|-----------|-------------|------|---------|----------|--------|
| max_cpus | Maximum number of CPUs that can be requested for any single job. | integer | 16 | | |
| max_memory | Maximum amount of memory that can be requested for any single job. | string | 128.GB | | |
| max_time | Maximum amount of time that can be requested for any single job. | string | 240.h | | |
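
These limits can be lowered on the command line to fit a smaller machine, for example:

nextflow run nf-core/asenext --input samplesheet.csv --outdir results -profile docker \
    --max_cpus 8 --max_memory '32.GB' --max_time '24.h'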

Generic options

Less common options for the pipeline, typically set in a config file.

| Parameter | Description | Type | Default | Required | Hidden |
|-----------|-------------|------|---------|----------|--------|
| help | Display help text. | boolean | | | |
| version | Display version and exit. | boolean | | | |
| publish_dir_mode | Method used to save pipeline results to output directory. | string | copy | | |
| email_on_fail | Email address for completion summary, only when pipeline fails. | string | | | |
| plaintext_email | Send plain-text email instead of HTML. | boolean | | | |
| max_multiqc_email_size | File size limit when attaching MultiQC reports to summary emails. | string | 25.MB | | |
| monochrome_logs | Do not use coloured log outputs. | boolean | | | |
| hook_url | Incoming hook URL for messaging service. | string | | | |
| multiqc_config | Custom config file to supply to MultiQC. | string | | | |
| multiqc_logo | Custom logo file to supply to MultiQC. File name must also be set in the MultiQC config file. | string | | | |
| tracedir | Directory to keep pipeline Nextflow logs and reports. | string | ${params.outdir}/pipeline_info | | |
| validate_params | Boolean whether to validate parameters against the schema at runtime. | boolean | true | | |
| show_hidden_params | Show all params when using --help. | boolean | | | |

Samplesheet format

You will need to create a samplesheet with information about the samples you would like to analyse before running the pipeline. Use this parameter to specify its location. It has to be a comma-separated file with 4 columns, and a header row as shown in the examples below.

--input '[path to samplesheet file]'

Full samplesheet

The pipeline is designed for paired-end data, so both FastQ files and a VCF must be provided for each sample. The samplesheet can have as many additional columns as you like, but there is a strict requirement for the first 4 columns to match those defined in the table below.

A final samplesheet file for paired-end reads may look something like the one below; this example shows 3 samples.

sample,fastq_1,fastq_2,vcf
SAMPLE1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,SAMPLE1.vcf.gz
SAMPLE2,AEG588A2_S2_L002_R1_001.fastq.gz,AEG588A2_S2_L002_R2_001.fastq.gz,SAMPLE2.vcf.gz
SAMPLE3,AEG588A3_S3_L002_R1_001.fastq.gz,AEG588A3_S3_L002_R2_001.fastq.gz,SAMPLE3.vcf.gz
| Column | Description |
|--------|-------------|
| sample | Custom sample name. This entry will be identical for multiple sequencing libraries/runs from the same sample. Spaces in sample names are automatically converted to underscores (_). |
| fastq_1 | Full path to FastQ file for Illumina short reads 1. File has to be gzipped and have the extension “.fastq.gz” or “.fq.gz”. |
| fastq_2 | Full path to FastQ file for Illumina short reads 2. File has to be gzipped and have the extension “.fastq.gz” or “.fq.gz”. |
| vcf | Full path to VCF file containing genetic variants for the sample. File can be gzipped (“.vcf.gz”) or uncompressed (“.vcf”). |

An example samplesheet has been provided with the pipeline.

Reference files

The ASENext pipeline requires several reference files to run successfully:

Required files

  1. Reference genome FASTA (--fasta): The reference genome sequence
  2. GTF annotation (--gtf): Gene annotation file
  3. STAR index (--star_index): Pre-built STAR genome index
  4. Gene features BED (--gene_features): BED file with gene coordinates for ASE analysis

Optional files for phasing

  1. Beagle reference panel (--beagle_ref): Population reference for improved phasing
  2. Beagle genetic map (--beagle_map): Recombination map for phasing

Using iGenomes

The pipeline is compatible with reference files from AWS iGenomes. You can use the --genome parameter to automatically configure reference files:

nextflow run nf-core/asenext --input samplesheet.csv --genome GRCh38 --outdir results

Supported genomes include:

  • GRCh38 - Human (Homo sapiens)
  • GRCh37 - Human (Homo sapiens)
  • GRCm38 - Mouse (Mus musculus)

Preparing custom reference files

STAR index generation

If you need to create a STAR index:

STAR --runMode genomeGenerate \
     --genomeDir /path/to/star_index \
     --genomeFastaFiles genome.fa \
     --sjdbGTFfile genes.gtf \
     --runThreadN 8

Gene features BED file

The gene features BED file should contain gene coordinates for ASE analysis. It can be generated from your GTF file:

# Extract gene coordinates from GTF
awk '$3=="gene" {print $1"\t"$4-1"\t"$5"\t"$10"\t"$6"\t"$7}' genes.gtf | \
    sed 's/";//g' | sed 's/"//g' > gene_features.bed
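
The resulting file is a 6-column BED (chromosome, 0-based start, end, gene ID, score, strand). An illustrative line (coordinates and gene ID are placeholders):

chr11   100000  105000  ENSG00000123456 .       +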

Running the pipeline

The typical command for running the pipeline is as follows:

nextflow run nf-core/asenext --input ./samplesheet.csv --outdir ./results --genome GRCh38 --chromosome chr11 -profile docker

This will launch the pipeline with the docker configuration profile. See below for more information about profiles.

Note that the pipeline will create the following files in your working directory:

work                # Directory containing the nextflow working files
<OUTDIR>           # Finished results in specified location (defined with --outdir)
.nextflow.log      # Log file from Nextflow
# Other Nextflow hidden files, e.g. history of pipeline runs and old logs.

If you wish to repeatedly use the same parameters for multiple runs, rather than specifying each flag in the command, you can specify these in a params file.

Pipeline settings can be provided in a yaml or json file via -params-file <file>.

⚠️ Do not use -c <file> to specify parameters as this will result in errors. Custom config files specified with -c must only be used for tuning process resource specifications, other infrastructural tweaks (such as output directories), or module arguments (args).

The above pipeline run specified with a params file in yaml format:

nextflow run nf-core/asenext -params-file params.yaml

with params.yaml containing:

input: './samplesheet.csv'
outdir: './results/'
genome: 'GRCh38'
chromosome: 'chr11'
<...>

You can also generate such YAML/JSON files via nf-core/launch.

Updating the pipeline

When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you’re running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:

nextflow pull nf-core/asenext

Reproducibility

It is a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.

First, go to the nf-core/asenext releases page and find the latest pipeline version (numeric only, no v). Then specify this when running the pipeline with -r (one hyphen) - e.g. -r 1.3.1. Of course, you can switch to another version by changing the number after the -r flag.
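
For example, to pin the typical run shown earlier to release 1.3.1 (substitute the release you want to use):

nextflow run nf-core/asenext -r 1.3.1 --input ./samplesheet.csv --outdir ./results --genome GRCh38 --chromosome chr11 -profile docker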

This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future. For example, at the bottom of the MultiQC reports.

To further assist in reproducibility, you can share and re-use parameter files to repeat pipeline runs with the same settings without having to write out a command with every single parameter.

💡 If you wish to share such a parameter file (for example, as supplementary material for an academic publication), make sure NOT to include cluster-specific paths to files or institution-specific profiles.

Core Nextflow arguments

NB: These options are part of Nextflow and use a single hyphen (pipeline parameters use a double-hyphen).

-profile

Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.

Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Podman, Shifter, Charliecloud, Conda) - see below.

We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility, however when this is not possible, Conda is also supported.

The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the nf-core/configs documentation.

Note that multiple profiles can be loaded, for example: -profile test,docker - the order of arguments is important! They are loaded in sequence, so later profiles can overwrite earlier profiles.

If -profile is not specified, the pipeline will run locally and expect all software to be installed and available on the PATH. This is not recommended, since it can lead to different results on different machines depending on the compute environment.

  • test
    • A profile with a complete configuration for automated testing
    • Includes links to test data so needs no other parameters
  • docker
    • A generic configuration profile to be used with Docker
  • singularity
    • A generic configuration profile to be used with Singularity
  • podman
    • A generic configuration profile to be used with Podman
  • shifter
    • A generic configuration profile to be used with Shifter
  • charliecloud
    • A generic configuration profile to be used with Charliecloud
  • conda
    • A generic configuration profile to be used with Conda. Please only use Conda as a last resort i.e. when it’s not possible to run the pipeline with Docker, Singularity, Podman, Shifter or Charliecloud.

Resuming a workflow

A pipeline might be restarted at any time with the -resume flag to continue exactly where it left off. This requires the Nextflow cache to be intact.

You can also supply a run name to resume a specific run: -resume [run-name]. Use the nextflow log command to show previous run names.
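
For example, list previous runs with nextflow log, then resume either the most recent run or a specific run by name (the run name below is illustrative):

nextflow log
nextflow run nf-core/asenext -profile docker --input samplesheet.csv --outdir results -resume
# or resume a specific earlier run by name:
nextflow run nf-core/asenext -profile docker --input samplesheet.csv --outdir results -resume goofy_einstein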

Custom configuration

Resource requests

Whilst the default requirements set within the pipeline will hopefully work for most people and with most input data, you may find that you want to customise the compute resources that the pipeline requests. Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with any of the error codes specified here it will automatically be resubmitted with higher requests (2 x original, then 3 x original). If it still fails after the third attempt then the pipeline execution is stopped.

To change the resource requests, please see the max resources and tuning workflow resources section of the nf-core website.
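
Once you have written such a custom config following that guidance, supply it at runtime with -c (the file name below is illustrative):

nextflow run nf-core/asenext --input samplesheet.csv --outdir results -profile docker -c custom_resources.config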

Custom Tool Arguments

A pipeline might not always support every possible argument or option of a particular tool used in the pipeline. Fortunately, nf-core pipelines provide some freedom to users to insert additional parameters that the pipeline does not include by default.

To learn how to provide additional arguments to a particular tool of the pipeline, please see the customising tool arguments section of the nf-core website.

Custom Containers

In some cases you may wish to change which container or conda environment a step of the pipeline uses for a particular tool. By default, nf-core pipelines use containers and software from the biocontainers or bioconda projects. However, in some cases the pipeline-specified version may be out of date.

To use a different container from the default container or conda environment specified in a pipeline, please see the updating tool versions section of the nf-core website.

nf-core/configs

In most cases, you will only need to create a custom config as a one-off, but if you and others within your organisation are likely to be running nf-core pipelines regularly and need similar settings, then you can request that your custom config file is uploaded to the nf-core/configs git repository. Before you do this, please test that the config file works with your pipeline of choice using the -c parameter. You can then create a pull request to the nf-core/configs repository with the addition of your config file, associated documentation file (see examples in nf-core/configs/docs), and amending nfcore_custom.config to include your custom profile.

See the main Nextflow documentation for more information about creating your own configuration files.

If you have any questions or issues please send us a message on Slack on the #configs channel.

Azure Resource Requests

To use the azurebatch profile, specify -profile azurebatch. We recommend providing a compute environment name and queue name as environment variables.

See the list of Azure instance types and their properties.

The pipeline will auto-detect the following environment variables to determine the appropriate compute environment and queue name:

  • AZURE_COMPUTE_ENV - Batch compute environment name.
  • AZURE_QUEUE - Batch queue name.
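
For example (the values below are illustrative):

export AZURE_COMPUTE_ENV='my-compute-env'
export AZURE_QUEUE='my-batch-queue'
nextflow run nf-core/asenext --input samplesheet.csv --outdir results -profile azurebatch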

Running in the background

Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.

The Nextflow -bg flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
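
For example (the log file name is illustrative):

nextflow run nf-core/asenext --input samplesheet.csv --outdir results -profile docker -bg > asenext.log 2>&1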

Alternatively, you can use screen / tmux or a similar tool to create a detached session which you can log back into at a later time. Some HPC setups also allow you to run Nextflow within a cluster job submitted to your job scheduler (from where it submits more jobs).

Nextflow memory requirements

In some cases, the Nextflow Java virtual machines can start to request a large amount of memory. We recommend adding the following line to your environment to limit this (typically in ~/.bashrc or ~/.bash_profile):

export NXF_OPTS='-Xms1g -Xmx4g'