# nf-core/pixelator

Pipeline to generate Molecular Pixelation data with Pixelator (Pixelgen Technologies AB).

> This documentation describes version 1.0.3. The latest stable release is 1.3.1.
## Introduction
This document describes the output produced by the pipeline. The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.
## Pipeline overview

The pipeline is built using Nextflow and processes data using multiple subcommands of `pixelator`.
The pipeline consists of the following steps:
- Preprocessing
- Quality control
- Demultiplexing
- Duplicate removal and error correction
- Compute connected components
- Filtering, annotation, cell-calling
- Downstream analysis
- Generate reports
### Preprocessing

**Output files**

- `pixelator`
  - `amplicon`
    - `<sample-id>.merged.fastq.gz`: Full-length amplicon reads combining R1 and R2.
    - `<sample-id>.report.json`: Q30 metrics of the amplicon.
    - `<sample-id>.meta.json`: Command invocation metadata.
  - `logs`
    - `<sample-id>.pixelator-amplicon.log`: pixelator log output.
The preprocessing step uses `pixelator single-cell amplicon` to create full-length amplicon sequences from both single-end and paired-end data.
It returns a single FASTQ file per sample containing fixed-length amplicons.
This step also calculates Q30 quality scores for different regions of the library.
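For reference, the Q30 score is conventionally the fraction of bases whose Phred quality is at least 30. A minimal sketch of that computation, assuming standard Phred+33 FASTQ encoding (this is not pixelator's actual implementation):

```python
# Illustrative only: Q30 = fraction of bases with Phred quality >= 30.
# Assumes Phred+33 ASCII encoding, as used in modern FASTQ files.

def q30_fraction(quality_string: str) -> float:
    """Return the fraction of bases with Phred quality >= 30."""
    if not quality_string:
        return 0.0
    scores = [ord(c) - 33 for c in quality_string]  # Phred+33 decoding
    return sum(s >= 30 for s in scores) / len(scores)

# Example: 'I' encodes Q40, '#' encodes Q2
print(q30_fraction("IIII##"))  # 4 of 6 bases are >= Q30
```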
### Quality control

**Output files**

- `pixelator`
  - `preqc`
    - `<sample-id>.processed.fastq.gz`: Processed reads.
    - `<sample-id>.failed.fastq.gz`: Discarded reads.
    - `<sample-id>.report.json`: fastp JSON report.
    - `<sample-id>.meta.json`: Command invocation metadata.
  - `adapterqc`
    - `<sample-id>.processed.fastq.gz`: Processed reads.
    - `<sample-id>.failed.fastq.gz`: Discarded reads.
    - `<sample-id>.report.json`: Cutadapt JSON report.
    - `<sample-id>.meta.json`: Command invocation metadata.
  - `logs`
    - `<sample-id>.pixelator-preqc.log`: pixelator log output.
Quality control is performed with `pixelator single-cell preqc` and `pixelator single-cell adapterqc`.

The preqc stage performs QC and quality filtering of the raw sequencing data.
It generates a QC report in HTML and JSON formats, and saves both the processed reads and the reads that were discarded (e.g. reads that were too short, contained too many Ns, or had too low quality). Internally, preqc uses fastp and adapterqc uses Cutadapt.

The adapterqc stage checks for the presence and correctness of the pixel binding sequences. It also generates a QC report in JSON format, and saves both the processed reads and the discarded reads (i.e. reads without a match for both pixel binding sequences).
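To convey the kind of filtering described above, here is a hedged sketch of read filtering on length, N content and mean quality. The thresholds are illustrative assumptions, not fastp's or pixelator's actual defaults:

```python
# A minimal sketch of fastp-style read filtering as used by preqc.
# All thresholds below are illustrative assumptions.

def passes_qc(seq: str, qual: str, min_len: int = 50,
              max_n: int = 5, min_mean_q: float = 20.0) -> bool:
    """Keep a read only if it is long enough, low in Ns and high in quality."""
    if len(seq) < min_len:
        return False                      # too short
    if seq.upper().count("N") > max_n:
        return False                      # too many ambiguous bases
    mean_q = sum(ord(c) - 33 for c in qual) / len(qual)  # Phred+33
    return mean_q >= min_mean_q           # otherwise: quality too low

print(passes_qc("A" * 60, "I" * 60))  # True: long, no Ns, mean Q40
print(passes_qc("N" * 60, "I" * 60))  # False: too many Ns
```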
### Demultiplexing

**Output files**

- `pixelator`
  - `demux`
    - `<sample-id>.processed-<antibody_name>.fastq.gz`: Reads demultiplexed per antibody.
    - `<sample-id>.failed.fastq.gz`: Discarded reads that do not match an antibody barcode.
    - `<sample-id>.report.json`: Cutadapt JSON report.
    - `<sample-id>.meta.json`: Command invocation metadata.
  - `logs`
    - `<sample-id>.pixelator-demultiplex.log`: pixelator log output.
The `pixelator single-cell demux` command assigns a marker (barcode) to each read. It also generates a QC report in JSON format. It saves the processed reads (one file per antibody) as well as the discarded reads that match none of the given barcodes/antibodies.
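The barcode assignment can be pictured as matching each read against a panel of antibody barcodes, tolerating a small number of mismatches. The panel, matching region and distance threshold below are purely illustrative assumptions; in the pipeline this step is performed by Cutadapt:

```python
# Sketch of barcode demultiplexing with a Hamming-distance tolerance.

def hamming(a: str, b: str) -> int:
    """Number of mismatching positions (compares up to the shorter length)."""
    return sum(x != y for x, y in zip(a, b))

def assign_antibody(read: str, barcodes: dict, start: int, max_dist: int = 1):
    """Return the antibody whose barcode matches the read, or None."""
    for name, bc in barcodes.items():
        if hamming(read[start:start + len(bc)], bc) <= max_dist:
            return name
    return None

barcodes = {"CD3": "ACGTACGT", "CD19": "TTGGCCAA"}  # hypothetical panel
print(assign_antibody("ACGTACGTAAAA", barcodes, start=0))  # CD3 (exact match)
print(assign_antibody("ACGTACGAAAAA", barcodes, start=0))  # CD3 (1 mismatch)
print(assign_antibody("GGGGGGGGAAAA", barcodes, start=0))  # None (no match)
```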
### Duplicate removal and error correction

**Output files**

- `pixelator`
  - `collapse`
    - `<sample-id>.collapsed.csv.gz`: Edge list of the graph.
    - `<sample-id>.report.json`: Statistics for the collapse step.
    - `<sample-id>.meta.json`: Command invocation metadata.
  - `logs`
    - `<sample-id>.pixelator-collapse.log`: pixelator log output.
This step uses the `pixelator single-cell collapse` command.
The collapse command removes duplicate reads and performs error correction.
It uses the unique pixel identifier and unique molecular identifier sequences to check for uniqueness, collapse duplicates, and compute a read count. The command also generates a QC report in JSON format.
Mismatches are tolerated when collapsing reads if `--algorithm` is set to `adjacency` (the default option).
The output of this command is an edge list in CSV format.
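The idea behind adjacency-style collapsing can be sketched as merging sequences that lie within one mismatch of a more abundant sequence, treating them as sequencing errors. This mirrors the directional-adjacency idea popularised by UMI-tools; pixelator's exact algorithm and thresholds may differ:

```python
# Sketch of adjacency-style collapsing: sequences within one mismatch of a
# more abundant sequence are merged into it and their counts combined.
from collections import Counter

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def collapse_umis(umis: list[str]) -> Counter:
    counts = Counter(umis)
    collapsed = Counter()
    # Visit sequences from most to least abundant; merge near-duplicates
    # upward into an already-kept, more abundant sequence.
    for u, n in counts.most_common():
        parent = next((k for k in collapsed if hamming(u, k) <= 1), None)
        if parent is not None:
            collapsed[parent] += n   # error-corrected into the parent
        else:
            collapsed[u] += n        # kept as a new true sequence
    return collapsed

reads = ["AAAA"] * 5 + ["AAAT"] + ["CCCC"] * 3
print(collapse_umis(reads))  # AAAT merges into AAAA; CCCC stays separate
```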
### Compute connected components

**Output files**

- `pixelator`
  - `graph`
    - `<sample-id>.edgelist.csv.gz`: Edge list dataframe (CSV) after recovering technical multiplets.
    - `<sample-id>.raw_edgelist.csv.gz`: Raw edge list dataframe (CSV) before recovering technical multiplets.
    - `<sample-id>.components_recovered.csv`: List of newly recovered components (when using `--multiplet_recovery`).
    - `<sample-id>.report.json`: Metrics with useful information about the clustering.
    - `<sample-id>.meta.json`: Command invocation metadata.
  - `logs`
    - `<sample-id>.pixelator-cluster.log`: pixelator log output.
This step uses the `pixelator single-cell graph` command.
The input is the edge list dataframe (CSV) generated in the collapse step. After filtering it by count (`--graph_min_count`), the connected components of the graph are computed and added to the edge list in a column called `component`.
The graph command can optionally recover technical multiplets by splitting them into smaller components, using community detection to find and remove problematic edges (see `--multiplet_recovery`). The mapping between the original and the newly recovered components is stored in `components_recovered.csv`.
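Conceptually, the component computation labels every node reachable from another through the edge list with the same component id. A self-contained union-find sketch (node names and data are illustrative; the real edge list carries more columns):

```python
# Sketch of connected-component labelling over an edge list via union-find.

def connected_components(edges: list[tuple[str, str]]) -> dict[str, int]:
    """Map each node to a component id; connected nodes share an id."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)          # union the two sets

    roots = {}
    return {n: roots.setdefault(find(n), len(roots)) for n in parent}

edges = [("upia1", "upib1"), ("upia1", "upib2"), ("upia2", "upib3")]
comp = connected_components(edges)
print(comp["upib1"] == comp["upib2"])  # True: linked through upia1
print(comp["upib1"] == comp["upib3"])  # False: separate component
```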
### Cell-calling, filtering, and annotation

**Output files**

- `pixelator`
  - `annotate`
    - `<sample-id>.dataset.pxl`: PXL file with the filtered and annotated dataset.
    - `<sample-id>.meta.json`: Command invocation metadata.
    - `<sample-id>.rank_vs_size.png`: Component rank vs. size plot.
    - `<sample-id>.raw_components_metrics.csv`: Metrics of the raw components.
    - `<sample-id>.report.json`: Statistics for the annotate step.
    - `<sample-id>.umap.png`: UMAP plot.
  - `logs`
    - `<sample-id>.pixelator-annotate.log`: pixelator log output.
This step uses the `pixelator single-cell annotate` command.
The annotate command takes as input the edge list (CSV) file generated by the graph command. It parses and filters the edge list to find putative cells, and generates a PXL file containing the edge list, an [AnnData object](https://anndata.readthedocs.io/en/latest/), and some useful metadata.
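At its simplest, finding putative cells can be thought of as keeping components with enough molecules and discarding small debris. This toy sketch keeps components above a minimum size; pixelator's actual cell-calling is more sophisticated (compare the rank-vs-size distribution in `<sample-id>.rank_vs_size.png`):

```python
# Toy cell-calling sketch: keep components whose molecule count exceeds a
# minimum size. The threshold and data are illustrative assumptions.
from collections import Counter

def call_cells(component_per_molecule: list[int], min_size: int) -> set[int]:
    """Return the ids of components large enough to be putative cells."""
    sizes = Counter(component_per_molecule)
    return {comp for comp, size in sizes.items() if size >= min_size}

# component id for each molecule: two large components and one tiny one
molecules = [0] * 500 + [1] * 450 + [2] * 3
print(call_cells(molecules, min_size=100))  # component 2 is too small
```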
### Downstream analysis

**Output files**

- `pixelator`
  - `analysis`
    - `<sample-id>.dataset.pxl`: PXL file with the analysis results added to it.
    - `<sample-id>.meta.json`: Command invocation metadata.
    - `<sample-id>.report.json`: Statistics for the analysis step.
  - `logs`
    - `<sample-id>.pixelator-analysis.log`: pixelator log output.
This step uses the `pixelator single-cell analysis` command.
Downstream analysis is performed on the PXL file generated by the previous stage, and the results are added to the PXL file.
Currently, the following analyses are performed:

- polarization scores (enable with `--compute_polarization`)
- colocalization scores (enable with `--compute_colocalization`)

Each analysis can be disabled with `--compute_polarization false` or `--compute_colocalization false`, respectively.
This entire step can also be skipped using the `--skip_analysis` option.
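To give an intuition only: a co-localization score expresses how strongly two markers' abundances vary together across parts of a cell. Below is a plain Pearson correlation over hypothetical per-region counts; the scores pixelator actually computes are defined on the component graph and differ in detail:

```python
# Hedged illustration of the co-localization idea via Pearson correlation.
# Marker names and counts are hypothetical.
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cd3 = [10, 12, 9, 11, 30]    # hypothetical counts per region of a cell
cd19 = [11, 13, 10, 12, 28]  # tracks CD3 closely

print(pearson(cd3, cd19) > 0.9)  # True: the two markers co-vary strongly
```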
### Generate reports

**Output files**

- `pixelator`
  - `report`
    - `<sample-id>_report.html`: pixelator summary report.
  - `logs`
    - `<sample-id>.pixelator-report.log`: pixelator log output.
This step uses the `pixelator single-cell report` command.
It collects the metrics and outputs generated by the previous stages and generates an HTML report for each sample.
This step can be skipped using the `--skip_report` option.
More information on the report can be found in the pixelator documentation.
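Conceptually, report generation gathers the per-stage `report.json` metrics for a sample into one summary. A minimal sketch of that gathering step; paths, layout and keys are illustrative assumptions, and the real report is built from pixelator's own templates:

```python
# Sketch: collect every <sample-id>.report.json under the results directory
# into one dictionary keyed by stage name (the parent directory).
import json
from pathlib import Path

def collect_metrics(results_dir: str, sample_id: str) -> dict:
    summary = {}
    for report in Path(results_dir).glob(f"**/{sample_id}.report.json"):
        stage = report.parent.name            # e.g. "preqc", "demux"
        summary[stage] = json.loads(report.read_text())
    return summary
```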
### Pipeline information

**Output files**

- `pipeline_info/`
  - Reports generated by Nextflow: `execution_report.html`, `execution_timeline.html`, `execution_trace.txt` and `pipeline_dag.dot`/`pipeline_dag.svg`.
  - Reports generated by the pipeline: `pipeline_report.html`, `pipeline_report.txt` and `software_versions.yml`. The `pipeline_report*` files will only be present if the `--email`/`--email_on_fail` parameters are used when running the pipeline.
  - Reformatted samplesheet files used as input to the pipeline: `samplesheet.valid.csv`.
  - Metadata file with software versions, environment information and pipeline configuration for debugging: `metadata.json`.
  - Parameters used by the pipeline run: `params.json`.
Nextflow provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.