Sheffield Hallam University BMRC Cluster Configuration

This document provides guidelines for using nf-core pipelines on Sheffield Hallam University’s BMRC High-Performance Computing (HPC) cluster. The custom configuration file for this cluster sets resource limits and scheduler options that align with the BMRC HPC environment, so that nf-core workflows run efficiently without further tuning.


Table of Contents

  1. Introduction
  2. Requirements
  3. Configuration Details
  4. Usage
  5. Troubleshooting
  6. Support and Contact

Introduction

This configuration file is specifically designed for running nf-core workflows on the BMRC HPC cluster at Sheffield Hallam University. The configuration integrates optimal resource parameters and scheduling policies to ensure efficient job execution on the cluster, aligning with internal HPC policies and specifications.

The cluster configuration covers the scheduler settings, default resource limits, container support, and cleanup behaviour described under Configuration Details below.

Requirements

To use this configuration, you must have:

  • Access to BMRC HPC: Ensure your user account is enabled for HPC access at Sheffield Hallam University. The GlobalProtect VPN is required for remote access. For setup instructions, refer to SHU VPN Guide.
  • Nextflow: Version 22.10.6 or later is recommended for optimal compatibility.

For a detailed guide to setting up Nextflow and running nf-core pipelines on the BMRC cluster, refer to Running nf-core Pipelines on SHU BMRC Cluster.
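
As a quick sanity check before launching anything, confirm that Nextflow is available on a login node. A minimal sketch, assuming Nextflow is provided as an environment module (the module name is an assumption; check with module avail):

# load Nextflow (module name is an assumption)
module load nextflow

# confirm the version meets the recommended minimum (22.10.6 or later)
nextflow -version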

Configuration Details

The configuration has been tailored for the BMRC HPC, providing preset values for CPUs, memory, and scheduling to align with HPC policies.

Core Configuration

  • Cluster Scheduler: slurm
  • Max Retries: 2 (automatically reattempts failed jobs)
  • Queue Size: 50 jobs
  • Submit Rate Limit: 1 job per second
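
With submissions capped at 50 concurrent jobs, standard SLURM commands can be used to keep an eye on a running workflow, for example:

# list your queued and running jobs
squeue -u $USER

# summarise your jobs since midnight
sacct -X -u $USER --starttime today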

Resource Allocation

Each nf-core workflow run is automatically capped at the following resource maxima:

Resource    Setting
CPUs        64
Memory      1007 GB
Time        999 hours
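
These are upper bounds rather than per-task requests; individual processes can be tuned downwards with a custom configuration file passed via -c. A minimal sketch (the process name and values are illustrative, not taken from any specific pipeline):

// custom.config - illustrative per-process override
process {
    withName: 'FASTQC' {
        cpus = 4
        memory = 8.GB
    }
}

This would then be supplied alongside the cluster profile:

nextflow run nf-core/<pipeline_name> -profile shu_bmrc -c custom.config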

Container Support

The configuration supports Apptainer for containerised workflows, with automatic mounting enabled, allowing seamless access to necessary filesystems within containers.
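
To avoid re-pulling container images on every run, you may wish to set a persistent cache directory before launching (the path is an assumption; substitute your own project or scratch space):

# cache pulled images in a persistent location (path is an assumption)
export NXF_APPTAINER_CACHEDIR=/path/to/apptainer/cache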

Cleanup

Intermediate files in the work directory are automatically deleted after a successful run to free up storage. Note that this means a completed run cannot be restarted with -resume, since the cached task results are removed.
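
If you expect to need -resume, this behaviour can be overridden in a custom configuration file supplied with -c, for example:

// custom.config - keep the work directory so runs can be resumed
cleanup = false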

Usage

To launch an nf-core pipeline on the BMRC cluster using the shu_bmrc profile:

nextflow run nf-core/<pipeline_name> -profile shu_bmrc
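
For example, a test run of the nf-core/rnaseq pipeline might look like this (the pipeline, version, and output directory are illustrative):

nextflow run nf-core/rnaseq -r 3.14.0 -profile test,shu_bmrc --outdir results

Since the head Nextflow process runs for the duration of the workflow, consider launching it from inside a terminal multiplexer such as screen or tmux so that it survives SSH disconnects.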

Troubleshooting

If you encounter issues, ensure you have:

  • Followed the user guide on the BMRC HPC documentation site (see below).
  • Specified the correct profile (shu_bmrc) for the cluster.
  • Checked for sufficient permissions on the BMRC HPC cluster.
  • Verified that Apptainer is enabled and accessible within your environment (a quick check is shown below).
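
A quick way to verify the last two points from a login node (assuming Apptainer is on your PATH or loadable as a module):

# confirm Apptainer is available
apptainer --version

# confirm the SLURM scheduler is reachable
sinfo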

Support and Contact

For support or questions, contact Dr Lewis A Quayle (l.quayle@shu.ac.uk).

Config file

See config file on GitHub

shu_bmrc.config
/*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nextflow config file for Sheffield Hallam University BMRC Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Author: Dr Lewis A Quayle
Mail: l.quayle@shu.ac.uk
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
 
 
// params scope - displayed in header summary of each run
 
params {
 
    config_profile_description = 'Sheffield Hallam University - BMRC HPC'
    config_profile_contact = 'Dr Lewis A Quayle (l.quayle@shu.ac.uk)'
    config_profile_url = 'https://bmrc-hpc-documentation.readthedocs.io/en/latest/'
 
}
 
 
// process scope - hpc configuration and auto-retry
 
process {
 
    resourceLimits = [
        cpus: 64,
        memory: 1007.GB,
        time: 999.h
    ]
    executor = 'slurm'
    maxRetries = 2
 
}
 
 
// executor scope - scheduler settings
 
executor {
 
    queueSize = 50
    submitRateLimit = '1 sec'
 
}
 
 
// container scope
 
apptainer {
 
    enabled = true
    autoMounts = true
 
}
 
 
// automatically delete intermediate work directory on successful completion of a run
 
cleanup = true