NAMD 2.13

Basic Information

Installation

This entry covers the entire process for installing and configuring NAMD 2.13 on a cluster.

Usage

This section describes how to submit jobs with the SLURM resource manager.

  1. Run NAMD from a SLURM batch script. This example uses a test case from the official NAMD tutorial page (namd-tutorial-files):

    sbatch example.sh
    
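Once submitted, the job can be monitored with standard SLURM commands. A short sketch (the job ID shown by sbatch will vary):

```shell
sbatch example.sh     # prints "Submitted batch job <jobid>"
squeue -u $USER       # job state: PD = pending, R = running
ls *.out *.err        # output/error files named <job-name>-<jobid>, per the %x-%j pattern
```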

The following script (example.sh) is an example of running NAMD with SLURM:

#!/bin/bash
#SBATCH --job-name=namd                         # Job name
#SBATCH --mail-type=ALL                         # Mail notification
#SBATCH --mail-user=<user>@<domain>             # User Email
#SBATCH --error=%x-%j.err                       # Stderr (%j expands to jobId)
#SBATCH --output=%x-%j.out                      # Stdout (%j expands to jobId)
#SBATCH --ntasks=2                              # Number of tasks (processes)
#SBATCH --nodes=1                               # Number of nodes
#SBATCH --time=3:00:00                          # Walltime
#SBATCH --partition=longjobs                    # Partition

##### MODULES #####

module load namd/2.13-gcc_CUDA

# Use all allocated tasks (multicore/CUDA builds accept +p<N>)
namd2 +p$SLURM_NTASKS ubq_ws_eq.conf
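The ubq_ws_eq.conf file referenced above comes from the NAMD tutorial files. As a sketch of what such an equilibration configuration contains (file names and parameter values follow the tutorial's ubiquitin example and should be treated as assumptions, not the exact tutorial file):

```tcl
# Abridged NAMD equilibration config (sketch based on the NAMD tutorial)
structure          ubq_ws.psf        ;# PSF topology file
coordinates        ubq_ws.pdb        ;# initial coordinates
set temperature    310               ;# target temperature (K)
temperature        $temperature

paraTypeCharmm     on                ;# CHARMM parameter format
parameters         par_all27_prot_lipid.inp

timestep           2.0               ;# fs per step
rigidBonds         all               ;# required for a 2 fs timestep
cutoff             12.0
switching          on
switchdist         10.0
pairlistdist       14.0

langevin           on                ;# Langevin thermostat
langevinTemp       $temperature

outputName         ubq_ws_eq         ;# prefix for output files
minimize           100               ;# energy minimization steps
run                2500              ;# 2500 steps = 5 ps of dynamics
```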

Authors