RAxML - 8.2.12¶
Basic information¶
- Deploy date: 2 August 2018
- Official Website: https://sco.h-its.org/exelixis/web/software/raxml/
- License: GNU GENERAL PUBLIC LICENSE - Version 3 (GPL 3.0), 29 June 2007
- Installed on: Apolo II, Cronos
- Available versions: Hybrid (MPI and Threads), MPI
Installation¶
This entry covers the entire process performed for the installation and configuration of RAxML on a cluster under the conditions described below.
Usage¶
This subsection describes how to use RAxML on a cluster and the elements needed to achieve good performance.
Before launching RAxML, you should read the following documentation:
- The RAxML v8.2.X Manual (mandatory), in particular the section "When to use which Version?"
- Hybrid Parallelization of the MrBayes & RAxML Phylogenetics Codes (Hybrid MPI/Pthreads)
Note
It is really important to understand how the HYBRID version works, since it is the only version available for HPC scenarios. Understanding its behavior is also the key to using the computational resources properly and achieving better performance.
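A quick way to verify which flavors are installed is to load the module and list the RAxML binaries it puts on the PATH. This is only a sketch: the module name and the HYBRID binary come from the example below, while the -PTHREADS and -MPI names follow RAxML's usual naming scheme and may differ on your installation.

```bash
# Load the same RAxML module used in the job script below
module load raxml/8.2.12_intel-17.0.1

# Confirm the HYBRID binary used in the example is on the PATH
which raxmlHPC-HYBRID-AVX2

# List all RAxML commands provided by the module; typical flavors are
# -PTHREADS (shared memory), -MPI (distributed memory) and -HYBRID (both)
compgen -c raxmlHPC | sort -u
```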
In the following example we run 100 bootstrap replicates (MPI parallelization) with an independent tree search (PThreads, shared memory) for each bootstrap replicate, using SLURM (the resource manager) to properly spawn the processes across the nodes.
```bash
#!/bin/bash

#SBATCH --partition=longjobs
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=16
#SBATCH --time=48:00:00
#SBATCH --job-name=RAxML_test
#SBATCH -o result_%N_%j.out
#SBATCH -e result_%N_%j.err

# Default variables
export SBATCH_EXPORT=NONE
export OMP_NUM_THREADS=1

# Load RAxML module file
module load raxml/8.2.12_intel-17.0.1

# Launch RAxML: MPI ranks spawned with srun (pmi2), PThreads set via the '-T'
# argument and the SLURM_CPUS_PER_TASK environment variable.
srun --mpi=pmi2 raxmlHPC-HYBRID-AVX2 -s funiji.fasta -T $SLURM_CPUS_PER_TASK \
     -X -f a -n out_1 -m GTRGAMMA -x 6669 -p 2539 -N 100
```
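Assuming the script above is saved as raxml_test.slurm (the filename is only illustrative), it can be submitted and monitored with the standard SLURM commands:

```bash
# Submit the job script to SLURM
sbatch raxml_test.slurm

# Check the job state in the queue (PENDING, RUNNING, ...)
squeue -u $USER

# When the run finishes, the '-f a' analysis writes its output to files named
# after the '-n out_1' suffix, e.g. RAxML_bipartitions.out_1 and RAxML_info.out_1
ls RAxML_*.out_1
```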
Note
Node quick specs (Apolo II): 32 Cores, 64 GB RAM
- --ntasks-per-node → MPI processes per node
- --cpus-per-task → PThreads per MPI process
- --nodes → number of nodes
In this case, we use 2 MPI processes per node, each with 16 PThreads, so all 32 cores of a node are busy; with 3 nodes, the job uses 96 cores in total (see the sketch below).
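The same accounting can be sketched as a quick sanity check before changing any of the three SBATCH values; the numbers below simply restate the example configuration.

```bash
# Resource accounting for the example job (values taken from the script above)
NODES=3            # --nodes
TASKS_PER_NODE=2   # --ntasks-per-node (MPI processes per node)
CPUS_PER_TASK=16   # --cpus-per-task  (PThreads per MPI process)

echo "MPI processes : $(( NODES * TASKS_PER_NODE ))"                   # 6
echo "Cores per node: $(( TASKS_PER_NODE * CPUS_PER_TASK ))"           # 32 (a full Apolo II node)
echo "Total cores   : $(( NODES * TASKS_PER_NODE * CPUS_PER_TASK ))"   # 96
```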
Authors¶
- Mateo Gómez-Zuluaga <mgomezz@eafit.edu.co>