Getting Started

This page covers the basics of using COLT, from installation to running simulations.

Compilation

Obtain the COLT source code by cloning the repository:

$ git clone git@bitbucket.org:aaron_smith/colt.git

If you do not have a Bitbucket account set up with SSH keys, you can still clone via HTTPS:

$ git clone https://aaron_smith@bitbucket.org/aaron_smith/colt.git

This creates a new folder named colt containing the source code (source), documentation (docs), and compilation files (e.g. Makefile). After installing the required libraries (see Dependencies) and setting up the build environment (see Makefile), the executable may be compiled with:

$ cd colt
$ make -j

Dependencies

COLT has the following required dependencies:

  • MPI: Message Passing Interface library for standard code parallelization

  • HDF5: Hierarchical Data Format library for reading and writing files (with C++ wrappers)

  • yaml-cpp: Library for parsing YAML config files (github repository)

There is also an additional dependency for Voronoi geometry:

  • CGAL: Computational Geometry Algorithms Library for constructing the Voronoi tessellation
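
If you are unsure whether these libraries are already available, a quick check (a sketch only; the exact wrapper and package names may differ on your system) is to query the compiler wrappers and pkg-config:

$ mpicxx --version                  # MPI C++ compiler wrapper
$ h5c++ -show                       # HDF5 C++ wrapper prints its underlying compile line
$ pkg-config --modversion yaml-cpp  # Reports the yaml-cpp version if its .pc file is installed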

Instructions for macOS

On macOS, all of these dependencies can be conveniently installed using the Homebrew package manager:

$ brew install openmpi libomp hdf5 yaml-cpp cgal

Adjustments for Apple Silicon: The default compiler on Apple Silicon (Apple clang) does not support OpenMP. As a workaround, replace openmpi with mpich and then rebuild it from source with the following command:

$ brew reinstall --build-from-source mpich

This may first require resolving any package conflicts by unlinking or uninstalling as suggested by Homebrew:

$ brew unlink conflicting-package
$ brew uninstall conflicting-package
$ brew install mpich

After completing these steps, relaunch the terminal and proceed with the COLT compilation via make -j. If this fails with an error about libomp, you may need to either manually link the OpenMP library or set the LD_LIBRARY_PATH environment variable:

$ ln -s /opt/homebrew/opt/libomp/lib/libomp.dylib /opt/homebrew/lib/libomp.dylib  # Link the library
$ export LD_LIBRARY_PATH=/opt/homebrew/opt/libomp/lib:$LD_LIBRARY_PATH            # Or set the search path

Please send a message to the COLT developers if you encounter any other issues.

Instructions for Linux Clusters

On shared Linux clusters and supercomputers some dependencies may be available system-wide. Use module avail or module spider to find the correct module names. It is convenient to collect the module commands in a file such as ~/.colt and load them with . ~/.colt before compiling or running:

#!/bin/bash
module purge; module load openmpi hdf5 yaml-cpp cgal

In many cases you may still need to install a dependency locally. For example, to install yaml-cpp in your home directory:

$ module load cmake  # After loading the appropriate gcc/mpi modules
$ git clone https://github.com/jbeder/yaml-cpp.git
$ cd yaml-cpp; mkdir build; cd build  # Build in a separate directory
$ cmake -DYAML_BUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=${HOME} ..
$ make -j; make install  # Installs to ${HOME}/lib[64] and ${HOME}/include
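
If the libraries are installed under a non-standard prefix such as ${HOME}, the compiler and linker need to know where to find them. A minimal sketch, assuming a bash shell and that cmake installed to lib64 (adjust to lib if that is what was created), is to export the standard search paths before compiling and running; alternatively, set the equivalent paths in the Makefile:

$ export CPATH=${HOME}/include:${CPATH}                    # Header search path
$ export LIBRARY_PATH=${HOME}/lib64:${LIBRARY_PATH}        # Link-time library search path
$ export LD_LIBRARY_PATH=${HOME}/lib64:${LD_LIBRARY_PATH}  # Run-time library search path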

Makefile

The provided Makefile allows a flexible build based on the specific computing environment. Currently, COLT is designed to have minimal compile-time options, summarized in the provided template-defines.yaml file:

MACHINE: gcc       # Makefile build system
GEOMETRY: voronoi  # Geometry type: slab, spherical, cartesian, octree, voronoi
HAVE_CGAL: true    # Include the CGAL library (requires voronoi geometry)

These options can either be provided in the Makefile itself or in a YAML (or bash) file with a default location of defines.yaml (see the note below). The main option is MACHINE, with known systems: homebrew, gcc, pleiades, comet, odyssey, stampede, and supermuc. Other systems may require slight modifications based on the existing examples in the Makefile. The other main option is GEOMETRY, with known types: slab, spherical, cartesian, octree, and voronoi. More information is given in the initial conditions specifications.
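
For example, a minimal defines.yaml for an octree build without CGAL (the values shown are illustrative and should be adapted to your system) could be created and used as follows:

$ cat > defines.yaml << EOF
MACHINE: gcc       # Makefile build system
GEOMETRY: octree   # Geometry type
HAVE_CGAL: false   # CGAL is only needed for voronoi geometry
EOF
$ make -j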

Note

It is usually most convenient to perform a default in-place build, which generates the compiled object files and executable in the source directory. However, for the flexibility of out-of-place builds you may customize the DEFS, BUILD, and EXE paths, e.g. to build multiple geometries from the same source distribution.

$ make -j -C /source/path DEFS={path}/defines.yaml BUILD={path}/build EXE={path}/colt
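
For example, assuming the commands are run from the source directory and a defines file has been prepared for each geometry, two builds might live side by side (the paths are illustrative):

$ make -j DEFS=voronoi/defines.yaml BUILD=voronoi/build EXE=voronoi/colt
$ make -j DEFS=octree/defines.yaml BUILD=octree/build EXE=octree/colt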

Running a Simulation

After compilation, the executable colt can be run by specifying a YAML configuration file, usually called config.yaml. If you are streamlining analysis over several snapshots with the same config file, you can also specify the snapshot number. It is also important to set the number of OpenMP threads before execution. A typical command to run COLT with 4 MPI tasks and 16 threads per task looks like the following:

$ export OMP_NUM_THREADS=16
$ mpirun -n 4 ./colt config.yaml [snapshot]
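
For example, to process snapshot 42 (an arbitrary number here) with the same configuration file:

$ mpirun -n 4 ./colt config.yaml 42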

Note

Please consult the system documentation about running hybrid MPI + OpenMP jobs, as some environments may require special commands, e.g. a Linux cluster might need --bind-to none to release the core binding. Supercomputers typically supply job scripts and specific advice for hybrid execution in their user guides.
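
As an illustration only, a hybrid MPI + OpenMP batch script on a SLURM-managed cluster might look like the following sketch (the directive values and the use of ~/.colt are assumptions to adapt to your site):

#!/bin/bash
#SBATCH --job-name=colt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4   # MPI tasks
#SBATCH --cpus-per-task=16    # OpenMP threads per task
#SBATCH --time=01:00:00

. ~/.colt  # Load the same modules used at compile time
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
mpirun -n ${SLURM_NTASKS} --bind-to none ./colt config.yaml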