Installation on specific platforms

The following describes installation details for various systems and platforms that SmartSim may be used on.

Customizing environment variables

Various environment variables can be used to control the compilers and dependencies for SmartSim. These are particularly important to set before the smart build step to ensure that the Orchestrator and machine-learning backends are compiled with the desired compilation environment.

Note

The compilation environment used to build SmartSim does not have to match the one used to build the SmartRedis library or the simulation application that SmartSim will launch. To ensure this works as intended, however, be sure to set the correct environment for the simulation using its RunSettings.

All of the following environment variables must be exported to ensure that they are used throughout the entire build process. Additionally, at runtime, the environment in which the Orchestrator is launched must make the cuDNN and CUDA Toolkit libraries findable by the dynamic loader (e.g. available on the LD_LIBRARY_PATH environment variable).
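As a minimal sketch (the paths below are placeholders for your own CUDA Toolkit and cuDNN installations), the variables used later in this guide can be exported before building:

export CUDNN_LIBRARY=/path/to/cudnn/lib64
export CUDNN_INCLUDE_DIR=/path/to/cudnn/include
export LD_LIBRARY_PATH=/path/to/cuda/lib64:$CUDNN_LIBRARY:$LD_LIBRARY_PATH
smart build --device gpu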

Compiler environment

Unlike SmartRedis, we strongly encourage users to only use the GNU compiler chain to build the SmartSim dependencies. Notably, RedisAI has some coding conventions that prevent the use of the Intel compiler chain. If a specific compiler should be used (e.g. the Cray Programming Environment wrappers), the following environment variables control the C and C++ compilers (see the sketch after this list):

  • CC: Path to the C compiler

  • CXX: Path to the C++ compiler
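For example, to build with the Cray Programming Environment compiler wrappers (a sketch only; wrapper names can vary by system):

export CC=cc
export CXX=CC
smart build --device gpu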

GPU dependencies (non-root)

The Nvidia installation instructions for CUDA Toolkit and cuDNN tend to be tailored for users with root access. For those on HPC platforms where root access is rare, manually downloading and installing these dependencies as a user is possible.

# download and run the CUDA Toolkit installer in user space (toolkit only, no driver)
wget https://developer.download.nvidia.com/compute/cuda/11.4.4/local_installers/cuda_11.4.4_470.82.01_linux.run
chmod +x cuda_11.4.4_470.82.01_linux.run
./cuda_11.4.4_470.82.01_linux.run --toolkit  --silent --toolkitpath=/path/to/install/location/

For cuDNN, follow Nvidia’s instructions, and copy the cuDNN libraries to the lib64 directory at the CUDA Toolkit location specified above.
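For example (a sketch only; the archive name and layout below are illustrative and depend on the cuDNN release that was downloaded):

tar -xzf cudnn-11.4-linux-x64-v8.2.4.15.tgz
cp cuda/include/cudnn*.h /path/to/install/location/include/
cp cuda/lib64/libcudnn* /path/to/install/location/lib64/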

HPE Cray supercomputers

On certain HPE Cray machines, the SmartSim dependencies have been installed system-wide, though the specific paths and names might vary (please contact the team if these instructions do not work).

# make the site-provided modulefiles visible and load the GPU dependencies
module use -a /lus/scratch/smartsim/local/modulefiles
module load cudatoolkit/11.4 cudnn git-lfs

# switch to the GNU programming environment
module unload PrgEnv-cray PrgEnv-intel PrgEnv-gcc
module load PrgEnv-gnu
module switch gcc/11.2.0

# ensure dynamic linking with the Cray compiler wrappers
export CRAYPE_LINK_TYPE=dynamic

This should provide all the dependencies needed to build the GPU versions of the ML backends. Users can then proceed with their preferred way of installing SmartSim, either from PyPI or from source.
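For example, installing from PyPI might look like the following (a sketch; see the main installation instructions for the full set of options):

pip install smartsim[ml]
smart build --device gpu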

Cheyenne at NCAR

Since SmartSim does not currently support the Message Passing Toolkit (MPT), Cheyenne users of SmartSim will need to utilize OpenMPI.

The following module commands were utilized to run the examples:

$ module purge
$ module load ncarenv/1.3 gnu/8.3.0 ncarcompilers/0.5.0 netcdf/4.7.4 openmpi/4.0.5

With this environment loaded, users will need to build and install both SmartSim and SmartRedis through pip. We generally recommend installing or loading Miniconda and using the pip that comes with that installation.
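For example, a fresh conda environment could be created first (a sketch; the environment name and Python version are illustrative):

$ conda create -n smartsim python=3.9
$ conda activate smartsim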

$ pip install smartsim
$ smart build --device cpu  #(Since Cheyenne does not have GPUs)

To make the SmartRedis library (C, C++, Fortran clients), follow these steps with the same environment loaded.

# clone SmartRedis and build
$ git clone https://github.com/CrayLabs/SmartRedis.git smartredis
$ cd smartredis
$ make lib
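The resulting library and headers can then be used to compile an application against SmartRedis. For example (a sketch only, assuming make lib places its output in the repository's install/ directory; the exact layout and link line depend on the SmartRedis version and your application):

$ mpicxx my_sim.cpp -I$(pwd)/install/include -L$(pwd)/install/lib -lsmartredis -o my_sim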

Summit at OLCF

Since SmartSim does not have a pre-built PowerPC wheel, the build steps for an IBM system are slightly different from those for other systems.

Luckily for us, a conda channel with all relevant packages is maintained as part of the OpenCE initiative. Users can follow these instructions to get a working SmartSim build with PyTorch and TensorFlow for GPU on Summit. Note that SmartSim and SmartRedis will be downloaded to the working directory from which these instructions are executed.

# setup Python and build environment
export ENV_NAME=smartsim-0.4.2
git clone https://github.com/CrayLabs/SmartRedis.git smartredis
git clone https://github.com/CrayLabs/SmartSim.git smartsim
conda config --prepend channels https://ftp.osuosl.org/pub/open-ce/1.4.1/
conda create --name $ENV_NAME -y  python=3.9 \
                                  git-lfs \
                                  cmake \
                                  make \
                                  cudnn=8.1.1_11.2 \
                                  cudatoolkit=11.2.2 \
                                  tensorflow=2.6.2 \
                                  libtensorflow=2.6.2 \
                                  pytorch=1.9.0 \
                                  torchvision=0.10.0
conda activate $ENV_NAME
export CC=$(which gcc)
export CXX=$(which g++)
export LDFLAGS="$LDFLAGS -pthread"
export CUDNN_LIBRARY=/ccs/home/$USER/.conda/envs/$ENV_NAME/lib/
export CUDNN_INCLUDE_DIR=/ccs/home/$USER/.conda/envs/$ENV_NAME/include/
module load cuda/11.4.2
export LD_LIBRARY_PATH=$CUDNN_LIBRARY:$LD_LIBRARY_PATH:/ccs/home/$USER/.conda/envs/$ENV_NAME/lib/python3.9/site-packages/torch/lib
module load gcc/9.3.0
module unload xalt
# clone SmartRedis and build
pushd smartredis
make lib && pip install .
popd

# clone SmartSim and build
pushd smartsim
pip install .

# install PyTorch and TensorFlow backend for the Orchestrator database.
export Torch_DIR=/ccs/home/$USER/.conda/envs/$ENV_NAME/lib/python3.9/site-packages/torch/share/cmake/Torch/
export CFLAGS="$CFLAGS -I/ccs/home/$USER/.conda/envs/$ENV_NAME/lib/python3.9/site-packages/tensorflow/include"
export SMARTSIM_REDISAI=1.2.5
export Tensorflow_BUILD_DIR=/ccs/home/$USER/.conda/envs/$ENV_NAME/lib/python3.9/site-packages/tensorflow/
smart build --device=gpu --torch_dir $Torch_DIR --libtensorflow_dir $Tensorflow_BUILD_DIR -v

# Show LD_LIBRARY_PATH for future reference
echo "SmartSim installation is complete, LD_LIBRARY_PATH=$LD_LIBRARY_PATH"

When executing SmartSim, if you want to use the PyTorch and TensorFlow backends in the orchestrator, you will need to set up the same environment used at build time:

module load cuda/11.4.2
export ENV_NAME=smartsim-0.4.2  # same environment name used at build time
export CUDNN_LIBRARY=/ccs/home/$USER/.conda/envs/$ENV_NAME/lib/
export LD_LIBRARY_PATH=/ccs/home/$USER/.conda/envs/$ENV_NAME/lib/python3.9/site-packages/torch/lib/:$LD_LIBRARY_PATH:$CUDNN_LIBRARY
module load gcc/9.3.0
module unload xalt

Site Installation

Certain HPE customer machines have a site installation of SmartSim. This means that users can bypass the smart build step that builds the ML backends and the Redis binaries. Users on these platforms can install SmartSim from PyPI or from source with the following steps, replacing COMPILER_VERSION and SMARTSIM_VERSION with the desired entries.

module use -a /lus/scratch/smartsim/local/modulefiles
module load cudatoolkit/11.4 cudnn smartsim-deps/COMPILER_VERSION/SMARTSIM_VERSION
pip install smartsim[ml]
smart build --only_python_packages --device gpu [--onnx]