NVIDIA® CUDA® Toolkit
The Nvidia CUDA Toolkit is available on the login node and on the GPU nodes of the Lovelace cluster.
Loading CUDA
Many applications will autodetect CUDA and will not require it to be loaded manually. To load CUDA manually, simply run:
module load cuda-toolkit
To see the current CUDA version supported on the GPU nodes of the cluster, run the following:
module --default avail cuda-toolkit
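A typical session might look like the following sketch; the exact version string reported will depend on the toolkit currently installed on the cluster:

```shell
# Load the default CUDA toolkit module and confirm the compiler is on PATH.
module load cuda-toolkit
nvcc --version    # reports the toolkit release of the loaded module
which nvcc        # should resolve to a path inside the module's install tree
```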
CUDA is subject to version compatibility guarantees. [1] However, the HPC Admin Team recommends that users build their applications against the same version of CUDA, as given above, where possible.
Using CUDA
CUDA documentation [2] and Nvidia’s CUDA examples repository [3] are good resources for writing CUDA applications in the C or C++ programming languages.
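As a minimal illustration of the C/C++ workflow, the following sketch writes a trivial vector-addition kernel to a file and compiles it with the toolkit's `nvcc` compiler. The file name and kernel are hypothetical, and the resulting binary must be run on a GPU node:

```shell
# Illustrative only: create, compile, and run a trivial CUDA C program.
cat > vector_add.cu <<'EOF'
#include <cstdio>

// Each thread adds one pair of elements.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    // Unified memory keeps the sketch short; production code may prefer
    // explicit cudaMalloc/cudaMemcpy.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
EOF
nvcc vector_add.cu -o vector_add
./vector_add    # run on a GPU node, not the login node
```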
Alternatively, users can use packages in higher level programming languages such as CUDA.jl [4] for Julia or CuPy [5] for Python.
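Installation of these packages is typically done per-user; the commands below are a hedged sketch, and the CuPy wheel name should be matched to the CUDA version available on the cluster:

```shell
# Assumed installation commands for the packages mentioned above.
python -m pip install --user cupy-cuda12x    # CuPy wheel built for CUDA 12.x
julia -e 'using Pkg; Pkg.add("CUDA")'        # CUDA.jl for Julia
```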
Please also familiarise yourself with the job submission parameters needed to request GPUs on the L40S and H100 nodes.
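For orientation, a batch script might look like the sketch below. This assumes the cluster uses Slurm and requests GPUs via `--gres`; the job name and GPU count are illustrative, so consult the cluster's own GPU-node documentation for the correct partition and GPU type names:

```shell
#!/bin/bash
# Illustrative Slurm batch script; parameters here are assumptions, not
# the cluster's actual required settings.
#SBATCH --job-name=cuda-test
#SBATCH --gres=gpu:1

module load cuda-toolkit
nvidia-smi    # confirm the allocated GPU is visible to the job
```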