Podman
Podman is a container platform focused on security. It is robust and supported by Red Hat. Podman supports images in the Open Container Initiative (OCI) format; these are also supported by Docker, Kubernetes and Singularity, among others.
To get more information about Podman, the following manual pages are available:
man podman
man podman-run
man podman-build
Using Containers with Podman
To use a container non-interactively with Podman, the basic syntax is:
podman run --rm CONTAINER_NAME ARGS
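For example, assuming access to Docker Hub, the following prints a message from inside an Ubuntu container and removes the container afterwards:
podman run --rm docker.io/library/ubuntu:24.04 echo 'Hello from a container'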
To use a container interactively with Podman, the basic syntax is:
podman run -it --rm CONTAINER_NAME ARGS
To use a container non-interactively with a GPU with Podman, the basic syntax is:
podman run --rm --device nvidia.com/gpu=all CONTAINER_NAME ARGS
To make a folder on the host available within the container environment, you must additionally use a bind-mount. For example, to make your home folder available at the same place inside a non-interactive container:
podman run --volume "$HOME:$HOME" --rm CONTAINER_NAME ARGS
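On hosts where SELinux is enforcing (common on Red Hat-based systems), a bind-mount may additionally need the :z (shared) or :Z (private) relabelling option before the container is allowed to access the mounted files; whether this is required depends on the local configuration, and relabelling a large or shared directory such as your home folder should be done with care:
podman run --volume "$HOME:$HOME:z" --rm CONTAINER_NAME ARGS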
If an image is not available locally, Podman will pull it automatically from a remote registry such as Docker Hub.
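Images can also be pulled explicitly ahead of time. Using the fully qualified image name avoids any ambiguity between registries, for example for the TensorFlow image used in the examples below:
podman pull docker.io/tensorflow/tensorflow:2.17.0-gpu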
Building Containers with Podman
With Podman, containers can be built using a Dockerfile. See the Dockerfile reference for the full syntax.
To build a container, create a folder for the Dockerfile, say CONTAINER_FOLDER, and store or write a Dockerfile at CONTAINER_FOLDER/Dockerfile. Then, to build the container, use:
podman build -t CONTAINER_NAME CONTAINER_FOLDER
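A successful build should then be visible in the local image list:
podman images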
Examples of Building and Using Podman Containers
Some examples of building and using containers are given below:
Build and Use a Container with Tensorflow and Various Python Packages (Podman)
To extend the tensorflow/tensorflow image on Docker Hub by installing various Python packages inside it, you could use the Dockerfile below:
FROM tensorflow/tensorflow:2.17.0-gpu
RUN pip install pandas matplotlib scikit-learn pyyaml keras biopython numba viennarna keras_tuner
Assuming the above is stored as a file called Dockerfile in a folder called tensorflow-build, it can be built with:
podman build -t tensorflow-plus tensorflow-build
The container can then be executed with:
podman run -it --rm tensorflow-plus
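On a GPU node, one way to confirm that TensorFlow can see the GPU is to run a short check inside the container (a minimal sketch, using the tensorflow-plus image built above):
podman run --rm --device nvidia.com/gpu=all tensorflow-plus python -c 'import tensorflow as tf; print(tf.config.list_physical_devices("GPU"))'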
If you have a Python script in your home folder called myscript.py, you could create (and submit – see Accessing Compute Resources) a job file such as the following to run the script in the container on a CPU node.
#!/bin/bash
#SBATCH -p cpu
podman run --volume "$HOME:$HOME" --rm tensorflow-plus python "$HOME/myscript.py"
To run the same Python script on a GPU node with a GPU available, you could use the following:
#!/bin/bash
#SBATCH -p gpu_l40s
#SBATCH --gpus 1
podman run --volume "$HOME:$HOME" --rm --device nvidia.com/gpu=all tensorflow-plus python "$HOME/myscript.py"
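Assuming the job file above is saved as, say, gpu-job.sh (the name is illustrative), it can be submitted with:
sbatch gpu-job.sh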
Build a Container from Rocky Linux 9 with Python and requirements.txt (Podman)
To build a container from Rocky Linux 9 (functionally equivalent to Red Hat Enterprise Linux 9) and install Python and arbitrary Python packages according to a requirements.txt file (in the same folder as the Dockerfile below), you could use the Dockerfile below:
FROM rockylinux:9
COPY requirements.txt /requirements.txt
RUN dnf install -y python3 python3-pip
RUN python3 -m pip install -r /requirements.txt
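Assuming the Dockerfile and requirements.txt are stored in a folder called rocky-python-build (the folder and image names here are illustrative), the container can be built and smoke-tested with:
podman build -t rocky-python rocky-python-build
podman run --rm rocky-python python3 --version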
Delft3D
Note that the following example is advanced. It is shared for the reader’s interest or in case there are any readers with the same issue.
The following set of Dockerfiles is used to build containers providing the Delft3D FM Suite. They can be used by anyone with containerisation software, including Podman, Docker, Buildx and more. Note that this example builds from a Red Hat Universal Base Image (UBI); this can be switched to e.g. Rocky Linux if the reader prefers.
Intel
First a container with Intel’s utilities is built:
FROM registry.access.redhat.com/ubi9
#FROM rockylinux:9
COPY oneAPI.repo /etc/yum.repos.d/oneAPI.repo
RUN yum update -y && yum groupinstall -y 'Development Tools'
RUN yum install -y intel-basekit-2023.2.0 intel-hpckit-2023.2.0 intel-basekit-32bit-2023.2.0 intel-hpckit-32bit-2023.2.0 procps-ng
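The oneAPI.repo file copied into the image is Intel's yum repository definition; it is not reproduced in the original, but a version matching Intel's published installation instructions looks like:
[oneAPI]
name=Intel oneAPI repository
baseurl=https://yum.repos.intel.com/oneapi
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
Because the next stage begins FROM localhost/intel, this image should be built with a matching tag, e.g. podman build -t intel CONTAINER_FOLDER.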
Intel Delft3D Base
The above container is then extended with the dependencies of Delft3D:
FROM localhost/intel
RUN dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm && crb enable && dnf config-manager --set-enabled codeready-builder-for-rhel-9-x86_64-rpms
RUN yum install -y cmake ninja-build hdf5 hdf5-devel netcdf netcdf-devel metis metis-devel gdal gdal-devel util-linux sqlite sqlite-devel libtiff libtiff-devel unzip git proj proj-devel procps-ng patchelf subversion uuid uuid-devel libuuid libuuid-devel
RUN mkdir /delft3d /netcdf /cmake /petsc
RUN curl -L 'https://github.com/Unidata/netcdf-fortran/archive/refs/tags/v4.6.1.tar.gz' | tar --strip-components=1 -xzvC /netcdf
COPY config-intel.sh /netcdf
COPY delft3d-all-release-2025.01.zip /delft3d.zip
RUN cd /delft3d && unzip /delft3d.zip
RUN . /opt/intel/oneapi/setvars.sh && cd /netcdf && ./config-intel.sh && make -j && make check -j && make install -j
ENV PKG_CONFIG_PATH=/usr/local/netcdf-ifort/4.6.1/lib/pkgconfig
RUN curl -L 'https://github.com/Kitware/CMake/releases/download/v3.31.6/cmake-3.31.6-linux-x86_64.tar.gz' | tar --strip-components=1 -xzvC /cmake
RUN yum install -y hostname
RUN curl -L 'https://web.cels.anl.gov/projects/petsc/download/release-snapshots/petsc-3.19.6.tar.gz' | tar --strip-components=1 -xzvC /petsc
RUN . /opt/intel/oneapi/setvars.sh && export I_MPI_CC=icx && export I_MPI_CXX=icpx && export I_MPI_F90=ifort && export PETSC_USE_FORTRAN_BINDINGS=1 && cd /petsc && ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --prefix=/usr --with-fortran=1 --with-debugging=0 COPTFLAGS='-O3 -march=native' CXXOPTFLAGS='-O3 -march=native' FOPTFLAGS='-O3 -march=native' && make all DESTDIR=/tmp/petsc-pkg -j && make install DESTDIR=/tmp/petsc-pkg -j
RUN cp -r /tmp/petsc-pkg/usr /
ENV PKG_CONFIG_PATH=/usr/local/netcdf-ifort/4.6.1/lib/pkgconfig:/usr/lib/pkgconfig
This copies the Delft3D source code and the config-intel.sh script below from the same directory as the Dockerfile:
export PATH="$PATH:/opt/intel/oneapi/compiler/latest/linux/bin/intel64/"
export CDFROOT="/usr"
export LD_LIBRARY_PATH="${CDFROOT}/lib:${LD_LIBRARY_PATH}"
export LDFLAGS="-L${CDFROOT}/lib -I${CDFROOT}/include"
export OPTIM="-O3 -mcmodel=large -fPIC ${LDFLAGS}"
export CC=icx
export CXX=icx
export FC=ifort
export F77=ifort
export F90=ifort
export CPP='icx -E -mcmodel=large'
export CXXCPP='icx -E -mcmodel=large'
export CPPFLAGS="-DNDEBUG -DpgiFortran ${LDFLAGS}"
export CFLAGS=" ${OPTIM}"
export CXXFLAGS=" ${OPTIM}"
export FCFLAGS=" ${OPTIM}"
export F77FLAGS=" ${OPTIM}"
export F90FLAGS=" ${OPTIM}"
./configure --prefix=/usr/local/netcdf-ifort/4.6.1 --enable-large-file-tests --with-pic
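Because the images in the following sections begin FROM localhost/intel-delft3d-base, this image should likewise be built with a matching tag, for example:
podman build -t intel-delft3d-base CONTAINER_FOLDER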
From this image, several further images are built, providing the various Delft3D offerings.
Delft3D FM
FROM localhost/intel-delft3d-base
RUN export PATH="/cmake/bin:$PATH" FC=mpiifort CXX=mpiicpx CC=mpiicx && . /opt/intel/oneapi/setvars.sh && cd /delft3d && bash ./build.sh all
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
The file below is entrypoint.sh in this case:
#!/bin/bash
. /etc/profile
. /opt/intel/oneapi/setvars.sh 1>/dev/null 2>&1
export PATH="$PATH:/delft3d/build_all/install/bin"
exec "$@"
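Because the entrypoint loads the Intel environment and then executes whatever command was supplied (exec "$@"), any command passed to podman run sees the Delft3D binaries on its PATH, for example (the delft3d-fm image tag is illustrative):
podman run --rm localhost/delft3d-fm bash -c 'echo "$PATH"'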
Delft3D Flow2D3D
FROM localhost/intel-delft3d-base
RUN export PATH="/cmake/bin:$PATH" FC=mpiifort CXX=mpiicpx CC=mpiicx && . /opt/intel/oneapi/setvars.sh && cd /delft3d && bash ./build.sh flow2d3d
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
The file below is entrypoint.sh in this case:
#!/bin/bash
. /etc/profile
. /opt/intel/oneapi/setvars.sh 1>/dev/null 2>&1
export PATH="$PATH:/delft3d/build_flow2d3d/install/bin"
exec "$@"
Delft3D Delft3D4
FROM localhost/intel-delft3d-base
RUN export PATH="/cmake/bin:$PATH" FC=mpiifort CXX=mpiicpx CC=mpiicx && . /opt/intel/oneapi/setvars.sh && cd /delft3d && bash ./build.sh delft3d4
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
The file below is entrypoint.sh in this case:
#!/bin/bash
. /etc/profile
. /opt/intel/oneapi/setvars.sh 1>/dev/null 2>&1
export PATH="$PATH:/delft3d/build_delft3d4/install/bin"
exec "$@"
Execution
We then expose these images by converting them to Singularity images, using the methods described in Singularity, and provide wrapper scripts for ease of use.
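For example, one way to perform such a conversion (image and file names illustrative; see the Singularity section for the locally supported workflow) is:
podman save --format oci-archive -o delft3d-fm.tar localhost/delft3d-fm
singularity build delft3d-fm.sif oci-archive://delft3d-fm.tar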