Toward a Containerized Nvidia CUDA, TensorFlow and OpenCV

Data Machines Corp. (DMC) works in fields that encompass Machine Learning (ML) and Computer Vision (CV). In those fields, we aim to empower researchers to discover, innovate, and invent. To do so, we strive to provide the tools to support these efforts.

Containers are powerful abstractions built on top of the Linux kernel: they allow projects to be developed and tested with the required software and libraries pre-installed, while limiting modifications to the host system. By using interactive shells with mounted directories on a running container, including X11 display if needed, researchers are able to test algorithms and ideas.

To do so, a one-time system setup is needed. In this post, we describe the steps needed to set up an Ubuntu 18.04 Linux system to run Nvidia Docker (v2), with the ultimate goal of using CUDA-optimized TensorFlow and OpenCV within a container.

Setting up the host system to run Nvidia Docker (v2)

Prerequisites

A system that can host an amd64 (AMD64 & Intel EM64T) Ubuntu Linux installation (at least a 2 GHz dual-core processor, 8 GB of system memory recommended, and 50 GB of free hard drive space) with internet access, and an Nvidia CUDA-enabled GPU card (see https://developer.nvidia.com/cuda-gpus for additional information).

Install Ubuntu Linux 18.04

The following instructions run on a setup based on Ubuntu Linux 18.04 LTS:

  • Download a 64-bit PC (AMD64) desktop image of 18.04 from http://releases.ubuntu.com/18.04/.

  • The official amd64 Ubuntu Installation Guide can be found at https://help.ubuntu.com/lts/installation-guide/amd64/index.html.

  • Make sure that you have a sudo enabled user with network access to download additional packages.

  • After installation, perform system updates as needed, and confirm that gcc and build tools are installed; you can do so using sudo apt-get -y update && sudo apt-get install -y build-essential from a command prompt.

Install CUDA 10.1 drivers & libraries

In order to use CUDA on the Linux box, an Nvidia card needs to be available and recognized by the kernel. Confirm that the Linux kernel sees the Nvidia card(s) using lspci | grep -i nvidia to enumerate the available GPUs.

The following command lines add the CUDA 10.1 repository to your Ubuntu Linux system (Line 1), add Nvidia's public key to the authorized keys (Line 2), update the local package list (Line 3), and install the CUDA packages onto the Ubuntu Linux system (Line 4).
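Note that the repository package installed on Line 1 must first be downloaded locally; a minimal sketch of retrieving it, assuming the standard layout of Nvidia's repository (the exact URL for your setup is listed on the CUDA downloads page linked below):

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.168-1_amd64.deb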

sudo dpkg -i cuda-repo-ubuntu1804_10.1.168-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda

At the time of writing this document, CUDA 10.1.168 was the latest package; up-to-date instructions are available from https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=debnetwork.

If you have trouble running those steps, additional installation instructions can be found at https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#pre-installation-actions.

During this installation, the nouveau driver will be added to the modprobe (Kernel modules) blacklist file to prevent it from loading.

After installation of the CUDA 10.1 drivers and support libraries on the Linux system, add the following to the .bashrc file (preferably before the "interactive" section of the file, if it has one), then reboot.

## CUDA add on
export PATH=/usr/local/cuda-10.1/bin:/usr/local/cuda-10.1/NsightCompute-2019.1${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
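
After the reboot (or after sourcing the updated .bashrc), a quick way to confirm that the CUDA toolkit is reachable through the updated PATH is to query the version of the nvcc compiler:

nvcc --version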

Rebooting will allow the Nvidia driver to be loaded as the default video driver for the Linux system. To confirm that the driver is properly loaded, use cat /proc/driver/nvidia/version; to confirm it is functional, call the Nvidia System Management Interface (SMI), which provides information on the available GPU(s) and their resources, using nvidia-smi.
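
Both checks can be run from a terminal:

cat /proc/driver/nvidia/version
nvidia-smi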

Installing a few additional third-party libraries is recommended before compiling the code samples provided by Nvidia, to confirm that compilation works and that the resulting code can run.

sudo apt-get install g++ freeglut3-dev build-essential libx11-dev \
	libxmu-dev libxi-dev libglu1-mesa libglu1-mesa-dev

The code samples can then be compiled in a temporary directory:

cd ~; mkdir -p Temp; cd Temp
cuda-install-samples-10.1.sh .
cd NVIDIA_CUDA-10.1_Samples
make

It is possible to run the compilation step (make) faster by using as many processors as available (if the system has enough memory to support it); simply run make -j$(nproc).

Once the build completes, enumerate the properties of the CUDA devices present in the system using the newly compiled ./bin/x86_64/linux/release/deviceQuery, and test the post-processing of an image rendered in OpenGL using CUDA with ./3_Imaging/postProcessGL/postProcessGL. Additional samples are provided; more detail about them is available at https://docs.nvidia.com/cuda/cuda-samples/index.html.
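
For example, from the samples directory built above:

cd ~/Temp/NVIDIA_CUDA-10.1_Samples
./bin/x86_64/linux/release/deviceQuery
./3_Imaging/postProcessGL/postProcessGL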

If you need support for Nsight Eclipse (a full-featured IDE powered by the Eclipse platform that provides an all-in-one integrated environment to edit, build, debug, and profile CUDA-C applications), see https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#install-nsight-plugins.

Nvidia's post-installation FAQ can be found at https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#faq.

Docker Community Edition

Next, proceed with the installation of the stable Community Edition (CE) of Docker.

The following text will describe steps needed for the current installation; the full installation instructions from Docker can be found at https://docs.docker.com/install/linux/docker-ce/ubuntu/.

First, install tools needed for the CE install:

sudo apt-get install apt-transport-https ca-certificates \
	curl gnupg-agent software-properties-common

Add the Docker public key to the local trusted keys:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Ubuntu Linux Stable Docker repository to the local repositories:

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

Update the repository information using sudo apt-get update, then install the core components for Docker CE, its command line interface (CLI), and the containerd runtime using sudo apt-get install docker-ce docker-ce-cli containerd.io.

Docker runs under the docker group; unless your user is added to that group, you will need sudo to use the docker command line. Note that the docker group grants privileges equivalent to the root user; for details on how this impacts security in your system, see "Docker Daemon Attack Surface" at https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface. Understanding this, add your user to the docker group with sudo usermod -aG docker $USER. The new group will not take effect until you log out (in some cases, reboot) and log back in. Before continuing, check that the docker group is listed as part of your groups using id.
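
Summarizing the commands described in this step:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
## adds your user to the docker group (root-equivalent privileges, see above)
sudo usermod -aG docker $USER
## after logging out and back in, confirm docker appears in your groups
id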

Finally, confirm that Docker is operational: docker run --rm hello-world. The --rm flag deletes the container once it is done running (so you do not have unknown containers left when you run docker ps -a).

Nvidia Docker

Nvidia Docker is a runtime for the Docker daemon that abstracts the Nvidia GPU(s) available to the host OS using the Nvidia driver, such that a container's CUDA toolkit uses the host's Nvidia driver. For more details on the technology, see https://devblogs.nvidia.com/gpu-containers-runtime/.

The Github repository for Nvidia Docker (v2) is available at https://github.com/NVIDIA/nvidia-docker.

On our Ubuntu Linux system, the runtime is added by registering the Nvidia public key with the trusted keys and adding the Nvidia repository to the system list:

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list

After updating the package list, install the nvidia-docker2 runtime and signal the Docker daemon to reload its configuration, so that the new runtime is added to its available list:

sudo apt-get update && sudo apt-get install -y nvidia-docker2 && sudo pkill -SIGHUP dockerd
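
The nvidia-docker2 package registers the new runtime in the Docker daemon's configuration file; you can inspect that file to confirm that an nvidia runtime entry is present:

cat /etc/docker/daemon.json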

Confirm that it is functional by running the same nvidia-smi command through the container interface:

docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi

DMC's CUDA/TensorFlow/OpenCV

TensorFlow [https://www.tensorflow.org/] is an open source library that helps users develop and train ML models using a comprehensive, flexible ecosystem of tools and libraries. TensorFlow is available as a GPU-optimized container image, running CUDA 9.0 on Ubuntu 16.04, which can be used to create virtual environments that isolate a TensorFlow installation from the rest of the system while sharing the resources of the host machine.
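As an illustration, a container can be asked whether TensorFlow sees the GPU; a minimal sketch, assuming the tensorflow/tensorflow:latest-gpu image and the TensorFlow 1.x API:

docker run --runtime=nvidia --rm tensorflow/tensorflow:latest-gpu \
	python -c 'import tensorflow as tf; print(tf.test.is_gpu_available())'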

OpenCV (Open Source Computer Vision Library) [https://opencv.org/] is an open source computer vision and machine learning software library, with more than 2500 optimized algorithms supporting classic and state-of-the-art computer vision and machine learning tasks. OpenCV can be built to support CUDA (for GPU support) and OpenMP (for shared-memory and high-level parallelism on multi-core CPUs) [https://www.openmp.org/].
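When OpenCV's Python bindings are available, its build options (including CUDA and OpenMP support) can be verified from its build information string; a minimal sketch, assuming the cv2 module is installed:

python -c 'import cv2; print(cv2.getBuildInformation())'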

DMC has built a cuda_tensorflow_opencv container image by compiling a CUDA/OpenMP-optimized OpenCV FROM the GPU-optimized TensorFlow container image. This image is designed to contain many needed tools to be used by data scientists and to support machine learning research, with frameworks such as NumPy [https://www.numpy.org/], pandas [https://pandas.pydata.org/], and Keras [https://keras.io/].

Because of its integration of many core tools and libraries useful to researchers, the image can be used as a FROM for further Docker images to build from. For example, it has successfully been used to build a GPU/OpenMP/OpenCV-optimized Darknet (Open Source Neural Networks in C) [https://pjreddie.com/darknet/] "You Only Look Once" (v3) [https://pjreddie.com/darknet/yolo/] processor using Python bindings [https://github.com/madhawav/YOLO3-4-Py].
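
A minimal sketch of such a derived Dockerfile (the tag and the added packages are hypothetical placeholders; available tags are listed on the Docker Hub page referenced below):

## hypothetical Dockerfile building on top of the DMC image
FROM datamachines/cuda_tensorflow_opencv:<tag>
## add your project-specific dependencies here, for example:
RUN pip install <your-extra-python-packages>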

The container image also has support for X11 display for interactive use: a user can call the provided runDocker.sh script from any location, and that location will automatically be mounted as /dmc and be accessible from an interactive shell, enabling quick code prototyping without needing to set up a complex environment. For example, NumPy and OpenCV can be used to load a picture found in the directory mounted as /dmc and display it on the user's X11 display through the container.
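
A minimal sketch of such a quick test from the container's interactive shell, assuming a picture.jpg (a hypothetical file name) is present in the directory mounted as /dmc:

python -c 'import cv2; img = cv2.imread("/dmc/picture.jpg"); cv2.imshow("picture", img); cv2.waitKey(0)'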

DMC has made the Dockerfile and supporting files, including usage and build instructions, publicly available on its GitHub at https://github.com/datamachines/cuda_tensorflow_opencv.

Building this container image is a long process; it was originally intended to be auto-built by Docker Hub from the repository provided on GitHub, but the original attempt took over 4 hours on Docker Hub and was automatically canceled by the build system. On a 2.8 GHz quad-core 7th-generation i7, using an SSD and running 8 concurrent make jobs, it still takes 1 hour (per tag) to build. As such, we have made the final builds for different tags available on our Docker Hub, where they can be used directly as a FROM for your Docker containers: https://hub.docker.com/r/datamachines/cuda_tensorflow_opencv.
