Spinning Up a Docker Container with CUDA-Enabled TensorFlow

Dec 10, 2021

If you are a deep learning practitioner, you know the pain of setting up CUDA-enabled TensorFlow or PyTorch for a new project. The process involves installing the Nvidia driver, CUDA, cuDNN, and TensorFlow/PyTorch, and each of these is a hurdle on the way to the actual project. At times the installation alone consumes a large chunk of your time.

All the aforementioned hurdles can be avoided using Docker.

Docker is an open source containerization platform. It enables developers to package applications into containers — standardized executable components combining application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.

Docker Hub is a portal where you can search for Docker images. Relevant to this article, the portal hosts nvidia-docker2, TensorFlow (GPU/CPU), and PyTorch (GPU/CPU) images.
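For instance, once Docker is set up (covered below), any image on the hub can be pulled by name and tag, e.g. the GPU build of TensorFlow used later in this article:

```shell
# Pull the GPU build of the latest TensorFlow image from Docker Hub
docker pull tensorflow/tensorflow:latest-gpu
```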

Nvidia Driver Installation (Optional)

In this article, we will set up a TensorFlow Docker Container that utilizes GPU from the host system.

Before setting up Docker you need to install the Nvidia drivers on the host system; if you already have the drivers set up, you can skip this step.

To find the compatible drivers for the GPU type:

ubuntu-drivers devices

Once you have the list of compatible drivers, install one of them, for example:

sudo apt-get install nvidia-driver-470-server

Once the drivers are installed verify the installation using the command:

nvidia-smi

Docker Installation

Update the apt package index and install packages to allow apt to use a repository over HTTPS:

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add Docker’s official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Use the following command to set up the stable repository. To add the nightly or test repository, add the word nightly or test (or both) after the word stable in the commands below.

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the apt package index, and install the latest versions of Docker Engine and containerd:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Verify that Docker Engine is installed correctly by running the hello-world image.

sudo docker run hello-world

Nvidia-Docker2 Installation

Once Docker is installed, we need to set up Nvidia Docker, which makes CUDA available inside Docker containers.
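Note that nvidia-docker2 is served from Nvidia's own apt repository, which must be added before the package can be found. A sketch based on the install guide linked in the references (the repository URLs are as of late 2021 and may have changed since):

```shell
# Add Nvidia's package repository for nvidia-docker2
# (see the install guide in the references for the current steps)
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list
```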

The steps for installation are as follows. We first update the packages:

sudo apt-get update

Next, we install the nvidia-docker2 packages using apt-get:

sudo apt-get install -y nvidia-docker2

Once the nvidia-docker2 is installed we then restart the docker service to update the changes:

sudo systemctl restart docker

Last, we test whether the GPU is accessible inside a Docker container:

sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi

TensorFlow Docker Container

We can run the TensorFlow Docker container by specifying the TensorFlow version that we need. For now, we will only run a print statement to check whether a GPU is available inside the container.

docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
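Note that tf.test.is_gpu_available() is deprecated in TensorFlow 2.x in favor of tf.config.list_physical_devices; an equivalent check with the current API is:

```shell
# Equivalent GPU check using the non-deprecated TensorFlow API;
# prints a non-empty device list when a GPU is visible in the container
docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```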

Further details about TensorFlow Docker containers can be found on the docs page linked in the references.
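As a usage sketch (the mount path and working directory are illustrative), a container for day-to-day development is typically started with the project directory mounted into it, so that code edited on the host is visible inside:

```shell
# Start an interactive shell in the container with the current
# project directory mounted at /workspace (illustrative path)
docker run --gpus all -it --rm \
    -v "$PWD":/workspace -w /workspace \
    tensorflow/tensorflow:latest-gpu bash
```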

References

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker

https://docs.docker.com/engine/install/ubuntu/

https://www.tensorflow.org/install/docker

https://hub.docker.com/

About the Author

LinkedIn: https://pk.linkedin.com/in/nauyan

GitHub: https://github.com/nauyan
