A few important details (as of 12th October 2017):

  • When installing Antergos, do not choose to install the NVIDIA proprietary drivers! It seems that the current driver (nvidia-387) has problems with the system, at least on my 2 x GTX 1080 configuration, causing the screen to go black after boot.

After a few hours of experimentation, trying various solutions found on the web, I decided to do what everyone loves to do - install stuff by hand.

Why even bother with Antergos, an Arch-based distribution? I had been using Ubuntu/Mint for quite some time and over the weeks started to experience more and more strange bugs, so finally, making use of a few less hectic days, I decided to give a rolling distro a try.


We will start by installing CUDA (and the Nvidia graphics driver along the way), then connect cuDNN to it and set up two Anaconda virtual environments: one for Tensorflow (built from source) and one for Pytorch. With all of that, we should have a machine ready for a complete ML/DL experience.

This tutorial aims to provide quite detailed instructions for installing the ML stack, but omits some basic Linux steps. Many potential problems depend on the particular hardware configuration, so it’s quite hard to account for everything that can happen during the process.

Disclaimer: I am by no means a Linux expert; I just like to make things work through experimentation when they do not. If there are mistakes in this tutorial, I’ll be happy to correct them according to your suggestions.

Antergos Installation:

Basically, just download the Live ISO from the Antergos site. I burned the image to a DVD, as making a Live USB resulted in boot problems - I could not boot from it (some people have reported similar problems when using UEFI). Then proceed through the installation - it’s quite easy, and you can follow the official tutorial. Just remember NOT to install the proprietary graphics driver.

CUDA 9.0:

Now, let’s get to the CUDA installation. Set the GCC compiler to version 6!

  • Install gcc6: yaourt -S gcc6

  • Install the base development packages: sudo pacman -S base-devel

  • Install the linux headers: sudo pacman -S linux-headers

  • Download: CUDA 9.0. Choose the version for Ubuntu 17.04, x86_64, runfile (local). I’m not sure it makes a difference, as you’ll override the default compiler setting anyway (if I am mistaken, please correct me!).

  • Blacklist Nouveau: If we chose not to install the proprietary driver during the Antergos installation, the Nouveau driver was installed by default. In that case, we have to start by blacklisting it. Create a blacklist.conf file in the /etc/modprobe.d/ directory with your text editor of choice; it should contain the following lines:

    blacklist nouveau
    options nouveau modeset=0

    Optional - regenerate the initramfs: You can probably regenerate your initramfs with:

    mkinitcpio -p linux

    When the file is created (and the initramfs regenerated), reboot your computer.
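Put together, the blacklisting step can be sketched as a small shell snippet. To keep it safe to try, the sketch below writes to a temp location; on the real system the target is /etc/modprobe.d/blacklist.conf, which is root-owned:

```shell
# Sketch: generate the blacklist file content. On the real system, write it
# to /etc/modprobe.d/blacklist.conf instead (e.g. via `sudo tee`), since
# that directory is root-owned.
conf="${TMPDIR:-/tmp}/blacklist.conf"
printf 'blacklist nouveau\noptions nouveau modeset=0\n' > "$conf"
cat "$conf"
# Then (on the real system): sudo mkinitcpio -p linux, and reboot.
```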

  • Install CUDA: Now that your computer is running again, you should see just a black screen. CTRL + ALT + F2 launches a terminal; log in and head into the CUDA download directory. The default gcc version on Antergos is 7.2.0, so we have to switch the default to 6 in order to install CUDA properly. This can be done in an elegant manner with a script I found, which lets you switch compiler versions. Apply it with sudo sh gcc-set-default-version 6 and install the library with sudo sh cuda_9.0.176_384.81_linux.run. Install the whole package - the Nvidia driver, the CUDA toolkit & the samples. Create the symbolic link and run nvidia-xconfig. When it’s done, reboot your machine.

    Add the following lines to your ~/.bashrc file:

    export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
    export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

    and run source ~/.bashrc to reload the file.
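As a side note, the ${PATH:+:${PATH}} construct appends a colon only when the variable is already set, so the resulting value never ends with a stray ":". A quick demonstration with a throwaway variable:

```shell
# ${VAR:+:${VAR}} expands to ":<old value>" only if VAR is set and non-empty,
# so the first assignment produces no trailing colon.
unset DEMO_PATH
DEMO_PATH=/usr/local/cuda-9.0/bin${DEMO_PATH:+:${DEMO_PATH}}
echo "$DEMO_PATH"    # -> /usr/local/cuda-9.0/bin
DEMO_PATH=/extra/bin${DEMO_PATH:+:${DEMO_PATH}}
echo "$DEMO_PATH"    # -> /extra/bin:/usr/local/cuda-9.0/bin
```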

  • Check Installation: Head into the samples directory, whose location is set during installation (default: ~/NVIDIA_CUDA-9.0_Samples/), then into ‘1_Utilities/deviceQuery’. Build the utility with sudo make and run it with ./deviceQuery to see if your GPUs are properly detected. If CUDA has been installed properly, you should get output similar to this:

    Detected 2 CUDA Capable device(s)
    Device 0: "GeForce GTX 1080"
      CUDA Driver Version / Runtime Version          9.0 / 9.0
      CUDA Capability Major/Minor version number:    6.1
      Total amount of global memory:                 8114 MBytes (8508145664 bytes)
      (20) Multiprocessors, (128) CUDA Cores/MP:     2560 CUDA Cores
      GPU Max Clock rate:                            1734 MHz (1.73 GHz)
      Memory Clock rate:                             5005 MHz
      Memory Bus Width:                              256-bit
      L2 Cache Size:                                 2097152 bytes
      Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
      Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
      Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
      Total amount of constant memory:               65536 bytes
      Total amount of shared memory per block:       49152 bytes
      Total number of registers available per block: 65536
      Warp size:                                     32
      Maximum number of threads per multiprocessor:  2048
      Maximum number of threads per block:           1024
      Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
      Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
      Maximum memory pitch:                          2147483647 bytes
      Texture alignment:                             512 bytes
      Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
      Run time limit on kernels:                     No
      Integrated GPU sharing Host Memory:            No
      Support host page-locked memory mapping:       Yes
      Alignment requirement for Surfaces:            Yes
      Device has ECC support:                        Disabled
      Device supports Unified Addressing (UVA):      Yes
      Supports Cooperative Kernel Launch:            Yes
      Supports MultiDevice Co-op Kernel Launch:      Yes
      Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
      Compute Mode:
         < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
    Device 1: "GeForce GTX 1080"

    If you see this, it means that CUDA is ready & working. That was the hardest step - if you can see proper deviceQuery output, congratulations :) !

cuDNN 7.0:

  • Download: cuDNN 7.0

  • Installation: cuDNN Linux Installation Guide

    According to the installation guide, first unzip the archive: tar -xzvf cudnn-9.0-linux-x64-v7.tgz

    Then create a script with the following commands:

    sudo cp cuda/include/cudnn.h /usr/local/cuda/include
    sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
    sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

    and run it in the directory where the .tgz file was extracted. And that’s it!
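A quick, hedged sanity check that the files landed where CUDA looks for them - the paths below assume the default /usr/local/cuda prefix used in the commands above:

```shell
# Report whether the copied cuDNN header and library are in place
# (paths assume the default /usr/local/cuda prefix used above).
checked=0
for f in /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn.so; do
  if [ -e "$f" ]; then echo "OK:      $f"; else echo "missing: $f"; fi
  checked=$((checked+1))
done
```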


Anaconda:

  • Download: Anaconda 3.6, 64-bit (x86) Installer

  • Installation: Straightforward - just follow the prompts, but do not add the Anaconda install location to PATH in your ~/.bashrc.

    Just in case, here’s the guide.

    Afterwards, head into /home/<your_username>/anaconda3/bin/ and run: ./conda update --all.

  • Virtual environments:

    Environments guide

    From /home/<your_username>/anaconda3/bin/, create two environments - one for Tensorflow and one for Pytorch - using the Intel Distribution (it comes with many packages and the Intel Performance Libraries). First run: conda config --add channels intel and then:

    • For Tensorflow, named ‘idp3’ - ./conda create -n idp3 intelpython3_full python=3

    but delete the stock tensorflow from the environment (pip uninstall tensorflow) and update the packages with pipup - if you have added the optional shortcuts (information below).

    • For Pytorch, named ‘idp3torch’, clone the one made for TF - ./conda create --name idp3torch --clone idp3.

Optional - bash shortcuts:

Add those to your ~/.bashrc:

  • alias pipup='pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs -n1 pip install -U'
  • alias idp3='cd ~/anaconda3/bin/ && source activate idp3 && cd -'
  • alias idp3torch='cd ~/anaconda3/bin/ && source activate idp3torch && cd -'
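To see what the pipup alias actually does, here is its pipeline applied to fabricated pip freeze output (the package names and URL are just sample data): editable (-e) entries are dropped and everything after the first = is cut away, leaving bare package names for xargs to feed back into pip install -U.

```shell
# Simulated `pip freeze` output piped through the alias's filter stages:
# grep -v '^\-e' drops editable installs, cut keeps the name before '=='.
names="$(printf 'numpy==1.13.3\n-e git+https://example.invalid/pkg#egg=pkg\nscipy==0.19.1\n' \
  | grep -v '^\-e' | cut -d = -f 1)"
echo "$names"    # -> numpy and scipy, one per line
```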


Tensorflow:

Most of the instructions follow the TF from sources guide, so I’ll just list the specific commands used to perform the installation.

  • Float128 ERROR: Most probably you will encounter a float128 error, which can be worked around by editing the /usr/include/bits/floatn.h file and adding the following lines:

    #undef __HAVE_FLOAT128
    #define __HAVE_FLOAT128 0

    after the 34th line:

    # define __HAVE_FLOAT128 0
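The same edit can be scripted with GNU sed. Since touching a system header is risky, the sketch below rehearses the command on a stand-in file first; the real invocation (commented out) would need sudo and a backup:

```shell
# Real thing (GNU sed; make a backup first):
#   sudo cp /usr/include/bits/floatn.h /usr/include/bits/floatn.h.bak
#   sudo sed -i '34a #undef __HAVE_FLOAT128\n#define __HAVE_FLOAT128 0' /usr/include/bits/floatn.h
# Rehearsal on a 40-line stand-in file:
tmp="${TMPDIR:-/tmp}/floatn_demo.h"
seq 40 | sed 's|^|// header line |' > "$tmp"
sed -i '34a #undef __HAVE_FLOAT128\n#define __HAVE_FLOAT128 0' "$tmp"
sed -n '34,36p' "$tmp"   # the two new lines appear right after line 34
```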

Installation process:

  • Clone repository: git clone https://github.com/tensorflow/tensorflow

  • Install bazel: yaourt -S bazel, choose jdk-8 package (default).

  • Configure the TF installation: run ./configure in the cloned tensorflow repository. Select /usr/bin/gcc-6 as the GCC compiler (or maybe it’s already set as the default because you used the version-switch script during the CUDA installation - an easy check is gcc -v).

  • Build TF with CUDA support: bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

  • Build wheel file: bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

  • Install pip package into your environment: pip install --upgrade --ignore-installed /tmp/tensorflow_pkg/tensorflow-<click TAB to find your version>

Check TF installation: Create a tftest.py file with the following content:

  import tensorflow as tf
  hello = tf.constant('Hello, TensorFlow!')
  sess = tf.Session()
  print(sess.run(hello))

Load your virtual env with idp3 (if the alias was added as a bash shortcut) and run the check: python tftest.py. On success, the output will show information about your devices and the CUDA libraries being loaded, and Hello, TensorFlow! should appear at the end.


Pytorch:

In a way similar to Tensorflow’s installation, most of the steps are outlined in the Pytorch GitHub repository.

Make sure that your default gcc version is 6.4.0, not a newer (>6) one!

  • Change to your idp3torch environment with idp3torch

  • Export CMAKE_PREFIX_PATH and install basic dependencies:

    export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
    conda install numpy pyyaml mkl setuptools cmake cffi

    do not install LAPACK support (magma-cuda80)!

  • Clone repository: git clone --recursive https://github.com/pytorch/pytorch

  • Install PyTorch: python setup.py install.

Optional - install torchvision & check:

  • Install torchvision: pip install torchvision

  • Download examples: git clone https://github.com/pytorch/examples

  • Check:

    • cd examples/mnist
    • python main.py - if you see decreasing loss & numbered epochs, it works!

If you managed to get this far, big congratulations! You should now have a working ML & DL environment, and installing most of the standard packages should be a breeze.

