26 April 2013

396. Compiling gromacs 4.6 with gpu support, openblas and fftw3 on debian wheezy

NOTE: with ACML my performance on my FX8150 and FX8350 nodes is only 25% of that with OpenBLAS (double precision). Yes, for some reason gromacs is four times faster with OpenBLAS than with the CPU vendor's own library in my tests.

Here are the release notes: http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.x
As far as I understand you no longer have to rely on OpenMM for CUDA support. Yes, the PITA of compiling OpenMM is gone!

Note that GPU calcs only speed things up under certain, specific conditions -- and not all nvidia cards are supported (or equal). My own set-up, using passively cooled graphics cards, is definitely not appropriate for a GPU cluster. Once nwchem comes out with GPU support I might upgrade to fancier $200 graphics cards (maybe COSMO in NWChem will finally become more reasonable in terms of computational cost), but there's little reason to at the moment.

Not all cards are created equal either -- e.g. the GT210, which has compute capability 1.2, is too limited to run with gromacs (the native GPU code in 4.6 needs compute capability 2.0 or higher). The GT430 (compute capability 2.1) works. Neither is viable for professional work, obviously.
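If you're not sure what card a node actually has, something like

lspci | grep -i nvidia

will tell you, and the compute capability can then be looked up in NVIDIA's documentation.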

Also note that it seems you still need OpenMM if you want GPU support for implicit solvation.

Gromacs used to be easy to install. It's become a fair bit more complicated between 4.5.5 and 4.6. See here for gromacs 4.5.5: http://verahill.blogspot.com.au/2012/05/gromacs-with-external-fftw3-and-blas-on.html

CUDA: If you want to build with cuda you need gcc-4.6, which is still available in the wheezy repos. 4.7 won't work. Luckily, you can have both on your system, but you'll need to specify CC and CXX as shown below.
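Both compiler versions coexist happily; for example,

gcc --version
gcc-4.6 --version

should report the wheezy default (4.7.x) and 4.6.x, respectively.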

OpenBLAS
Note that links to specific OpenBLAS tarballs tend to die after a while, so you might have to download the file manually.

sudo mkdir /opt/openblas
sudo chown ${USER} /opt/openblas
cd ~/tmp
wget http://github.com/xianyi/OpenBLAS/tarball/v0.2.6
tar xvf v0.2.6
cd xianyi-OpenBLAS-87b4d0c/
wget http://www.netlib.org/lapack/lapack-3.4.1.tgz
make all BINARY=64 CC=/usr/bin/gcc FC=/usr/bin/gfortran USE_THREAD=0 INTERFACE64=1 1> make.log 2>make.err
make PREFIX=/opt/openblas install
cp lib*.*  /opt/openblas/lib

add
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openblas/lib
to your ~/.bashrc. For later use with nwchem and ecce, add /opt/openblas/lib to /etc/ld.so.conf and run sudo ldconfig; you might also want to make libopenblas.so and libopenblas.so.0 symlinks to the main lib (here libopenblas_bulldozer-r0.2.6.so -- the exact name depends on which CPU core type the build detected).
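A minimal sketch of those two steps, assuming the build produced libopenblas_bulldozer-r0.2.6.so (adjust the name to whatever your build actually generated):

echo '/opt/openblas/lib' | sudo tee -a /etc/ld.so.conf
cd /opt/openblas/lib
ln -s libopenblas_bulldozer-r0.2.6.so libopenblas.so
ln -s libopenblas_bulldozer-r0.2.6.so libopenblas.so.0
sudo ldconfig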

Single-precision gromacs 4.6 with both CPU and GPU:

CUDA
If you have an nvidia card and want to enable GPU calcs, do
sudo apt-get install nvidia-cuda-toolkit gcc-4.6 g++-4.6

If /usr/lib/libcuda.so is nothing but a symlink to /usr/lib/libcuda.so.1, and the file /usr/lib/libcuda.so.1 is missing (this was the case on my wheezy amd64), then do
sudo rm /usr/lib/libcuda.so
sudo ln -s /usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/libcuda.so

You can also simply make sure that there's no /usr/lib/libcuda.so at all.
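To see what state the links are in before changing anything, you can do e.g.

ls -l /usr/lib/libcuda.so* /usr/lib/x86_64-linux-gnu/libcuda.so*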

Continue with the gromacs compilation:
cd ~/tmp
sudo apt-get install cmake
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.tar.gz
tar xvf gromacs-4.6.tar.gz
mkdir build_gromacs46
cd build_gromacs46
sudo mkdir /opt/gromacs
sudo chown ${USER} /opt/gromacs
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/openmpi/lib:/opt/openblas/lib
export LDFLAGS="-L/opt/openblas/lib -lopenblas"
export CPPFLAGS="-I/opt/openblas/include"
export CC=/usr/bin/gcc-4.6 && export CXX=/usr/bin/g++-4.6 && cmake -DGMX_FFT_LIBRARY=fftw3 -DGMX_BUILD_OWN_FFTW=On -DGMX_DOUBLE=off -DCMAKE_INSTALL_PREFIX=/opt/gromacs/gromacs4.6_single -DGMX_EXTERNAL_BLAS=/opt/openblas/lib ../gromacs-4.6
make
make install
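
To confirm that the single-precision build actually picked up CUDA, one quick check is what mdrun links against:

ldd /opt/gromacs/gromacs4.6_single/bin/mdrun | grep -i cuda

If GPU support made it in, libcuda/libcudart should show up in the output.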

Note: for acml I used this instead:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/openmpi/lib:/opt/acml/acml5.2.0/gfortran64_fma4_int64/lib
export LDFLAGS="-L/opt/acml/acml5.2.0/gfortran64_fma4_int64/lib -lacml"
export CPPFLAGS="-I/opt/acml/acml5.2.0/gfortran64_fma4_int64/include"
export CC=/usr/bin/gcc-4.6 && export CXX=/usr/bin/g++-4.6 && cmake -DGMX_FFT_LIBRARY=fftw3 -DGMX_BUILD_OWN_FFTW=On -DGMX_DOUBLE=off -DCMAKE_INSTALL_PREFIX=/opt/gromacs/gromacs4.6_single -DGMX_EXTERNAL_BLAS=/opt/acml/acml5.2.0/gfortran64_fma4_int64/lib ../gromacs-4.6


Double-precision gromacs without GPU acceleration:

cd ~/tmp/build_gromacs46
rm -rf *
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/openmpi/lib:/opt/openblas/lib
export LDFLAGS="-L/opt/openblas/lib -lopenblas"
export CPPFLAGS="-I/opt/openblas/include"
export CC=/usr/bin/gcc-4.6 && export CXX=/usr/bin/g++-4.6 && cmake -DGMX_FFT_LIBRARY=fftw3 -DGMX_BUILD_OWN_FFTW=On -DGMX_DOUBLE=on -DGMX_GPU=off -DCMAKE_INSTALL_PREFIX=/opt/gromacs/gromacs4.6_double -DGMX_EXTERNAL_BLAS=/opt/openblas/lib ../gromacs-4.6
make
make install

Note: for acml I used this instead:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/openmpi/lib:/opt/acml/acml5.2.0/gfortran64_fma4_int64/lib
export LDFLAGS="-L/opt/acml/acml5.2.0/gfortran64_fma4_int64/lib -lacml"
export CPPFLAGS="-I/opt/acml/acml5.2.0/gfortran64_fma4_int64/include"
export CC=/usr/bin/gcc-4.6 && export CXX=/usr/bin/g++-4.6 && cmake -DGMX_FFT_LIBRARY=fftw3 -DGMX_BUILD_OWN_FFTW=On -DGMX_DOUBLE=on -DGMX_GPU=off -DCMAKE_INSTALL_PREFIX=/opt/gromacs/gromacs4.6_double -DGMX_EXTERNAL_BLAS=/opt/acml/acml5.2.0/gfortran64_fma4_int64/lib ../gromacs-4.6

Add gromacs to path:
echo 'export PATH=$PATH:/opt/gromacs/gromacs4.6_single/bin:/opt/gromacs/gromacs4.6_double/bin' >> ~/.bashrc
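
After reloading the shell you can check that both builds are found; note that with the default suffixing the double-precision tools get a _d suffix:

source ~/.bashrc
which mdrun
which mdrun_d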

Switching between GPU and CPU
You can use the same binary for both, but remember that only the single-precision binaries have GPU support to begin with. To choose between GPU and CPU, use the -nb option in mdrun:
-nb  enum  auto   Calculate non-bonded interactions on: auto, cpu, gpu or gpu_cpu


Quick test:
cd ~/tmp
wget http://www.gromacs.org/@api/deki/files/128/=gromacs-gpubench-dhfr.tar.gz
tar xvf \=gromacs-gpubench-dhfr.tar.gz
cd dhfr/GPU/dhfr-solv-PME.bench
mdrun -nb cpu -s topol.tpr -testverlet

Hit ctrl+c to stop and get statistics. Then try
mdrun -nb gpu -s topol.tpr -testverlet
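
The performance summary also ends up in the log file (md.log by default), so you can e.g. do

tail -n 20 md.log

to compare the runs afterwards.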

I got
XPU       ns/day
-----------------
auto      7.4
GPU       7.7
CPU       4.1
gpu_cpu   7.5

where I have a three-core 3.1 GHz AMD Athlon II X3 445 CPU and an NVIDIA GeForce GT 430 graphics card -- neither of which is anything special.

Note also that the ns/day values depended strongly on how long I let the calculation run, and since I didn't time the runs to be the same length, I suspect that auto, GPU and gpu_cpu are really all about the same.

3 comments:

  1. Hi,

    Very useful. I cannot tell you how much help this post was! I finally managed to install gromacs with GPU support on my cluster containing 2 Tesla K20s (http://www.nvidia.com/object/tesla-workstations.html). Do you have an idea of what kind of performance I should expect? I am getting ~70 ns/day for the dhfr benchmark you suggested. Is this about right in GPU mode? Or am I missing something here and not utilising my processors to the fullest?

    1. Kalai, I don't use the GPU version (or even gromacs much), so I'm not the right person to answer this.

      Have a look at http://www.gromacs.org/Documentation/Installation_Instructions_4.5/GROMACS-OpenMM#Benchmark_results.3a_GROMACS_CPU_vs_GPU

      They've supplied the input files so you should be able to run the benchmark yourself.

  2. Thanks for your reply. I did run the PME testverlet from the link, getting a performance of ~70 ns/day.

    And I cannot directly compare my cluster's performance with the data given there, for three reasons: (i) the gromacs versions are different -- the link shows simulations done with gromacs 4.5, while I am using gromacs 4.6, and rigorous benchmarking of GPU gromacs 4.6 hasn't been done yet; (ii) of all the files given there, only the PME one can be run in 4.6 GPU gromacs -- the other variants haven't been ported to 4.6 yet; (iii) the cards are different -- in the benchmark link the fastest is the GTX 580 with 512 CUDA cores, netting ~18 ns/day, whereas I have two K20s with 2496 CUDA cores each.

    Anyways, I am thinking this is the kind of performance I should be looking at. Thanks and Regards,
