Main Page


Use of the teogpu01 resource

You can find below some general instructions to log in and use the teogpu01 resource for machine learning studies. At present the machine hosts two GPUs:

- NVIDIA Tesla V100, 32 GB

- NVIDIA Tesla V100S, 32 GB

The machine is part of our Genova IT cluster and belongs to Group IV (teo). All members of the teo group can access it at teogpu01. Members of other groups interested in trying or using the resource should contact IT support.

To access the machine, one has to first log into our frontend "linuxge.ge.infn.it" with

ssh username@linuxge.ge.infn.it

Once in the frontend, the teogpu01 machine can be reached with

ssh teogpu01
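
If your local OpenSSH client is recent enough (version 7.3 or later), you can also reach teogpu01 in a single step by adding a ProxyJump entry to your local ~/.ssh/config; the block below is a minimal sketch, with the host alias chosen just as an example:

Host teogpu01
    HostName teogpu01.ge.infn.it
    User username
    ProxyJump username@linuxge.ge.infn.it

After that, ssh teogpu01 from your local machine goes through the frontend automatically.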

The area in /mnt/project_mnt/teo_fs is a disk reserved for the theory group (Group IV), with some write permissions granted to members of other groups that need to use the GPU. In this area you can create your own folder, named after your username.
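
For instance, assuming your username is available in the $USER shell variable, you can create it with

mkdir /mnt/project_mnt/teo_fs/$USER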

These "user" folders should be made private (ask IT support for this).

Moreover, the folder /mnt/project_mnt/teo_fs contains the following folders:

- anaconda3_ml: this contains a working installation of Anaconda 3 shared among all users as explained later;

- Software/texlive: containing an installation of TeX Live 2020 (necessary, for instance, to use LaTeX labels in Matplotlib);

- Software/VSCode-linux-x64: containing Visual Studio Code.

In order to use this software you have to add the corresponding paths to your .bashrc. I suggest adding the following lines to your ~/.bashrc file:

# Set TeXLive paths
export PATH=/mnt/project_mnt/teo_fs/Software/texlive/2020/bin/x86_64-linux:$PATH
export MANPATH=/mnt/project_mnt/teo_fs/Software/texlive/2020/texmf-dist/doc/man:$MANPATH
export INFOPATH=/mnt/project_mnt/teo_fs/Software/texlive/2020/texmf-dist/doc/info:$INFOPATH
# Set VSCode path
export PATH=/mnt/project_mnt/teo_fs/Software/VSCode-linux-x64/bin:$PATH
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/mnt/project_mnt/teo_fs/anaconda3_ml/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/mnt/project_mnt/teo_fs/anaconda3_ml/etc/profile.d/conda.sh" ]; then
        . "/mnt/project_mnt/teo_fs/anaconda3_ml/etc/profile.d/conda.sh"
    else
        export PATH="/mnt/project_mnt/teo_fs/anaconda3_ml/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
# Set path for CUDA libraries
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64/


After adding these lines, either source the .bashrc again with

source ~/.bashrc

or close and re-open the shell (log out and log in again).
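
To quickly verify that the tools are now on your PATH, you can check their versions, for instance with

conda --version
latex --version
code --version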

Now, if everything works fine, you should automatically get the (base) environment activated. You can list available conda environments with 

conda env list

You will see that there are several environments, all starting with some username and all located in /mnt/project_mnt/teo_fs/anaconda3_ml/envs.

The rule we try to respect is: each user creates environments in the /mnt/project_mnt/teo_fs/anaconda3_ml/envs folder, named "username_env_name". To manage (create/delete/etc.) environments, just follow the instructions here:

https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html

Some users (from Group I) experienced the problem that, when creating environments simply with

conda create --name myenv

the environment was created under their home folder. It is better to avoid this, so if you see that your environment was not created in /mnt/project_mnt/teo_fs/anaconda3_ml/envs, just delete it and recreate it using the --prefix option, passing the full path of the environment (note that conda does not accept --name together with --prefix, so the environment name becomes the last component of the path):

conda create --prefix /mnt/project_mnt/teo_fs/anaconda3_ml/envs/username_myenv
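
To delete the misplaced environment first, you can use the standard conda command

conda env remove --name myenv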

The anaconda3_ml folder is configured in such a way that all environments are readable by everybody, so anybody can activate any of them, but only the creator of each environment can modify it. This should simplify collaboration: if someone wants to run someone else's code, they can do so using the creator's environment, which maximizes compatibility.
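
For example, to activate an environment created by another user (otheruser_env is a placeholder name here), you can refer to it by name or, if activation by name does not work on your setup, by its full path:

conda activate otheruser_env
conda activate /mnt/project_mnt/teo_fs/anaconda3_ml/envs/otheruser_env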

teogpu01 has htop and nvtop installed, two tools that allow you to monitor processes on the CPU and GPU, respectively.
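
For a one-shot snapshot of GPU utilization and memory usage, you can also use nvidia-smi, which ships with the NVIDIA driver:

nvidia-smi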

If anybody needs to access files on CERN EOS (for instance, files located on and shared through CERNbox), the EOS file system can be mounted after obtaining a Kerberos ticket with

kinit user@CERN.CH
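
You can verify that the ticket was obtained with the standard Kerberos command

klist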

If anybody needs to use Python interactively with Jupyter notebooks, it is possible to run the notebook server on the remote machine and use it from a local browser (which makes the experience almost identical to running Jupyter on your local machine). This can be done by forwarding ports as follows:

From the local machine:

ssh -J username@linuxge.ge.infn.it -L xxxx:localhost:yyyy username@teogpu01.ge.infn.it

Now that you are logged into teogpu01 with port forwarding, launch a notebook without a browser:

jupyter notebook --no-browser --port=yyyy &

On the local machine, open your browser at

http://localhost:xxxx

and insert the token printed on the command line (or copy/paste the full URL containing the token).

To list running Jupyter servers you can use

jupyter notebook list

and to stop any server

jupyter notebook stop port_number

The xxxx and yyyy above are two port numbers implementing the port forwarding: yyyy is the port the notebook server listens on on teogpu01, while xxxx is the local port your browser connects to. For instance, something like xxxx=7002 and yyyy=7001. If a port is already in use you will get a warning message.
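
Putting it all together with the example ports above: from the local machine

ssh -J username@linuxge.ge.infn.it -L 7002:localhost:7001 username@teogpu01.ge.infn.it

then, on teogpu01,

jupyter notebook --no-browser --port=7001 &

and finally open http://localhost:7002 in your local browser.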

Finally, to keep a good environment consistency, we suggest using conda with the conda-forge channel to install packages whenever possible, and only using pip when there is no other option. To install a package from the conda-forge channel just do

conda install -c conda-forge your_package
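
For example, to install numpy from conda-forge into one of your environments (username_myenv is a placeholder name), first activate it and then install:

conda activate username_myenv
conda install -c conda-forge numpy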