Use of the teogpu01 resource

You can find below some general instructions to log in to and use the teogpu01 resource for machine learning studies. Currently there are two GPUs:

- NVIDIA Tesla V100 32 GB

- NVIDIA Tesla V100S 32 GB
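
Once logged in (see the access instructions below), you can quickly confirm that both GPUs are visible; nvidia-smi is installed together with the NVIDIA driver, so it should be available, but treat this as a sanity check rather than part of the official setup:

# List the GPUs visible on teogpu01
nvidia-smi -L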

The machine is part of our Genova IT cluster and belongs to Group-IV (teo). All members of the teo group can access it at teogpu01. Members of other groups interested in trying or using the resource should contact IT support.

To access the machine, first log into our frontend "linuxge.ge.infn.it" with

ssh username@linuxge.ge.infn.it

Once in the frontend, the teogpu01 machine can be reached with

ssh teogpu01
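
As a convenience, the two hops can be combined in your local ~/.ssh/config using OpenSSH's ProxyJump option (available in OpenSSH 7.3+). A minimal sketch, with username as a placeholder for your account:

# ~/.ssh/config on your local machine (a sketch, not part of the official setup)
Host teogpu01
    HostName teogpu01.ge.infn.it
    User username
    ProxyJump username@linuxge.ge.infn.it

With this entry, ssh teogpu01 from your local machine goes through the frontend automatically.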

The area /mnt/project_mnt/teo_fs is a disk reserved for the theory group (Group IV), with some write permissions granted to members of other groups that need to use the GPU. In this area you can create your own folder, named after your username.

These "user" folders should be made private (ask IT support for this).

Moreover, the folder /mnt/project_mnt/teo_fs contains the following folders:

- anaconda3_ml: containing a working installation of Anaconda 3, shared among all users as explained later;

- Software/texlive: containing an installation of TeXLive 2020 (necessary, for instance, to use LaTeX labels in Matplotlib);

- Software/VSCode-linux-x64: containing Visual Studio Code.

In order to use this software you have to add the corresponding paths to your .bashrc. I suggest adding the following lines to your ~/.bashrc file:

# Set TeXLive paths
export PATH=/mnt/project_mnt/teo_fs/Software/texlive/2020/bin/x86_64-linux:$PATH
export MANPATH=/mnt/project_mnt/teo_fs/Software/texlive/2020/texmf-dist/doc/man:$MANPATH
export INFOPATH=/mnt/project_mnt/teo_fs/Software/texlive/2020/texmf-dist/doc/info:$INFOPATH
# Set VSCode path
export PATH=/mnt/project_mnt/teo_fs/Software/VSCode-linux-x64/bin:$PATH
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/mnt/project_mnt/teo_fs/anaconda3_ml/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/mnt/project_mnt/teo_fs/anaconda3_ml/etc/profile.d/conda.sh" ]; then
        . "/mnt/project_mnt/teo_fs/anaconda3_ml/etc/profile.d/conda.sh"
    else
        export PATH="/mnt/project_mnt/teo_fs/anaconda3_ml/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
# Set path for CUDA libraries
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64/


After adding these lines, either source the .bashrc again with

source ~/.bashrc

or close and re-open the shell (log out and log in again).
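
To verify that the paths were picked up, the following quick checks should all succeed (exact version numbers will differ):

# TeXLive, VSCode and conda should now be on the PATH
which pdflatex
code --version
conda --version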

Now, if everything worked, the (base) environment should be activated automatically. You can list the available conda environments with

conda env list

You will see that there are several environments, all starting with some username and all located in /mnt/project_mnt/teo_fs/anaconda3_ml/envs.

The rule we try to respect is: each user creates environments in the /mnt/project_mnt/teo_fs/anaconda3_ml/envs folder, named "username_env_name". To manage (create/delete/etc.) environments, just follow the instructions here:

https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html

Some users (from Group I) found that, when creating environments simply with

conda create --name myenv

the environment ended up under their home folder. It is better to avoid this, so if your environment was not created in /mnt/project_mnt/teo_fs/anaconda3_ml/envs, just delete it and recreate it using the --prefix option as follows:

conda create --prefix /mnt/project_mnt/teo_fs/anaconda3_ml/envs/username_myenv
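
Note that conda create does not accept --name together with --prefix, which is why the environment name (here the placeholder username_myenv) goes at the end of the path. To activate such an environment, the full path always works; the short name also works if the shared envs folder is registered in conda's envs_dirs:

# Activate by full path (always works)
conda activate /mnt/project_mnt/teo_fs/anaconda3_ml/envs/username_myenv
# Activate by name (works when the envs folder is in conda's envs_dirs)
conda activate username_myenv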

The anaconda3_ml folder is configured so that all environments are readable by everybody, meaning anybody can activate any of them, while only the creator of an environment can modify it. This should simplify collaboration: if someone wants to run someone else's code, they can do so in the owner's environment, maximizing compatibility.

teogpu01 has htop and nvtop installed, two tools for monitoring processes on the CPU and GPU respectively.
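
Both are interactive terminal tools (run htop or nvtop; press q to quit). To check that the GPUs are also visible from inside a conda environment, a one-liner like the following can help, assuming the active environment has PyTorch installed:

# Should print True and the number of visible GPUs (requires PyTorch)
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"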

If anybody needs to access files on CERN EOS (for instance, files stored in or shared via CERNbox), the EOS file system can be mounted after obtaining a Kerberos ticket with

kinit user@CERN.CH
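
You can verify the ticket with klist. Assuming the usual CERN layout, user areas then appear under /eos/user/<first letter>/<username>; the exact mount point on teogpu01 may differ, so ask IT support if /eos is not visible:

# Check the Kerberos ticket obtained by kinit
klist
# Browse your CERNbox/EOS user area (path follows the CERN convention; username is a placeholder)
ls /eos/user/u/username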

If anybody needs to use Python interactively with Jupyter notebooks, it is possible to run the notebook server on the remote machine and use it from a local browser (which makes the experience almost identical to running Jupyter on your local machine). This can be done by forwarding ports as follows:

From the local machine:

ssh -J username@linuxge.ge.infn.it -L xxxx:localhost:yyyy username@teogpu01.ge.infn.it
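
For instance, with the example port numbers given below (xxxx=7002, yyyy=7001):

ssh -J username@linuxge.ge.infn.it -L 7002:localhost:7001 username@teogpu01.ge.infn.it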

Now that you are logged into teogpu01 with port forwarding, launch a notebook without a browser:

jupyter notebook --no-browser --port=yyyy &
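
Launched this way, the server stays tied to your SSH session. If you want it to survive a disconnect, one common option (a suggestion, not a site policy) is nohup; ~/jupyter.log here is just an example log location:

# Keep the notebook server running after the SSH session closes
nohup jupyter notebook --no-browser --port=yyyy > ~/jupyter.log 2>&1 &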

On the local machine, open your browser at

http://localhost:xxxx

and insert the token printed on the command line (or copy/paste the full URL including the token).

To list running Jupyter servers you can use

jupyter notebook list

and to stop any server

jupyter notebook stop port_number

The xxxx and yyyy above are two port numbers implementing the port forwarding, for instance xxxx=7002 and yyyy=7001. If a port is already in use you will get a warning message.

Finally, to keep the environments consistent, we suggest using conda with the conda-forge channel to install packages whenever possible, and only using pip when there is no other option. To use conda with the conda-forge channel just do

conda install -c conda-forge your_package
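
For example, inside your activated environment (the package names here are placeholders):

# Preferred: install from the conda-forge channel
conda install -c conda-forge numpy
# Fallback, only when a package is not available on conda-forge
pip install some_pip_only_package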