Docs Farm General Purpose



The local INFN Computing Service maintains a general purpose farm for HTC (serial computing) based on the x86_64/Linux architecture. The local farm is made up of resources contributed by all research groups, and implements a sharing configuration to optimize resource utilization.

Farm front end nodes

To access the local farm facility users need to login to one of the front-end nodes using an ssh client:

  • farmuisl6.ge.infn.it [Scientific Linux 6]
  • farmuisl7.ge.infn.it [CentOS 7]
  • linuxge.ge.infn.it [redirects to the most up-to-date login server]

The login servers must be used to submit batch jobs, and can be used to perform short interactive work (compilation, test runs). Front end nodes do not have a large amount of hardware resources and must not be used to execute long or intensive jobs.
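
For example, to open a login session on the CentOS 7 front end (replace <username> with your account name):

# ssh <username>@farmuisl7.ge.infn.it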

Storage resources

Every node of the farm (front end or execution host) can access the following shared storage resources:

  • /home/<username>/: home directory of users [daily backup available when size < 50 GB, don't use it for data]
  • /project/<group>/: storage areas owned by research groups [private]
  • /farmdisk1/: area available to users [useful for input/output data]
    create a personal area there (# mkdir /farmdisk1/<username>), protect it if needed (# chmod 700 /farmdisk1/<username>), and use it.
  • /project/software/: data area that stores general purpose software (gcc, root, ... see details here) [read-only]

Every node also provides a local storage area for temporary files:

  • /scratch

Please try to keep this area clean, even in case of job failure.
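
A minimal sketch of a job script that follows this rule is shown below: it creates a private directory under /scratch and removes it on exit, even if the job fails. The program and file names are placeholders; LSB_JOBID is set by LSF at run time.

#!/bin/bash
# create a private temporary area under /scratch
SCRATCHDIR=/scratch/${USER}_${LSB_JOBID:-$$}
mkdir -p "$SCRATCHDIR"
# remove the area on exit, even if the job fails
trap 'rm -rf "$SCRATCHDIR"' EXIT

# real work goes here, keeping temporary files in $SCRATCHDIR
cp /farmdisk1/$USER/input.dat "$SCRATCHDIR"/
./my_analysis "$SCRATCHDIR"/input.dat > /farmdisk1/$USER/output.dat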

Queues

The local farm is managed by a resource manager (currently IBM LSF). Users can submit jobs using public or private (group) queues.

Available queues for HTC computing are:

  • long: available to all users, can use any host [limit: 48h elapsed time]
  • medium: available to all users, can use any host [limit: 4h elapsed time]
  • private group queues: aiace, g2, geant4, infne, lhcb, teo, totem
    available users: only members of the specific research group
    available hosts: only hosts owned by the specific research group
    resource limits: none

Use the bqueues command to see all configured queues (some of them are for HPC purposes and are not publicly available).
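
For example, to list all queues and then show the detailed configuration of the long queue:

# bqueues
# bqueues -l long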

Job submission

Jobs must be submitted using the bsub command:

# bsub [-q <queue>] [-P <arch>] [-oo <outfile>] [-eo <errfile>] [-R <resource>] [-M <max mem>] <script-to-submit>
  • -q <queue>: specify the desired queue (default: medium)
  • -P <arch>: use it to specify the execution host architecture
    available choices: sl6 for Scientific Linux 6, c7 for CentOS 7, default: none (run on any arch)
  • -oo <outfile> -eo <errfile>: specify the output and error files where stdout and stderr will be saved
  • -R <resource>: use it to specify the resources needed by the job, at startup and during execution, so that the resource manager can select the right host to run the job and guarantee resource availability for the whole life of the job. Examples:
    • -R "select[mem > 8192] rusage[mem = 8192]": the candidate host must have at least 8 GB of RAM available, and the job will use 8 GB of RAM for the duration of the execution (so the resource manager can consider that amount of RAM allocated to your job).
    • -R "span[ptile=Nh]": used in conjunction with -n <N>, usually for HPC jobs; it specifies that the resource manager must allocate N slots from one or more hosts, with at least Nh slots on each host.
      • -n 4 -R "span[ptile=4]": the right way to specify that your job will spawn 4 subprocesses or threads on a single host
  • -M <max-mem>: specify that the job will use no more than <max-mem> MB of RAM; the resource manager will kill jobs that exceed that amount of memory.
  • <script-to-submit>: can be an executable or (much better) a script that executes all needed commands.
    Warning: the script must be reachable through the PATH or from the current working directory, and must be executable.

Memory sizes (in the -R or -M options) must be specified in MB. Warning: public queues are configured with 1024 MB as the default value for <max-mem>. If you don't specify -M for your job, 1024 MB will be the limit.
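
Putting the options together, two example submissions (queue names, output files and script names are only placeholders): a plain serial job restricted to CentOS 7 hosts, and a job that spawns 4 threads on a single host.

# bsub -q long -P c7 -oo myjob.out -eo myjob.err ./myjob.sh
# bsub -q medium -n 4 -R "span[ptile=4]" -oo mtjob.out -eo mtjob.err ./mtjob.sh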

More details on resource usage specification

  • -R "select[mem > 4096]": select candidate execution host from those that have 4 GB of RAM available when starting the job.
  • -R "rusage[mem = 8192]": after starting the job, consider 8 GB of RAM allocated for that job, so don't select the same host for jobs that plans to use an amount of memory incompatible with this allocation.

Memory amounts specified with -R are used only to let the resource manager choose the right execution host, avoiding low memory situations that could trigger the OOM killer on the execution hosts; those values are not imposed as a limit on jobs.

  • -M <max-mem>: this defines a memory usage limit for the whole job; a job that exceeds this usage will be killed. This prevents program bugs from causing excessive memory usage and impacting other jobs running on the same execution host.

Please always use -R "select[mem > <M>] rusage[mem = <M>]" -M <M> to allow the resource manager complete memory management.
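
For example, for a job expected to use up to 8 GB of RAM (the script name is a placeholder):

# bsub -q long -R "select[mem > 8192] rusage[mem = 8192]" -M 8192 ./myjob.sh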

Job, queue and host control

Users can use the following commands to view and manage their jobs, and to visualize host and queue status.

  • job status report: bjobs for pending/running jobs, bhist and bacct for completed jobs
  • job control: bkill (job termination), bstop and bresume (temporary stop and restart), bmod (modification of job parameters)
  • queue status: bqueues
  • host status: bhosts and bmgroup, the latter to view host groups, defined by architecture (type_<arch>) or by research group (grp_<group>)

For all LSF commands, refer to online man pages (man <command>) and to the reference manual lsf_command_ref.pdf.
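
A few typical invocations (the job ID 12345 is only a placeholder):

  • # bjobs -u $USER: list your own pending and running jobs
  • # bjobs -l 12345: detailed information on a single job
  • # bkill 12345: terminate job 12345
  • # bhist -l 12345: show the history of a finished job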


Available software

Compiler: gcc

You can find different versions of the gcc compiler in /project/software/gcc/<version>/<distrib>/

Source the setup.sh or setup.csh file that you can find in the root of the compiler version you need.
For example, to use gcc 8.2.0 on CentOS 7:

# source /project/software/gcc/8.2.0/x86_64-centos7/setup.sh
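
After sourcing the setup script, the selected compiler is the first one found in the PATH of the current shell; you can verify it and compile as usual (myprog.c is a placeholder source file):

# gcc --version
# gcc -O2 -o myprog myprog.c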

Compiler: Intel

Different versions of the Intel compilers (icc/icpc/ifort) are available.
You can find them in the /opt/intel directory. Source the usual startup file to activate a particular version of the Intel compiler. For example:

# source /opt/intel/parallel_studio_xe_2019.4.070/psxevars.sh intel64

to activate Parallel Studio version 2019 on x86_64 architecture.
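
As with gcc, you can verify which Intel compilers are active in the current shell:

# icc --version
# ifort --version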


Root

You can find different versions of ROOT in /project/software/root/<version>/<distrib>. Source the usual setup script to activate your desired version:

# source /project/software/root/6.18.00/x86_64-centos7-gcc48-opt/bin/thisroot.sh

to use ROOT 6.18 built with gcc 4.8 on CentOS 7.
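
After sourcing thisroot.sh you can check the selected version or run a macro in batch mode (mymacro.C is a placeholder):

# root-config --version
# root -b -q mymacro.C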

Mathematica

INFN Genova has a site license for Wolfram Mathematica package.
All front end and execution host nodes can run batch jobs using Mathematica. You do not need a particular setup; it should be inherited from the system startup files. Just try

# echo Quit | math

on a front end node to see the currently installed version.
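
To run Mathematica in batch through LSF, a sketch of a possible submission is the following, assuming a Wolfram Language script named myscript.wl (a placeholder) and that the installed version supports the -script kernel option:

# bsub -q medium -oo math.out -eo math.err "math -script myscript.wl"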