Default Resources

SLURM Cluster

48 Broadwell nodes: node101 ... node148 (each 64 threads, 256GB of memory)

2 SMP nodes: genosmp02 (96 threads, 1.5TB of memory), genosmp03 (192 threads, 3TB of memory)

1 visualization node: genoview (64 threads, 128GB of memory, Nvidia K40 graphics card)

/home/user: 5GB available to store your configuration files.
/work/user: 1TB available as a working directory, with read/write access from any cluster node. Files are automatically deleted if they have not been accessed within the last 120 days (to list them: find directory/ -atime +120).
/save/user: 250GB available for data you want to keep, with 30 days of retention. You have read-only access to this directory from any cluster node.
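The 120-day cleanup on /work can be anticipated with a small helper around the find command mentioned above; a minimal sketch (the stale_files name is illustrative, and /work/username is a placeholder path):

```shell
# List regular files that have not been accessed in the last 120 days,
# i.e. the candidates for automatic deletion on /work.
# Usage: stale_files /work/username
stale_files() {
    find "$1" -type f -atime +120
}
```

Running it periodically lets you touch or move files you still need before they are removed.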


If you need more space in /work or in /save, you are invited to fill in the resources request form.


/usr/local/bioinfo/src/: directory gathering all bioinformatics software (see Software FAQ)
/bank: biological databanks in different formats (see Databanks FAQ)

To check your disk usage, on /work:

mmlsquota -u username --block-size G

On /save and /home:

du -csh --apparent-size /save/username/ (use /save/username/* for details)
du -csh --apparent-size /home/username/ (use /home/username/* for details)
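The du invocation above can be wrapped in a small function for convenience; a sketch (the quota_usage name is illustrative, and the paths are placeholders):

```shell
# Report the apparent size (sum of file sizes, not allocated blocks)
# of a directory, as counted for the /save and /home quotas.
# Usage: quota_usage /save/username
quota_usage() {
    du -csh --apparent-size "$1"
}
```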

Academic account quota: 100,000 CPU hours per calendar year.
Beyond these 100,000 hours, you will need to submit a science project (via the resources request form) so that your real needs of the bioinformatics environment can be estimated.

Depending on the results of this evaluation, and on their geographical and institutional origin, users may either continue their computations, be invited to contribute financially to the infrastructure, or be redirected to regional or national computing centers (mésocentres).

Non-academic account quota: 500 CPU hours per calendar year for testing the infrastructure.
Compute time beyond this quota will be charged (price on request).

 

To know your quota, use the command:
squota_cpu

Without any parameters, in any partition, all jobs are limited to:

  • 2GB (memory)
  • 1 CPU (thread)
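To request more than these defaults, set the resources explicitly in your submission script; a sketch (my_command is a placeholder for your actual program):

```shell
#!/bin/bash
#SBATCH --mem=8G            # override the 2GB default memory limit
#SBATCH --cpus-per-task=4   # override the 1 CPU default
#SBATCH --partition=workq
#SBATCH --time=01:00:00

my_command                  # placeholder for your actual program
```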

The maximum number of slots (CPUs) you can use depends on the status of your Linux group (contributors, INRA and/or REGION, others):

Max slots      workq (group)   workq (user)   unlimitq (all users sum)   unlimitq (user)
Contributors   5036            768            500                        125
INRA/Region    3780            576            500                        94
Others         1258            192            500                        31

To know the status and the limits of your account:

saccount_info login (see the "Status of your Linux primary group in Slurm" field)

The maximum amount of memory you can use also depends on the status of your Linux group (contributors, INRA and/or REGION, others):

Max memory     workq (group)   workq (user)   unlimitq (all users sum)   unlimitq (user)
Contributors   34T             4T             2T                         500G
INRA/Region    26T             3T             2T                         376G
Others         9T              1.5T           2T                         124G


Max jobs per user             2500
Max jobs for all users        10000
Max task array size per job   2501
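Within these limits, a task array is submitted with the standard sbatch --array option; a sketch (job.sh is a placeholder script):

```shell
# Submit an array of 100 tasks (well under the 2501 per-job limit);
# each task can read its index from $SLURM_ARRAY_TASK_ID.
sbatch --array=1-100 job.sh
```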

Slurm partition   MaxTime
workq             96H (4 days)
interq            48H (2 days)
unlimitq, smpq    180 days (6 months)
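A job that needs more than the 4-day workq limit must be sent to unlimitq (or smpq) explicitly; a sketch (job.sh is a placeholder script):

```shell
# Request the unlimitq partition with a 10-day time limit
# (within its 180-day maximum). Time format is days-hours:minutes:seconds.
sbatch --partition=unlimitq --time=10-00:00:00 job.sh
```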

Useful account information

The saccount_info login command gives you useful information about your account, such as:

  • account expiration date and last password change date (every year)
  • your primary Linux group
  • your secondary Linux groups, if you have any
  • the status of your Linux primary group in Slurm (contributors, inraregion or others)
  • the members of your groups
  • some Slurm limitations of your account

squeue long format

sq_long : squeue verbose with details for: JOBID NAME USER QOS PARTITION NODES CPUS MIN_MEMORY TIME_LIMIT TIME_LEFT STATE NODELIST REASON. Can take all squeue options. See the --help option for help.

sq_debug : squeue verbose for debug. Show COMMAND and WORKDIR for a job. Can take all squeue options. See --help option for help.

sq_run : sq_debug for running jobs. Can take all squeue options. See --help option for help.

sq_pend : sq_debug for pending jobs. Can take all squeue options. See --help option for help.
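For reference, the wrappers above can be approximated with standard squeue options; a sketch (the exact columns of the local wrappers may differ):

```shell
# Roughly the sq_long column set, using squeue format specifiers:
# %i=JOBID %j=NAME %u=USER %q=QOS %P=PARTITION %D=NODES %C=CPUS
# %m=MIN_MEMORY %l=TIME_LIMIT %L=TIME_LEFT %T=STATE %N=NODELIST %R=REASON
squeue -o "%i %j %u %q %P %D %C %m %l %L %T %N %R"

# Restrict to your own running or pending jobs, as sq_run / sq_pend do:
squeue -u "$USER" -t RUNNING
squeue -u "$USER" -t PENDING
```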

sacct long format

sa_debug : sacct verbose. Can take all sacct options. See --help option for help.