Bypass guillimin problems


Good things to know about Guillimin

General

  • 'xrec' does not work on guillimin.
  • 'xxdiff' does not work on guillimin. You can use 'tkdiff' or 'tkdiff+' instead.

Queues

Jobs using less than 12 cores will go to the 'sw' queue.

Jobs using 12 cores or more will use full nodes only (12 cores per node) and go to the 'hb' queue. This means that a job submitted to run on 16 cores will actually run on 2 nodes = 24 cores.

One can force all jobs to go to a certain queue by setting SOUMET_EXTRAS, for example: export SOUMET_EXTRAS="-q hb"
SOUMET_EXTRAS can be exported in the batch profile '~/.profile.d/.batch_profile'.
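
For example, to send all jobs to the 'hb' queue by default, the line could be added there. This is only a minimal sketch, assuming '~/.profile.d/.batch_profile' is sourced as a normal sh-style profile when jobs are submitted:

  # in ~/.profile.d/.batch_profile
  export SOUMET_EXTRAS="-q hb"    # force all submitted jobs into the 'hb' queue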


When the model gets stuck

Unfortunately guillimin is not a very "stable" machine and jobs often crash or get stuck.

Therefore, for longer simulations, you should be using 'Chunk_lance' instead of 'Um_lance' to run your simulation.

When using 'Chunk_lance' each model job (month) can automatically get executed up to 5 times before the whole chunk job stops.

Therefore, in case the model gets stuck, there is no need to delete the whole chunk job with 'qdel'; rather, try the following first:

Log on to the main node on which the model is running (the MasterHost). You will find the name of this node in the last column of the 'qs' output. Connect to this node with 'ssh':

  ssh $MasterHost

Once on the node, check which processes are running with:

  ps -fu $USER

Look for the 'mpiexec' job, something like:
  mpiexec -npernode 4 --bind-to-core --cpus-per-proc 3 -n 36 ././POE_SCRIPT_25215

Then kill this process with 'kill -9' followed by the PID of the mpiexec process:

  kill -9 $PID

Then the month which got stuck should automatically restart from the beginning.

Only if this does not work should you kill the whole job with the usual 'qdel'.
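
The steps above can also be combined into a small helper, shown here only as a sketch: it assumes your job is on the last line printed by 'qs', that the last column really is the MasterHost, and that 'pkill' is available on the compute nodes.

  # Hypothetical helper - adapt before use
  MasterHost=$(qs | tail -1 | awk '{print $NF}')   # main node, taken from the last column of 'qs'
  ssh $MasterHost "pkill -9 -u $USER mpiexec"      # kill your 'mpiexec' process on that node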

Avoiding trouble on guillimin

Here are a few tricks to bypass some of the machine's problems.


1) Have $TMPDIR under /localscratch

Instead of being under /tmp, $TMPDIR will then be under /localscratch on nodes where this file system exists (the compute nodes):
  mkdir ~/tmp
  cd ~/tmp
  ln -s /localscratch guillimin
  ln -s /localscratch localhost
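
To check that this took effect, one could, for example, resolve $TMPDIR from within a job on a compute node (a sketch, assuming your environment builds $TMPDIR from the symlinks created above):

  echo $TMPDIR            # the path as set by the environment
  readlink -f $TMPDIR     # should resolve to a directory under /localscratch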

2) Make the model kill itself when it gets stuck

Go into the directory in which you create your executables.

Copy the following file:
  cp /home/winger/gem/v_3.3.3/Abs/CORDEX/dead_process_timer.c .

Create the corresponding *.o:

  333

  r.make_exp

(Ignore the warnings:
WARNING: file clib_interface.cdk not found
WARNING: file pthread.h not found
WARNING: file stdio.h not found
WARNING: file stdlib.h not found
WARNING: file unistd.h not found)

  r.compile -src dead_process_timer.c
  mv dead_process_timer.o malibLinux_x86-64_pgi11xx

You can also simply copy the pre-compiled object file instead:

  cp /home/winger/gem/v_3.3.3/Abs/CORDEX/malibLinux_x86-64_pgi11xx/dead_process_timer.o malibLinux_x86-64_pgi11xx


Edit the routine 'gem_run.ftn'.
If you do not have it in your directory yet, get it from the environment:
  333
  omd_exp gem_run.ftn

In 'gem_run.ftn', before the beginning of the time step loop:
        do istep = step0, stepf
add the line:
      call start_dead_process_timer(60)
At the beginning of each time step, just after the line:
        do istep = step0, stepf
add the line:
         call I_am_alive()

So you will end up having something like this:
:
      call itf_cpl_fillatm
*
      call start_dead_process_timer(60)
*
      do istep = step0, stepf
*
         call I_am_alive()
*
         Lctl_step = istep
:

Create the object file:
  make gem_run.o
and the model executable:
  make gemclimdm

Once you have done this, the model will kill itself whenever a new time step has not been completed within the last 60 seconds.
If you think your model will usually take more than 60 seconds to compute one time step, increase the value in "call start_dead_process_timer(60)" from 60 to whatever you think is adequate (for example 300 for a five-minute limit).