Current version dated 27 October 2020 at 15:59
Set up the model environment
You have to do this only once.
But before you set up the model environment on Compute Canada clusters, make sure you have set up the SSM environment. If you have not done this already, click here for instructions.
On the UQAM servers this has already been done during the creation of your account.
Set up the model environment:
- Allow 'ssh $TRUE_HOST' without typing a password:
cd ~/.ssh
If the directory ~/.ssh does not exist execute the command:
ssh localhost
Type in your password and, once you are connected, 'exit' again. You should now have the directory ~/.ssh.
Only if there is no(!) file 'id_rsa.pub' create it with:
ssh-keygen (press just 'Enter' whenever asked a question, 3 times)
then
cat id_rsa.pub >> authorized_keys
chmod 644 ~/.ssh/authorized_keys
- Add the following options to your ~/.ssh/config so that you do not get kicked out of a session:
cat >> ~/.ssh/config << EOF
ForwardX11 no
stricthostkeychecking=no
ServerAliveInterval=15
ServerAliveCountMax=3
TCPKeepAlive=yes
UserKnownHostsFile=/dev/null
EOF
chmod 644 ~/.ssh/config
- Create a "host" with the name of the Compute Canada machine you are working on, pointing to 'localhost'.
For example for Beluga create the following entry in your ~/.ssh/config (put lower case names):
cat >> ~/.ssh/config << EOF
#
Host beluga
Hostname localhost
EOF
- Set ‘umask 022’
On newer Compute Canada clusters, like Cedar and Beluga, by default, everybody within the same group can modify and remove your data under the project space. To prevent this from happening set:
umask 022
If you did step 1) above, this command will already be in your ~/.profile.d/.group_profile but you should add it to your file:
~/.profile.d/.batch_profile
Create the file if you do not have it.
- Set the core allocation name you want to use.
Create a file in your HOME called:
~/.Account
This file must contain the project name under which you want to run, nothing else. For users who have their account via:
René Laprise or Julie Theriault this is:
rrg-laprise
Francesco Pausata this is:
rrg-pausata
For example:
$ cat ~/.Account
rrg-laprise
- Create “storage_model”
When creating your executables, the object files, extracted decks, and executables will be put in a directory called ${storage_model}. A link to this place will automatically be added from the directory in which you create the executables.
The variable “storage_model” must be set to a directory in which you have space (therefore not under your home). Because of the small quota for the number of files under the project spaces, and because these files are rather small, set this variable to a place under the default (def-professor) space.
These are the usernames of our professors on the Compute Canada clusters:
René Laprise : laprise
Pierre Gauthier : gauthie2
Francesco Pausata: pausata
Julie Theriault : jtheriau
Alejandro Di Luca: adl561
First create the “storage_model” directory, for example with:
mkdir -p ~/projects/def-professor/${USER}/Storage_Model
(You need to replace ‘professor’ by the username of your professor.)
Then you need to export the variable 'storage_model', set to the directory you just created:
export storage_model=~/projects/def-professor/${USER}/Storage_Model
You need to export this variable in the following two profiles:
~/.profile.d/.interactive_profile
~/.profile.d/.batch_profile
Then you need to create the following symbolic link:
ln -s ~/.profile.d/.batch_profile ~/.profile.d/.ssh_profile
- Create directories for running CRCM5:
Create the following directories in your home; they must be links to a place outside your home:
~/MODEL_EXEC_RUN/${TRUE_HOST}
~/listings/${TRUE_HOST}
Unfortunately, the quota for the number of files of the project spaces on Beluga is very small. Therefore, please, link these two directories to your scratch space:
mkdir -p /scratch/${USER}/EXECDIR ~/MODEL_EXEC_RUN
ln -s /scratch/${USER}/EXECDIR ~/MODEL_EXEC_RUN/${TRUE_HOST}
mkdir -p /scratch/${USER}/Listings ~/listings
ln -s /scratch/${USER}/Listings ~/listings/${TRUE_HOST}
Since this is “scratch” space, all files in the above directories will get deleted. Read more about Compute Canada’s scratch policy on the web:
https://docs.computecanada.ca/wiki/Scratch_purging_policy
- Open your directories for your group.
If one day you would like Michel Valin or myself to help you with anything on Beluga, it would be really helpful if you gave your group read and execute access to your home, project spaces, and scratch space.
For the project spaces you can simply use ‘chmod’:
chmod g+rx ~/projects/def-professor/${USER}
chmod g+rx ~/projects/rrg-professor/${USER}
For your home and scratch space (for which the ‘group’ is yours and not the project group) you can do this using ACLs. For example:
setfacl -m g:def-professor:r-x ~/.
setfacl -m g:def-professor:r-x /scratch/${USER}
As above, you need to replace ‘professor’ by the username of your professor.
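Since the steps above are one-time shell commands, they can be collected into a single script. The following is only a sketch, assuming a POSIX shell on a Compute Canada cluster; PROFESSOR and CLUSTER are placeholders you must replace with your own values, and the profile edits (umask 022 and exporting storage_model in ~/.profile.d/.interactive_profile and ~/.profile.d/.batch_profile) still have to be done by hand, as do the ‘chmod’ and ‘setfacl’ commands for your group.

```shell
#!/bin/sh
# Sketch of the one-time model environment setup described above.
# PROFESSOR and CLUSTER below are assumptions: replace them with the
# username of your professor and the (lower-case) name of your cluster.
set -e

PROFESSOR=laprise   # e.g. laprise, gauthie2, pausata, jtheriau, adl561
CLUSTER=beluga      # e.g. beluga, cedar

# Step 1: allow 'ssh $TRUE_HOST' without a password.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
if [ ! -f ~/.ssh/id_rsa.pub ]; then
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # no passphrase, no prompts
fi
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys

# Steps 2 and 3: keep-alive options plus a local "host" alias.
cat >> ~/.ssh/config << EOF
ForwardX11 no
stricthostkeychecking=no
ServerAliveInterval=15
ServerAliveCountMax=3
TCPKeepAlive=yes
UserKnownHostsFile=/dev/null
#
Host ${CLUSTER}
Hostname localhost
EOF
chmod 644 ~/.ssh/config

# Step 5: core allocation name (project name only, nothing else).
echo "rrg-${PROFESSOR}" > ~/.Account

# Step 6: create the storage_model directory under the def- space.
mkdir -p ~/projects/def-${PROFESSOR}/${USER}/Storage_Model

# Step 7: run directories linked to scratch (assumes $USER and
# $TRUE_HOST are set, as they are on the clusters).
if [ -d /scratch ]; then
    mkdir -p /scratch/${USER}/EXECDIR ~/MODEL_EXEC_RUN
    ln -s /scratch/${USER}/EXECDIR ~/MODEL_EXEC_RUN/${TRUE_HOST}
    mkdir -p /scratch/${USER}/Listings ~/listings
    ln -s /scratch/${USER}/Listings ~/listings/${TRUE_HOST}
fi
```

Running the script twice is mostly harmless (the key is only generated once and mkdir -p is idempotent), but the appends to ~/.ssh/config and ~/.ssh/authorized_keys would duplicate their lines, so run it only once.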