====== NorESM1-M for external users ======
  
----
===== Access =====
  
To request model access, please send a request to //noresm-ncc@met.no// with some background information on how you plan to use the model.
  
  
===== Installing =====
  
Untar //NorESM1-M_sourceCode_vXXX.tar.gz// in your home directory. This will install the model source code in the folder //$HOME/NorESM1-M//.
  
Untar //NorESM1-M_inputdata_vXXX.tar.gz// to a folder that is available during run-time and has at least 70 GB of free space. This will create a sub-folder //inputdata// and install NorESM's forcing data and other boundary conditions in it.
  
Untar //NorESM1-M_initialPiHist1_vXXX.tar.gz// to a folder that is available during run-time and has at least 2 GB of free space. This will install restart conditions for the CMIP5 piControl experiment and the first member of the historical experiment.
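As an illustration, assuming the tarballs were downloaded to //$HOME/downloads// and //$WORKDIR// points to a disk area visible at run-time (both names are hypothetical), the installation could look like this:

<code>
# Hypothetical paths -- adjust to your system
cd $HOME
tar -xzf $HOME/downloads/NorESM1-M_sourceCode_vXXX.tar.gz       # creates $HOME/NorESM1-M

cd $WORKDIR                                                     # needs >= 70 GB free space
tar -xzf $HOME/downloads/NorESM1-M_inputdata_vXXX.tar.gz        # creates $WORKDIR/inputdata

tar -xzf $HOME/downloads/NorESM1-M_initialPiHist1_vXXX.tar.gz   # restart files (needs ~2 GB)
</code>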
  
----
===== Porting =====
==== Machine name and characteristics ====
  
Define the characteristics of your HPC system in //NorESM1-M/scripts/ccsm_utils/Machines/config_machines.xml//.
  
Edit the following settings in the machine definition entry for your system (an illustrative sketch of a complete entry is given below the descriptions):
  
Set //MACH// to an acronym/name tag of your choice for your HPC system. Update //DESC// correspondingly.
  
Set //EXEROOT// to the root location where the model should create the build and run directories. The model will replace //$CCSMUSER// with your unix user and //$CASE// with the specific case name (i.e. the simulation/experiment name).
  
Set //DIN_LOC_ROOT_CSMDATA// to the location where you installed the forcing and boundary condition data (this path should end with "/inputdata").
  
Set //DOUT_S_ROOT// to the root location for short-term archiving (normally on the work disk area of the HPC system).
  
Optionally, set //DOUT_L_MSROOT// to the root location for long-term archiving (normally a location that is not subject to automatic deletion). Leave it unchanged if you don't plan to use automatic long-term archiving (e.g., if you want to move the data manually).
  
Optionally, set //DOUT_L_MSHOST// to the name of the (remote) server for long-term archiving. Leave it unchanged if you don't plan to use automatic long-term archiving.
  
Set //BATCHSUBMIT// to the submit command on your HPC system.
  
Set //GMAKE_J// to the number of make instances run in parallel when building the system. Set it to 1 if licence or memory issues occur.
  
Set //MAX_TASKS_PER_NODE// to the maximum number of MPI tasks that can run on a single node on your HPC system (usually the same as the number of cores per node).
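Putting these settings together, a machine entry could look roughly as follows. This is only an illustrative sketch: the machine tag //mymach// and all paths are made up, and the exact attribute layout and full attribute set should be copied from an existing entry (e.g. //vilje//) in //config_machines.xml//.

<code>
<!-- Illustrative sketch only: "mymach" and all paths are hypothetical;
     copy the full attribute set from an existing entry such as "vilje". -->
<machine MACH="mymach"
         DESC="Example Linux cluster with Intel compilers"
         EXEROOT="/work/$CCSMUSER/noresm/$CASE"
         DIN_LOC_ROOT_CSMDATA="/work/shared/noresm/inputdata"
         DOUT_S_ROOT="/work/$CCSMUSER/archive/$CASE"
         DOUT_L_MSROOT="noresm/$CASE"
         DOUT_L_MSHOST="archive.example.org"
         BATCHSUBMIT="qsub"
         GMAKE_J="4"
         MAX_TASKS_PER_NODE="16" />
</code>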
  
  
==== CPU configurations ====
  
Define the NorESM1-M CPU configurations for your HPC system in //NorESM1-M/scripts/ccsm_utils/Machines/config_pes.xml//.
  
As a start, we recommend simply replacing all instances of //vilje// with your choice for //MACH// (i.e. the name tag of your machine).
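For example, assuming your machine tag is //mymach// (hypothetical) and GNU ''sed'' is available:

<code>
cd $HOME/NorESM1-M/scripts/ccsm_utils/Machines
cp config_pes.xml config_pes.xml.orig          # keep a backup of the original
sed -i 's/vilje/mymach/g' config_pes.xml       # replace the machine tag everywhere
</code>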
  
  
==== Building and runtime options ====
  
Copy //env_machopts.vilje// to //env_machopts.$MACH//, where //$MACH// has to be replaced with the name tag of your machine.
  
Edit the settings as necessary. They should make compilers, libraries, queuing-system commands, etc. available during building and model execution. You will likely have to remove or replace all module specifications, while the runtime environment settings should work for most systems.
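As an illustration, on a module-based system //env_machopts.$MACH// might contain something like the lines below. The module names are hypothetical; use whatever provides your compiler, MPI and NetCDF, and keep the syntax of the //vilje// template.

<code>
# Hypothetical module names -- replace with the modules available on your system
module purge
module load intel        # Fortran/C compilers
module load openmpi      # MPI library
module load netcdf       # NetCDF library
</code>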
  
==== Compiler and linker options ====
  
Copy //Macros.vilje// to //Macros.$MACH// if your compiler is intel. If your compiler is pgi, use //Macros.hexagon// as the template instead.
  
The lines that need to be customised are mainly the compiler commands and the locations of libraries such as NetCDF and MPI.
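As a rough sketch only: the variable names below are typical of CESM-style //Macros// files but should be checked against the template you copied, and all paths and compiler names are hypothetical.

<code>
# Hypothetical values -- point these at your compilers and NetCDF installation
NETCDF_PATH := /usr/local/netcdf
INC_NETCDF  := $(NETCDF_PATH)/include
LIB_NETCDF  := $(NETCDF_PATH)/lib
FC          := mpif90
CC          := mpicc
</code>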
  
==== Batch/queuing system rules ====
  
Copy //mkbatch.vilje// to //mkbatch.$MACH//.
  
Change the settings for //mach//, //group_id//, //mppsize//, //mppnodes//, etc. as necessary.
  
Replace the line ''mpiexec_mpt -stats -n $mpptotal omplace -nt ${maxthrds} ./ccsm.exe >&! ccsm.log.\$LID'' with the MPI launch command and appropriate options for your HPC system.
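As an illustrative sketch only (the actual launcher and its options depend on your MPI installation and queuing system), the replacement could look like:

<code>
# Hypothetical launcher -- use whatever starts MPI jobs on your system
mpirun -np $mpptotal ./ccsm.exe >&! ccsm.log.\$LID
</code>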
----

===== Testing =====

We suggest testing the installation by running a pre-industrial spin-up experiment that is configured as a 'start-up' simulation (i.e. it uses idealised conditions and observational climatologies as initial conditions).
Change directory to //$HOME/NorESM1-M/scripts//.

Set up the pre-industrial spin-up simulation with
''./create_newcase -case ../cases/NorESM1-M_PI_spinup -compset NAER1850CN -res f19_g16 -mach $MACH -pecount S''
where //$MACH// is the name tag of your machine.

Change directory to //../cases/NorESM1-M_PI_spinup//.

Run ''./configure -case'' to set up the simulation.

Run ''./NorESM1-M_PI_spinup.$MACH.build'' to stage the namelists in the run directory and build the model code.

Submit the run script //./NorESM1-M_PI_spinup.$MACH.run// (use the appropriate submit command for your HPC system, e.g. ''qsub'').
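For reference, the whole test sequence (with //$MACH// replaced by your machine tag, and ''qsub'' only as an example submit command) is:

<code>
cd $HOME/NorESM1-M/scripts
./create_newcase -case ../cases/NorESM1-M_PI_spinup -compset NAER1850CN \
                 -res f19_g16 -mach $MACH -pecount S
cd ../cases/NorESM1-M_PI_spinup
./configure -case
./NorESM1-M_PI_spinup.$MACH.build
qsub NorESM1-M_PI_spinup.$MACH.run     # or your system's submit command
</code>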
  
If the test is successful, the model should run for 5 days and write out restart conditions at the beginning of the 6th day. The location of the run directory is specified via //EXEROOT// in //config_machines.xml// (see section Porting).
  
===== Resources =====
  
  * [[noresm:runmodel:newbie|Quickstart for running NorESM1]]
  * [[noresm:runmodel:advanced|Advanced options for running NorESM1]]
  * [[http://www.teriin.org/projects/nfa/pdf/Working_paper7.pdf|Porting of NorESM1 to TERI (India)]]