

NorESM1-M for external users

To request model access, please send a request to noresm-ncc@met.no with some background information on how you plan to use the model.


Untar NorESM1-M_sourceCode_vXXX.tar.gz in your home directory. This will install the model source code in the folder $HOME/NorESM1-M.

Untar NorESM1-M_inputdata_vXXX.tar.gz to a folder that is available during run-time and has at least 70 GB of free space. This will create a sub-folder inputdata and install NorESM's forcing data and other boundary conditions in it.

Untar NorESM1-M_initialPiHist1_vXXX.tar.gz to a folder that is available during run-time and has at least 2 GB of free space. This will install restart conditions for the CMIP5 PiControl experiment and the first member of the historical experiment.
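
For example, assuming the archives were downloaded to your home directory and /work/$USER/noresm is a suitable run-time location (the paths and the vXXX version tag are placeholders; adjust them to your system and download), the extraction could look like:

   cd $HOME
   tar -xzf NorESM1-M_sourceCode_vXXX.tar.gz                             # creates $HOME/NorESM1-M
   mkdir -p /work/$USER/noresm
   tar -xzf NorESM1-M_inputdata_vXXX.tar.gz      -C /work/$USER/noresm   # creates .../noresm/inputdata (~70 GB)
   tar -xzf NorESM1-M_initialPiHist1_vXXX.tar.gz -C /work/$USER/noresm   # restart files (~2 GB)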


Define the characteristics of your HPC system in NorESM1-M/scripts/ccsm_utils/Machines/config_machines.xml.

Edit the following section:

<machine MACH="vilje"
         DESC="NTNU, Trondheim, 16 pes/node, batch system is PBS"
         EXEROOT="/work/$CCSMUSER/noresm/$CASE"
         OBJROOT="$EXEROOT"
         INCROOT="$EXEROOT/lib/include" 
         DIN_LOC_ROOT_CSMDATA="/work/shared/noresm/inputdata"
         DIN_LOC_ROOT_CLMQIAN="UNSET"
         DOUT_S_ROOT="/work/$CCSMUSER/archive/$CASE"
         DOUT_L_HTAR="FALSE"
         DOUT_L_MSROOT="/norstore/project/norclim/noresm/cases/$CASE"
         DOUT_L_MSHOST="norstore-trd-app0.hpc.ntnu.no"
         CCSM_BASELINE="UNSET"
         CCSM_CPRNC="UNSET"
         OS="CNL"
         BATCHQUERY="qstat -f"
         BATCHSUBMIT="qsub" 
         GMAKE_J="4" 
         MAX_TASKS_PER_NODE="16"
         MPISERIAL_SUPPORT="FALSE" />

Set MACH to a name tag/acronym of your choice for your HPC system. Update DESC correspondingly.

Set EXEROOT to the root location where the model should create the build and run directories. The model will replace $CCSMUSER with your Unix user name and $CASE with the specific case name (i.e. the simulation/experiment name).

Set DIN_LOC_ROOT_CSMDATA to the location where you installed the forcing and boundary condition data (this path should end with “/inputdata”).

Set DOUT_S_ROOT to the root location for the short-term archiving (normally on the work disk area of the HPC system).

Optionally, set DOUT_L_MSROOT to the root location for the long-term archiving (normally a location that is not subject to automatic deletion). Leave unchanged if you don't plan to use automatic long-term archiving (e.g., if you want to move the data manually).

Optionally, set DOUT_L_MSHOST to the name of the (remote) server for long-term archiving. Leave unchanged if you don't plan to use automatic long-term archiving.

Set BATCHSUBMIT to the submit command on your HPC system.

Set GMAKE_J to the number of make instances run in parallel when building the system. Set to 1 if licence or memory issues occur.

Set MAX_TASKS_PER_NODE to the maximum number of MPI tasks that can run on a single node on your HPC system (usually the same as the number of cores per node).
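
Putting the above together, a hypothetical PBS system with 32 cores per node, here called myhpc, might be described as follows. The machine name and paths are placeholders only; the long-term archiving entries are left unchanged (as recommended when automatic long-term archiving is not used), and attributes not discussed above are copied from the vilje entry:

<machine MACH="myhpc"
         DESC="Example site, 32 pes/node, batch system is PBS"
         EXEROOT="/work/$CCSMUSER/noresm/$CASE"
         OBJROOT="$EXEROOT"
         INCROOT="$EXEROOT/lib/include"
         DIN_LOC_ROOT_CSMDATA="/work/myuser/noresm/inputdata"
         DIN_LOC_ROOT_CLMQIAN="UNSET"
         DOUT_S_ROOT="/work/$CCSMUSER/archive/$CASE"
         DOUT_L_HTAR="FALSE"
         DOUT_L_MSROOT="/norstore/project/norclim/noresm/cases/$CASE"
         DOUT_L_MSHOST="norstore-trd-app0.hpc.ntnu.no"
         CCSM_BASELINE="UNSET"
         CCSM_CPRNC="UNSET"
         OS="CNL"
         BATCHQUERY="qstat -f"
         BATCHSUBMIT="qsub"
         GMAKE_J="4"
         MAX_TASKS_PER_NODE="32"
         MPISERIAL_SUPPORT="FALSE" />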

Define NorESM1-M CPU-configurations for your HPC system in NorESM1-M/scripts/ccsm_utils/Machines/config_pes.xml.

As a start, we recommend simply replacing all instances of vilje with your choice for MACH (i.e. the name tag of your machine).
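
For example, if your MACH tag is myhpc (a placeholder), a global replace such as the following sketch would do this (sed -i as used here assumes GNU sed):

   cd $HOME/NorESM1-M/scripts/ccsm_utils/Machines
   cp config_pes.xml config_pes.xml.orig       # keep a backup of the original
   sed -i 's/vilje/myhpc/g' config_pes.xml     # replace the machine tag everywhere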

Copy env_machopts.vilje to env_machopts.$MACH where $MACH is again the name of your machine.
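
For example, with myhpc again standing in for your machine name:

   cd $HOME/NorESM1-M/scripts/ccsm_utils/Machines
   cp env_machopts.vilje env_machopts.myhpc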

Edit the settings as necessary. The settings should make compilers, libraries, queuing-system commands etc. available during building and model execution. Most likely, you will have to remove/replace the module specifications, while the runtime environment settings should work for most systems.
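
As an illustration only, an edited env_machopts.myhpc might contain something like the following; the module names and versions are made-up examples and must be replaced with whatever your site provides:

   # Example only: load the compiler, MPI and netCDF modules provided by your site
   # (the module names and versions below are placeholders)
   module load intelcomp/13.0.1
   module load mpt/2.06
   module load netcdf/4.2.1
   # The runtime environment settings inherited from env_machopts.vilje
   # can usually be kept as they are.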
