The module for this version is part of the chemistry module set, so you will need to load that set before it appears in the modules list:

```
module load chemistry-modules
```

CP2K should then be visible under:

```
----- /shared/ucl/depts/chemistry/modulefiles -----
```
### Submitter

An automatic submitter is available for this version; load the submission scripts module from the chemistry set:

```
module load chemistry-modules
module load submission-scripts
```

The alias "submitters" will then list the available submitters. The "cp2k.submit" submitter takes up to 5 arguments, and any omitted will be asked for interactively:
```
cp2k.submit «input_file» «cores» «maximum_run_time» «memory_per_core» «job_name»
```

So, for example:

```
cp2k.submit water.inp 8 2:00:00 4G mywatermolecule
```

would request a job running CP2K with the input file "water.inp" on 8 cores, with a maximum run time of 2 hours, 4 gigabytes of memory per core, and the job name "mywatermolecule".
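Note that the memory request is per core, so the example above reserves 8 × 4 = 32 gigabytes across the whole job. A quick sanity check of that arithmetic (the variable names here are purely illustrative):

```
# Totals implied by the cp2k.submit example above (8 cores, 4G per core)
cores=8
mem_per_core_gb=4
total_gb=$(( cores * mem_per_core_gb ))
echo "total memory requested: ${total_gb}G"
```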
As with the previous versions, some extra arguments to the OpenMPI mpirun command are necessary. An example job script is below:
```
#!/bin/bash -l
# Set your project name here:
#$ -P «your_project»
# If necessary, alter the maximum quantity of memory here:
#$ -l mem=4G
# Alter the maximum run time here (hours:minutes:seconds)
#$ -l h_rt=2:00:00
# Alter the number of processors here:
#$ -pe openmpi 8
# loading the correct modules
module load chemistry-modules
module load «cp2k_module»
# Modify this to name your input file
InputFile=water.inp
OutputFile=water.out
mpirun --mca btl ^tcp -n $NSLOTS cp2k.popt $InputFile > $OutputFile
```
### branch 2_1 and trunk
To use these versions of CP2K on Legion, you will have to make considerable changes to your user environment, including using a test build of OpenMPI specifically designed for this purpose.
You also need to pick which CP2K package you wish to use: trunk or branch 2_1. The job script below selects branch 2_1; if you require trunk, you will need to change the module specification.
In order to use this special version of OpenMPI, you need to make a few changes to the normal OpenMPI job script. An example job script is shown below:
```
#!/bin/bash -l
# 1. Force bash as the executing shell.
#$ -S /bin/bash
# 2. Request ten minutes of wallclock time (format hours:minutes:seconds).
#$ -l h_rt=0:10:0
# 3. Request 1 gigabyte of RAM.
#$ -l mem=1G
# 4. Set the name of the job.
#$ -N CP2K_2_1
# 5. Select the OpenMPI parallel environment and 8 processors.
#$ -pe openmpi 8
# 6. Select the project that this job will run under.
#$ -P «your_project»
# 7. Set the working directory to somewhere in your scratch space.
#$ -wd «your_scratch_directory»
# 8. Run our MPI job.
module add compilers/gnu/4.4.0
module add «test_openmpi_module»
module add «cp2k_branch_2_1_module»
# Delete OpenMPI PE SSH wrapper
rm -f $TMPDIR/ssh
# Need to add in --prefix $MPI_HOME as not using system OpenMPI
mpirun --prefix $MPI_HOME -np $NSLOTS cp2k.popt C.inp > C.out
```
The main differences between this script and the normal OpenMPI scripts are that we delete an SSH wrapper at $TMPDIR/ssh and insert --prefix $MPI_HOME into the mpirun command. Note that we have also forced the loading of the correct modules in the job script. This assumes you have an input file called C.inp in your working directory; amend the script as necessary.
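The wrapper-deletion step can be tried in isolation; this is just a local sketch that mimics removing a file named ssh from a scratch directory (the real $TMPDIR is created by the scheduler for each job):

```
# Sketch: delete an SSH wrapper file from a job's temporary directory
tmpdir=$(mktemp -d)        # stands in for the scheduler-provided $TMPDIR
touch "$tmpdir/ssh"        # the wrapper the OpenMPI PE creates
rm -f "$tmpdir/ssh"        # remove it so mpirun falls back to the real ssh
[ ! -e "$tmpdir/ssh" ] && echo "wrapper removed"
```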
If you put the modules into your job script as above, you should not need them in your .bashrc, but it is probably worth adding them anyway to be on the safe side, unless doing so causes clashes with other programs you want to use.
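If you do add them to your .bashrc, the lines would mirror those in the job script. Only the compiler module name is given in this guide, so the OpenMPI and CP2K module names below are placeholders you will need to fill in with the actual names from `module avail`:

```
module add compilers/gnu/4.4.0
module add «test_openmpi_module»
module add «cp2k_branch_2_1_module»
```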