Basic usage

This page shows how to use ASH once you have correctly set up ASH as a Python library. See Setup for information on installing ASH.

Input structure

You create a Python3 script (e.g. called ashtest.py) and import the ASH library:

from ash import *   # This will import the most important ASH functionality into your namespace
import ash   # If you use this option you will have to add the "ash." prefix in front of ASH functions/classes.

ASH can only be imported if it has been installed as a package or if the ASH source directory has been manually added to the PYTHONPATH. See Setup.
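When ASH has not been installed as a package, the import only works if Python can find the source directory. A minimal sketch of doing this from within a script ("/path/to/ash" is a placeholder, not a real location):

```python
import sys

# Equivalent to setting PYTHONPATH in the shell; "/path/to/ash" is a
# placeholder for the actual ASH source location on your machine.
ash_dir = "/path/to/ash"
if ash_dir not in sys.path:
    sys.path.insert(0, ash_dir)

# "from ash import *" would now succeed if ash_dir points at ASH
```

Setting PYTHONPATH in the shell (or installing ASH properly) is preferable, since it keeps the script portable.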

You are then free to write the Python script however you prefer while taking advantage of ASH functionality. Typically you would first create one (or more) molecule fragments, then define a theory object, and finally call a specific job-module (singlepoint, an optimizer, numerical frequencies etc.). See Basic examples for examples.

Example script

Here is a basic ASH Python script, e.g. named ashtest.py:

from ash import *

#Defining a numcores variable with the number of available CPU cores. Will be given to ORCA object.
numcores=4
#Create fragment from a multi-line Python string (Cartesian coordinates in Angstrom)
fragstring="""
H 0.0 0.0 0.0
F 0.0 0.0 1.0
"""
molecule = Fragment(coordsstring=fragstring, charge=0, mult=1)

#Defining string containing ORCA Simple-input line
orcasimpleinput="! BP86 def2-SVP tightscf"
#Multi-line string containing ORCA block-input:
orcablocks="""
%scf maxiter 200 end
"""
#Defining ORCATheory object
ORCAcalc = ORCATheory(orcasimpleinput=orcasimpleinput, orcablocks=orcablocks, numcores=numcores)

#Geometry Optimization using the Optimizer (requires geomeTRIC library)
Optimizer(fragment=molecule, theory=ORCAcalc, coordsystem='tric')

The script above loads ASH, creates a new fragment from a string (see Coordinates and fragments for other ways), defines variables related to the ORCA interface, creates an ORCATheory object (see QM Interfaces and ORCA interface), and runs a geometry optimization using the Optimizer function (see Job Types and Geometry optimization).

Running script directly in the shell

For a very simple short job we can just run the script directly:

python3 ashtest.py

The output will be written to standard output (i.e. your shell). To save the output it is better to redirect it to a file:

python3 ashtest.py >& ashtest.out

For a really long job you would typically submit a jobscript to a queuing system instead. See the discussion of the subash submission script at the bottom of this page.

Interactive ASH in a REPL or iPython environment

It is also possible to run ASH within a read-eval-print-loop environment such as iPython. This allows for interactive use of ASH. See video below for an example.

If ASH has been set up correctly and iPython is available (pip install ipython), then running ASH within iPython should be straightforward. Make sure the iPython you use belongs to the same Python environment as ASH.

Interactive ASH in a Google Colab notebook

Try out the ASH basics in this Google Colab notebook:

ASH in Google Colab

Requires a Google account.

ASH with Julia support

Some ASH functionality (primarily the molecular crystal QM/MM code) requires a working Python-Julia interface, since some of the Python routines are too slow. ASH uses PythonCall for this.

ASH settings

Global settings are stored in a file in the ASH source directory (/path/to/ash/ash/) and can in principle be modified there. However, it is better to create a settings file called ash_user_settings.ini in your home directory, looking like the example below. Here you can set whether to use ANSI colors in the output, whether to print the inputfile and logo at the top, timings at the bottom etc. ASH will attempt to read this file on startup.

scale = 1.0
tol = 0.2
use_ANSI_color = True
print_input = True
print_logo = True
load_julia = True
julia_library = pythoncall
debugflag = False
print_exit_footer = True
print_full_timings = True
nonbondedMM_code = julia
connectivity_code = julia


The file ~/ash_user_settings.ini should not contain ' or " quotation marks when defining strings.
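The reason, illustrated here with Python's own configparser as a stand-in for ASH's settings reader (the [Settings] section header is an assumption of this sketch, not necessarily ASH's actual format), is that INI-style parsers keep quote characters as literal parts of the value:

```python
import configparser

# Two INI snippets: one bare value, one quoted value.
good = "[Settings]\nconnectivity_code = julia\n"
bad = "[Settings]\nconnectivity_code = 'julia'\n"

cp_good = configparser.ConfigParser()
cp_good.read_string(good)
print(cp_good["Settings"]["connectivity_code"])  # julia

cp_bad = configparser.ConfigParser()
cp_bad.read_string(bad)
print(cp_bad["Settings"]["connectivity_code"])  # 'julia'  (quotes kept literally)
```

A quoted value would thus never compare equal to the expected bare string.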

In addition to options above it is also possible to specify the paths to various external codes. If these paths are set in the settings file, one can avoid defining them in the inputfiles.

orcadir = /path/to/orcadir
daltondir = /path/to/daltondir
xtbdir = /path/to/xtbdir
psi4dir = /path/to/psi4dir
cfourdir = /path/to/cfourdir
crestdir = /path/to/crestdir

Use of colors in ASH output

ASH can display ANSI colors in output if use_ANSI_color = True is used in the settings file (see above). This makes the output more readable.

Note, however, that colors will only display properly if using a text reader that supports it:

- less may require the -R flag: less -R outputfile. Or use the global setting: export LESS=-R
- vim and emacs require plugins
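The colors are ordinary ANSI escape sequences embedded in the output text. A small generic illustration (these are standard ANSI codes, not ASH's exact color scheme):

```python
# Standard ANSI escape codes: "\033[92m" switches to bright green,
# "\033[0m" resets the formatting.
GREEN = "\033[92m"
RESET = "\033[0m"

line = f"{GREEN}SCF converged{RESET}"
print(line)        # rendered in green by an ANSI-capable terminal
print(repr(line))  # the raw escape bytes that a plain pager would show
```

These raw escape bytes are exactly what less displays as garbage unless invoked with -R, which passes them through for the terminal to interpret.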

Submitting ASH jobs on a cluster

Once you start using ASH regularly you will of course want to submit jobs to a queuing system on a cluster. It is then best to have a submission script that sets up the ASH Python environment, copies inputfiles to local scratch on the compute node, runs the ASH Python script and copies results back from scratch to the submission directory.

A submission script, subash, has been created for this purpose; it can be found in the scripts directory of the ASH source code. It assumes the Slurm queuing system but should be fairly easy to adapt to other systems.

Simply download subash or copy/paste the script into a file called e.g. subash:

#1. Download subash script from repository
#2. Copy or move it to a suitable location on your cluster
#3. Make it executable
chmod +x subash
#4. Make some changes to the script for your cluster (see below)

You will have to make the following changes to the subash script (lines 6-35):

  • Change the name of the default queue to submit to (depends on cluster)

  • Change the default walltime (depends on cluster and queue)

  • Change the name of the local/global scratch directory on each node (depends on cluster and maybe queue)

  • Change the path to the ASH environment file. See more info below.

Using subash:

subash ashtest.py   # (where ashtest.py is your ASH script). subash will figure out the number of cores if a numcores variable is defined in the script.
subash ashtest.py -p 8  # (where -p 8 requests 8 cores from the queuing system)

When you use subash it will perform some checks, such as whether the inputfile exists and whether you have provided CPU-core information. It will create a Slurm submission file in the current directory (called ash.job) and then submit the job to the queuing system. The number of CPU cores to request can be provided as a flag to subash (it should then match the number of cores requested for the Theory level in the inputfile).

However, our preferred way is to have the subash script automatically determine a suitable number of CPU cores from the inputfile. This works if the Python script contains a line of the form "numcores=X" (as in the example above and throughout the documentation) and that variable is used to define the number of cores in the Theory object (see e.g. how the ORCAcalc object is defined in the example script at the top of this page). Many ASH examples in the documentation use this approach.
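A sketch of how such a "numcores=X" line could be picked up by a submission script. This is an illustration using a regular expression, not subash's actual implementation, and the example script text is hypothetical:

```python
import re

# Hypothetical contents of an ASH inputfile containing a numcores line.
script_text = """from ash import *
numcores=8
frag = Fragment(coordsstring="H 0.0 0.0 0.0", charge=0, mult=2)
"""

# Look for a line starting with "numcores=<integer>" (whitespace tolerated).
match = re.search(r"^\s*numcores\s*=\s*(\d+)", script_text, re.MULTILINE)
cores = int(match.group(1)) if match else None
print(cores)  # 8
```

The extracted value can then be passed straight to the scheduler directive (e.g. Slurm's --ntasks), keeping the queue request consistent with the Theory object.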

subash options:

No .py file provided. Exiting...

Usage: subash      Dir should contain .py Python script.
Or: subash -p 8      Submit with 8 cores.
Or: subash -g 1:1GB      Submit with 1 GPU core and request 1 GB of memory.
Or: subash -m X      Memory setting per CPU (in GB).
Or: subash -t T      Submit using T threads (for OpenMP/OpenMM/MKL).
Or: subash -s /path/to/scratchdir      Submit using a specific scratchdir.
Or: subash -mw            Submit multi-Python job (multiple walkers).
Or: subash -n node54      Submit to specific node (-n, --node).
Or: subash -q queuename    Submit to specific queue.

Configuring the ASH environmentfile:

If you followed the installation instructions for ASH (Setup) you should have created an ASH environment file in your home directory (~/). This file can be moved anywhere you like, but make sure you modify the ENVIRONMENTFILE variable in the subash script to point to it. It will be sourced (source $ENVIRONMENTFILE) right before python3 is launched in the job-submission file.

If you want to make sure ASH can automatically find external QM programs e.g. ORCA, MRCC, CFour, CP2K etc. as well as OpenMPI (used e.g. by ORCA) then it is best to add the necessary PATH and LD_LIBRARY_PATH definitions (or alternatively module load commands) to this ASH environment file.
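A small sketch of why those PATH entries matter: external binaries are located by PATH lookup, which shutil.which mimics in Python ("/path/to/orcadir" is a placeholder, and the "orca" lookup will simply return None if no such binary exists):

```python
import os
import shutil

# Prepend a (placeholder) program directory to PATH, as the ASH
# environment file does with export PATH=... lines.
os.environ["PATH"] = "/path/to/orcadir" + os.pathsep + os.environ.get("PATH", "")

# shutil.which performs the same kind of lookup a shell does:
# a full path if the binary is found on PATH, otherwise None.
print(shutil.which("orca"))
```

LD_LIBRARY_PATH plays the analogous role for shared libraries (e.g. OpenMPI libraries needed by ORCA) at program startup.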

Below is an example used on our cluster:


#ASH and Python environment
#PYTHONPATH for finding ASH: generally not recommended, commented out
export PATH=$python3path:$PATH

ulimit -s unlimited

source /homelocal/rb269145/PROGRAMS/intel/oneapi/mkl/latest/env/

source /homelocal/rb269145/scripts/

source /homelocal/rb269145/scripts/

#PATH and LD_LIBRARY_PATH modifications for program paths defined above