PION simulation code

Photoionization Of Nebulae

PION code for astrophysical fluid dynamics with radiative transfer

Overview of pion

pion is a grid-based fluid dynamics code for hydrodynamics (HD) and magnetohydrodynamics (MHD), including a ray-tracing module for calculating the attenuation of radiation from point sources of ionizing photons, and a module for coupling fluid dynamics to microphysical processes. The algorithms are described in Mackey & Lim (2010), Mackey & Lim (2011), and Mackey (2012), with recent updates described in the conference proceedings Mackey et al. (2020).

The code was originally written to model the evolution of HII regions, the photoionized bubbles that form around hot stars, and was later extended to include stellar wind sources, so that wind bubbles and photoionized bubbles can be simulated at the same time.

There is old documentation of the code from 2010 (now partly outdated), including results from a suite of standard test problems to validate the code. Follow the link to read more. If you are interested in using the code for an astrophysics project, try running some test calculations and then contact me if you want to collaborate. The source code is hosted in a git repository at the DIAS gitlab server, and documentation can be found at pion.ie.

pion is described in the paragraphs below. Results and movies from simulations can be found in the research pages, and in links to publications.

Example: Betelgeuse

Simulation Snapshot

Above is a snapshot of a simulation using pion, taken from Mackey, Mohamed, Neilson, Langer, & Meyer, 2012, ApJ Letters, 751, L10. The bow shock produced by the runaway star Betelgeuse was simulated, assuming it has recently evolved from a blue supergiant to its current state as a red supergiant. This can partly explain some of the more puzzling aspects of Betelgeuse's circumstellar medium. Click on the image to open a movie of the simulation's full evolution. Read more here.

Introduction to the pion code

pion is written in C++, and is designed to be as object-oriented (i.e. modular) as possible. This means that, at least in principle, parts of the code can be taken out and plugged into other codes, and parts of other codes can be merged into pion. In practice, this is only really possible for things like chemistry networks and heating/cooling processes, because most other modules depend on the underlying data structures in the computational grid. The main advantage of object-oriented code is that if you edit one part of the code you are very unlikely to break another part; it helps with maintenance, debugging, and extending the code's capabilities.
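As an illustration of this modular design, a physics module is accessed through an abstract interface. The sketch below is hypothetical (the class and method names are invented for this example, not taken from the pion source), but it shows the general pattern of a virtual base class that a concrete chemistry/cooling module would inherit from:

    // Hypothetical sketch of a module interface; names are illustrative only,
    // not the actual pion API.
    #include <vector>

    class MicrophysicsBase {
    public:
      virtual ~MicrophysicsBase() = default;
      // Advance the chemical/thermal state of one cell by a time interval dt.
      virtual int update_state(std::vector<double> &primitive, double dt) = 0;
      // Return the net cooling rate for the current cell state.
      virtual double cooling_rate(const std::vector<double> &primitive) const = 0;
    };

    // A concrete module only implements this interface; the rest of the code
    // interacts with it through MicrophysicsBase pointers, so swapping in a
    // different chemistry network does not touch the grid or solver code.
    class ConstantCooling : public MicrophysicsBase {
    public:
      int update_state(std::vector<double> &primitive, double dt) override {
        // ... integrate ionisation fractions and temperature over dt ...
        return 0;
      }
      double cooling_rate(const std::vector<double> &) const override {
        return 0.0;  // placeholder
      }
    };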

The main code modules are:
  • Systems of equations: equations of inviscid HD, ideal MHD, and ideal MHD with the mixed-GLM divergence cleaning method (Dedner et al., 2002).
  • Coordinate systems: Cartesian coordinates in 1D, 2D, and 3D; cylindrical coordinates in 2D (R,z); and spherical coordinates in 1D (r).
  • Hydro/MHD solvers: Roe-type solvers are implemented for both HD and MHD; a flux-vector-splitting solver and a (slow) exact solver are also available for HD.
  • Parallel code communication: a COMMS class using either text-file communicators or the Message Passing Interface (MPI).
  • Computational grid: the grid is a multiply-linked list of finite-volume cells (or zones); a sketch of this structure is given after this list. Most commonly-used boundary conditions are implemented. When run in parallel each process has a subdomain of the full grid, and inter-process communication is used to share boundary data. Static mesh-refinement is implemented with adaptive timestepping.
  • Microphysics: chemistry and heating/cooling processes. A number of different classes have been written for different situations.
  • Raytracing, on serial and parallel grids, from point sources or sources at infinity. This uses the short-characteristics raytracer, which is a bit diffusive but very fast and scales well.
  • Data input and output (I/O), including ASCII, FITS, and Silo formats.
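For example, the multiply-linked-list grid mentioned above can be pictured as each finite-volume cell holding pointers to its neighbours in every coordinate direction. The sketch below is purely illustrative (field and function names are invented, not pion's actual data structures):

    // Illustrative cell structure for a multiply-linked list of zones;
    // names are hypothetical, not taken from the pion source.
    #include <array>
    #include <cstddef>

    constexpr std::size_t MAX_DIM = 3;   // up to 3 spatial dimensions
    constexpr std::size_t N_VAR   = 8;   // e.g. density, pressure, velocity, B

    struct Cell {
      std::array<double, N_VAR>  P{};       // primitive variables in this zone
      std::array<Cell*, MAX_DIM> ngb_pos{}; // neighbour in +x, +y, +z
      std::array<Cell*, MAX_DIM> ngb_neg{}; // neighbour in -x, -y, -z
      bool is_boundary = false;             // ghost/boundary cell flag
    };

    // Sweeping along a row of cells in the +x direction:
    inline void sweep_x(Cell *c) {
      for (; c != nullptr; c = c->ngb_pos[0]) {
        // ... compute fluxes between c and its +x neighbour ...
      }
    }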

Problems are set up with a parameter file and an initial-condition generator. This writes an output file, which the main simulation executable then reads. The output files are also restart files, and contain all of the simulation parameters in the header. Many simulation parameters can also be overridden with command-line arguments. Almost all features of the code can be used without recompilation: HD vs. MHD, photoionization, different coordinate systems, different dimensionality of the problem, etc. are all run-time parameters. This is achieved with inherited classes and interfaces defined by virtual base classes. Some features can be excluded with compile-time flags to make the executable smaller.
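The run-time selection of physics follows the usual virtual-interface pattern. The sketch below is schematic (class and function names are invented for illustration): a string read from the parameter file, or a command-line override, chooses the concrete class, and the rest of the code only ever sees the base-class interface:

    // Schematic run-time selection of the equation system; names are
    // illustrative, not the actual pion classes.
    #include <memory>
    #include <stdexcept>
    #include <string>

    class EquationsBase {
    public:
      virtual ~EquationsBase() = default;
      virtual void flux(const double *state, double *out_flux) const = 0;
    };

    class EulerEquations : public EquationsBase {
    public:
      void flux(const double *state, double *out_flux) const override { /* ... */ }
    };

    class IdealMHDEquations : public EquationsBase {
    public:
      void flux(const double *state, double *out_flux) const override { /* ... */ }
    };

    // Chosen once at startup from the parameter file (or a command-line
    // override); switching between HD and MHD needs no recompilation.
    std::unique_ptr<EquationsBase> make_equations(const std::string &eqn_type) {
      if (eqn_type == "euler") return std::make_unique<EulerEquations>();
      if (eqn_type == "mhd")   return std::make_unique<IdealMHDEquations>();
      throw std::runtime_error("unknown equation type: " + eqn_type);
    }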

Features included in pion

  • Solving Euler or ideal MHD equations on serial and parallel grids, with any of the coordinate systems described above. First- or second-order accurate (in time and space) integration schemes are supported.
  • Calculation of column densities of neutral/ionised/all gas from radiation sources (on or off the grid domain, in serial and in parallel).
  • Passive tracers advected with the flow, e.g. for chemical species.
  • Dedner et al. (2002) divergence cleaning for multi-dimensional MHD simulations (with some improvements); the schematic equations are given after this list.
  • A stellar wind inflow boundary condition, including time evolution (read from a text file), can be included in all coordinate systems. For cylindrical and Cartesian coordinates this takes the form of a circle/sphere within which a freely expanding wind is imposed.
  • There are a few microphysics integrators available, based on the descriptions in the references above. A standard base class for microphysics defines the interface to the rest of the code, so it is relatively simple to add complex chemical networks and heating/cooling functions. Dealing with radiation fields and extinction is more complicated and only partially supported, so adding in extinction-dependent photo-rates is not trivial. An adaptive integrator using the CVODE solver (part of the SUNDIALS library) is available.
  • The code should run on unix/linux systems with standard GNU or Intel compilers, and also on OS X (10.6 and later) if Xcode is installed. The makefile may need modification for specialised systems.
  • The code has been used in parallel with at least 1024 MPI processes on different supercomputers (Stokes at ICHEC, JUROPA at Jülich Supercomputing Centre, SuperMUC at LRZ). Results of scaling tests are presented in Mackey (2012,A&A,539,A147).
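For reference, the mixed-GLM cleaning mentioned in the list above couples a scalar field ψ to the induction equation so that divergence errors are advected off the grid and damped. Schematically (following Dedner et al. 2002; the implementation in pion may differ in detail):

    % Mixed GLM divergence cleaning, schematic form (Dedner et al. 2002):
    \begin{align}
      \frac{\partial \mathbf{B}}{\partial t}
        + \nabla\cdot\left(\mathbf{v}\,\mathbf{B} - \mathbf{B}\,\mathbf{v}\right)
        + \nabla\psi &= 0, \\
      \frac{\partial \psi}{\partial t} + c_h^2\,\nabla\cdot\mathbf{B}
        &= -\frac{c_h^2}{c_p^2}\,\psi .
    \end{align}
    % c_h sets the speed at which divergence errors are transported away,
    % and c_p controls how quickly they are damped.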

Features NOT included in pion

  • Gravity: neither self-gravity nor an external gravitational potential is included.
  • Multi-threaded execution with e.g. OpenMP (in development).
  • Adaptive mesh-refinement.

Parallelisation and scaling

  • The parallelisation is with MPI: each process is allocated a subset of the computational grid and exchanges boundary data with its neighbours so that the pieces fit together into a consistent global solution.
  • The key design consideration was that the parallel version of the code should produce identical results to the serial version (to machine precision); this is now the case for all hydro/MHD problems tested. An exception is problems with chemistry integrated using CVODE: it integrates the equations to a given relative-error tolerance, so machine-precision differences can change the number of iterations used and hence the solution.
  • Scaling is very good up to 1024 cores for large problems. A good rule of thumb is that each MPI process should have a subdomain with at least 32x32 cells in 2D, or 32x32x32 in 3D (a worked example follows this list). For smaller subdomains the number of boundary cells becomes comparable to the number of grid cells, and the ratio of computation to communication becomes unfavourable.
  • The number of MPI processes must be a power of 2.
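As a concrete example of the rule of thumb above: a 256x256x256 grid decomposed over 512 MPI processes (a power of 2) gives each process a 32x32x32 subdomain, right at the recommended minimum; decomposing the same grid over 4096 processes would leave only 16x16x16 cells per subdomain, and communication would start to dominate over computation.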

Plans for future development

  • Allowing MPI processes to control more than one sub-domain, and adding multithreading.
  • Multi-ion chemical kinetics including non-equilibrium photoionization.


© Jonathan Mackey 2017-2020.