# Unstructured (COMPAS)

**Introduction**

The EMS hydrodynamic model, SHOC, utilizes a sparse coordinate system internally to organize its data. This is essentially an unstructured coordinate system for curvilinear grids. The system can be generalized to suit any cell shape (e.g. triangles, squares, hexagons), which makes it suitable for finite volume algorithms. In recognition of the superiority of unstructured meshes over orthogonal curvilinear grids in complex coastal environments, it is considered desirable to transition SHOC into a full finite volume code. This effectively results in a new hydrodynamic model, albeit with full backwards compatibility with the peripheral infrastructure of the model (core libraries, sediment, BGC and wave libraries, I/O, parameter file configuration). Constructing such a model involves generalizing the unstructured coordinate system and replacing part of the dynamic core with finite volume equivalent algorithms.

Both the unstructured generalisation and the replacement of dynamic core algorithms follow the MPAS-O (https://mpas-dev.github.io/) framework as described by Ringler et al. (2010) and Ringler et al. (2013). This framework places variables on a C grid and employs the vector invariant approach to solving momentum advection and the Coriolis term. The scheme conserves total energy and potential vorticity, and mass, velocity and potential vorticity evolve in a consistent manner. The framework is grid agnostic, such that the solution is valid for any orthogonal mesh. In the first instance, backward compatibility with SHOC is maintained by running the finite volume algorithms on the orthogonal curvilinear grids generated by SHOC, using the identical input parameter file. We retain the 'z' or sigma vertical coordinate of SHOC, with the associated wetting and drying algorithm in the 'z' implementation. This implies that the barotropic and baroclinic pressure formulations remain the same as in SHOC. Similarly, vertical mixing of momentum is identically retained. The tangential velocity used in the Coriolis term is formulated according to Thuburn et al. (2009); this formulation has the desirable trait of supporting a stationary geostrophic mode. Output can be either in equivalent SHOC formats for structured curvilinear grids (including CF1.0 compliance) or in the CF UGRID standard. While the MPAS-O model is a global model suitable for coupling with atmospheric and sea-ice models for investigation of climate impacts, the model proposed here is aimed at use at the coastal or regional sea scale. As such, it retains the full open boundary and turbulence closure suites offered by SHOC.
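For reference, the vector-invariant shallow-water momentum equation at the heart of this framework (Ringler et al., 2010) can be sketched as:

```latex
\frac{\partial \mathbf{u}}{\partial t} + q\,(h\mathbf{u}^{\perp})
  = -g\,\nabla(h + b) - \nabla K,
\qquad K = \tfrac{1}{2}\,|\mathbf{u}|^{2},
\qquad q = \frac{\zeta + f}{h},
```

where h is the fluid thickness, b the bottom elevation, ζ the relative vorticity and f the Coriolis parameter. Momentum advection and the Coriolis force are combined in the single term q h u⊥, which is why a consistent reconstruction of the tangential velocity (Thuburn et al., 2009) is essential to the geostrophic mode.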

Parallel processing using OpenMP or MPI is included in COMPAS following the methodology utilized in SHOC. This uses slave-slave information transfer for state variable information and master-slave transfer for forcing data. It is anticipated that this approach will be optimized so that hybrid parallel processing is accomplished where OpenMP is used within a multi-core node and MPI is used to partition the grid across nodes.

This page documents the evolution of the new code, demonstrating its performance in a series of test cases of increasing complexity. Ultimately this exercise is hoped to conclude with full functionality of the SHOC equivalent, with the advantage of supporting a full range of unstructured meshes. Currently the basic model is functional, as demonstrated in the following tests, however, some key functionality remains absent.

The following examples are all performed on orthogonal curvilinear (quadrilateral) grids.

**1. Unstructured coordinate system**

Although SHOC is a finite difference model, it uses an unstructured coordinate system (the 'sparse' or 'compressed' system; Herzfeld, 2006) internally, where cell neighbours are accessed via indirect mapping, and vectors are created which contain 'cells to process' for cell centred variables and u1 and u2 velocity. This system is specific to curvilinear (quadrilateral) Arakawa C grids, but can be generalized for any C grid to render it suitable for finite volume algorithms. We retain the structure of the SHOC coordinate system for COMPAS. This differs from MPAS in that a 3D mapping is made, and each vertical layer is not referenced using a k index. Vectors for 2D and 3D variables exist, with the former being a subset of the latter.
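The indirect-mapping idea can be illustrated with a toy 1D sketch (array names hypothetical, not SHOC's actual variables): neighbour relations are explicit mapping arrays rather than (i, j, k) arithmetic, and self-mapping boundary cells yield a no-gradient behaviour automatically.

```python
# Toy 1D sketch of the 'sparse' system: neighbours are reached through
# explicit mapping arrays rather than (i, j, k) index arithmetic.
import numpy as np

e1 = np.array([1, 2, 3, 4, 4])   # neighbour map in one direction
m1 = np.array([0, 0, 1, 2, 3])   # neighbour map in the opposite direction
# the end cells map to themselves, i.e. boundary cells are self-mapping

T = np.array([10.0, 11.0, 12.0, 13.0, 14.0])

# a centred difference written entirely through the maps; at the
# self-mapping ends it degenerates to a one-sided form automatically
grad = 0.5 * (T[e1] - T[m1])
print(grad)   # interior cells give 1.0, end cells give 0.5
```

The same pattern carries over to 2D/3D: every wet cell gets a single sparse index, and all stencil operations are written through the maps.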

Land boundaries are associated with a ‘ghost’ cell so that lateral boundary conditions can be specified. For cell centered variables this is a no-gradient condition, for normal velocity a zero flux condition and tangential velocity a free or no slip condition. Generally a ghost cell is not required to achieve this, and a land mask vector would accomplish the same effect. However, SHOC contains a semi-Lagrangian transport model which we hope to include in COMPAS, and this uses streamline tracing off cell centres which requires normal velocity in the first lateral land cell to assume the negative value of the first wet cell adjacent to land in order to make a zero flux condition at the land edge. It is easy to accomplish this using the land ghost cells, hence we retain this approach in COMPAS. Vectors exist containing all ghost cells and their wet neighbours to set the lateral conditions. Note that in some cases (outside corners, land bridges one cell wide) multiple ghost cells are required at the same geographic location in order to uniquely specify a lateral boundary condition (e.g. different values are required on a land bridge one cell wide depending on from which direction the bridge is approached). This is also easily accomplished in the unstructured system, as extra ghost cells can be created wherever and whenever required. Cells at the limits of the domain (e.g. land ghost cells, sediment cells, open boundary cells) are self-mapping, so that a no-gradient condition naturally occurs when using higher order numerics. See Section 12.2 Science Manual for further details.

The cells to process vectors for 2D and 3D variables are organized so that the first v2 (or v3) cells represent wet cells, v2+1 to b2 represent open boundary cells, b2+1 to a2 represent ‘auxiliary’ cells for MPI processing (cells that belong to an alternative partition) and a2+1 to n2 represent land ghost cells (see Section 13.2 Science Manual). In this manner, looping over certain subsections of the vectors can isolate a particular class of cell. Additionally, separate vectors exist for open boundary and ghost cells.
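The subsection layout can be illustrated with a toy vector (indices invented; 0-based Python slices, whereas the text above counts from 1):

```python
# Toy illustration of the cells-to-process layout: entries [0, v2) are wet
# cells, [v2, b2) open-boundary cells, [b2, a2) auxiliary (other-partition)
# cells and [a2, n2) land ghost cells.
w2 = [3, 7, 1, 9,      # wet
      12, 15,          # open boundary
      20,              # auxiliary (MPI)
      31, 32]          # land ghost
v2, b2, a2, n2 = 4, 6, 7, 9

wet, obc = w2[:v2], w2[v2:b2]
aux, ghost = w2[b2:a2], w2[a2:n2]

# looping over one subsection isolates a single class of cell,
# e.g. applying open-boundary conditions only:
for c in obc:
    pass  # set the boundary value at sparse index c
print(wet, obc, aux, ghost)
```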

Further details of the SHOC coordinate system can be found in Section 12, Science Manual. Although we use the same concepts in COMPAS, the implementation is slightly different, as vectors of centres, edges and vertices are created, and these (and their neighbours) are created on the basis of cells sharing geographic locations of edges, or edges sharing vertices.

**2. Grids**

EMS contains the triangular mesh generator 'triangle', written by J. Shewchuk (https://www.cs.cmu.edu/~quake/triangle.html), as a library function for the purpose of interpolating unstructured data onto a structured grid. This is particularly useful for interpolation of sparse 'mud-map' data for benthic sediment initial conditions, as these data are rarely supplied in regular gridded format. The triangle software can create a Delaunay triangulation, and its Voronoi dual, given a set of points and a perimeter. This has been adapted to create unstructured meshes either from a rectangular (metrics in m) or geographic rectangular (metrics in lat/long) specification (see Section 4.6 User's Manual), or by conversion of existing curvilinear meshes. The latter is achieved by creating triangles using the grid locations as a base, and the u2 velocity location as the apex (see Fig. 2.1).
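The duality exploited here can be stated simply: each vertex of the Voronoi diagram is the circumcentre of a Delaunay triangle. A minimal numpy sketch of that computation (illustrative only, not the triangle library's code):

```python
# Each Voronoi vertex is the circumcentre of the corresponding Delaunay
# triangle; this is the geometric link between the two dual meshes.
import numpy as np

def circumcentre(a, b, c):
    """Circumcentre of 2D triangle abc, i.e. its dual Voronoi vertex."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

# right triangle: the circumcentre is the midpoint of the hypotenuse
print(circumcentre((0, 0), (2, 0), (0, 2)))   # -> [1. 1.]
```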

Note that triangle creates constrained conforming Delaunay triangulations and their Voronoi duals, but not Centroidal Voronoi Tessellations (CVT). An alternative meshing tool, JIGSAW, written by Darren Engwirda (https://github.com/dengwirda/jigsaw/), produces high quality CVT meshes specifically designed to accommodate the MPAS framework. JIGSAW is available as a stand-alone executable or a library function; the latter has been integrated into the EMS suite, allowing meshes to be built inline. JIGSAW is the primary meshing tool used by COMPAS.

Open boundary specification is more challenging with unstructured meshes, as there may be multiple boundary edges associated with a boundary cell centre, and the start and end coordinates of each boundary edge must be provided to uniquely specify the open boundary. Specialist software will be required to assist with this process.

**3. Visualisation**

Although we output model data in UGRID v1.0 format (http://ugrid-conventions.github.io/ugrid-conventions/) we currently do not have a viewer for this format. Output is therefore limited to time-series at a point. The triangulation described above is used to map time series location (lat, long) to an unstructured index; this routine is available as a pre-existing library function in EMS.
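The location-to-index mapping can be sketched as follows (not the actual EMS routine): for a Voronoi-type mesh, the cell containing a point is simply the cell whose centre is nearest to it.

```python
# Sketch of mapping a time-series (lon, lat) location to an unstructured
# index: for a Voronoi-type mesh the containing cell is the one whose
# generating centre is nearest to the query point.
import numpy as np

centres = np.array([[0.0, 0.0], [1.0, 0.0],
                    [0.0, 1.0], [1.0, 1.0]])   # illustrative cell centres

def cell_index(x, y):
    """Sparse index of the cell containing point (x, y)."""
    d2 = np.sum((centres - np.array([x, y])) ** 2, axis=1)
    return int(np.argmin(d2))

print(cell_index(0.9, 0.2))   # -> 1
```

In practice EMS uses the triangulation for this lookup; the nearest-centre rule above is exact only for true Voronoi meshes.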

**4. Test 1: 3×3 pool**

A pool of size 3×3 cells is subjected to a constant westerly wind of 0.1 Nm^{-2}. An open boundary is located on the southern edge, which is forced with a semi-diurnal tide of 1 m. Only two layers exist, from 0 to 50 m and from 50 m to the bottom at 100 m. A constant vertical mixing coefficient of 0.05 m^{2}s^{-1} is used. Due to the small size of this domain, it constitutes a useful testbed for examining the cell mappings. The relationship between cell centres, edges and vertices is illustrated in Fig. 4.1 for the surface layer. The wet domain consists of the area enclosed by the thick black line, containing cells 1 – 9. Ghost cells are created adjacent to solid boundaries; these are used to set the lateral boundary conditions. Similar to SHOC, multiple ghost cells may be created at outside corners or headlands one cell wide. The open boundary is associated with two ghost cells so that higher order advection schemes may use interpolated open boundary information. We also create tangential ghost edges (edges 25 – 32) so that lateral velocity boundary conditions may be defined; however, these edges are currently not in use. Processing vectors are created for cells, edges and vertices that are subdivided into wet locations to process (e.g. cells 1-9), open boundary locations (e.g. cells 7-9) and ghost locations (e.g. cells 10-26). The mappings between cells, edges and vertices are defined according to Ringler et al. (2010) Table 2 and Fig. 4.2.

**5. Test 2: Closed Basin**

A closed basin with sloping bottom is subjected to a constant westerly wind stress of 0.1 Nm^{-2}, with initial stratification of 20^{o}C from 0-30 m and 10^{o}C below 40 m. Vertical closure is performed using the Harcourt (2015) scheme. Bottom depth is 50 m on the southern coast and 110 m on the northern coast. The sloping bottom induces topographic potential vorticity, resulting in a western intensification of the flow (Southern Hemisphere). The unstructured solution is presented in Fig. 5.1 and the SHOC solution in Fig. 5.2, showing a comparable response for sea level and depth averaged flow after 10 days.

**6. Test 3: Test estuary**

A flat bottomed basin of 20 m depth is connected to a straight estuary whose depth increases linearly from 5 m at the head to 20 m at the mouth. The eastern boundary is open and forced with a semi-diurnal tide of 1 m amplitude. Freshwater inflow of 100 m^{3}s^{-1} is input at the head of the estuary. A westerly wind of 0.1 Nm^{-2} is also applied. The k-ε vertical closure scheme is used. Surface salinity and flow solutions after 30 days are presented in Figs. 6.1 and 6.2.

**7. Test 4: Real application – Fitzroy Estuary**

The Fitzroy Estuary (https://research.csiro.au/cem/projects/fitzroy-estuary/) is simulated from 1 June 2012 to 20 August 2012 subject to real surface fluxes (momentum, heat and freshwater), river flow and boundary forcing through an offshore open boundary and The Narrows. Boundary information is supplied by the eReefs regional 4 km model (https://research.csiro.au/ereefs/models/). An animation of the evolution of salinity and surface flow is presented below.

**8. Test 5: 3×3 hex pool**

This test is the same as Test 1, except a hexagonal mesh is used to test the capability of non-quadrilateral cells. The mesh with centre, edge and vertex locations is shown in Fig. 8.1. In the first instance no open boundaries are applied, and the domain is forced with a westerly wind. The sea level and temperature solutions for both the hexagonal and the 3×3 quadrilateral (Fig. 4.1) meshes are shown in Fig. 8.2, along with the SHOC solution and the distributed processing solutions for the unstructured model. All solutions show a decrease in temperature due to vertical mixing, with the quadrilateral grid ~0.2^{o}C cooler after 20 days. Some difference is expected due to the geography of the test domain; however, the main goal of this test is to demonstrate that a viable solution can be achieved with hexagonal grids.

An open boundary is then allocated to cells 2 and 3 and a tide imposed as in Test 1. Sea level and temperature solutions are displayed in Fig. 8.3, showing that again realistic solutions are obtained using hex grids with an open boundary.

**9. Test 6: Hex closed basin**

This test is identical to Test 2 except a hexagonal grid is used. Comparisons to the quadrilateral solutions are shown in Fig. 9.1, where again the hexagonal mesh produces viable solutions.

**10. Test 7: Hex test estuary**

The test estuary of Test 3 is converted to a hexagonal mesh, illustrated in Fig. 10.1. As in Test 3, an open boundary exists on the eastern domain, river flow is input at the estuary head in the western domain, and a westerly wind is applied. The hexagonal conversion doubles the resolution (two quadrilateral cells convert to one hexagonal cell), hence a mesh of different size to the curvilinear original results. Solutions are therefore also expected to differ. Time series of sea level and salinity are shown in Fig. 10.2.

**11. ROAM**

The ROAM functionality (https://research.csiro.au/cem/projects/bluelink-roam/) is operational in COMPAS. Both quad and hex grids are supported with the same automated relocatable features supported by SHOC. Figures 11.1-11.3 show a comparison of output for a region within the EAC on the Australian east coast at 10 km resolution. Figure 11.4 shows the same region at approximately half this resolution using the same parameterisation as specified by the ROAM automation.

ROAM functionality has also been extended to arbitrary polygons using a Gaussian weighting function that places high resolution at the grid centre and lower resolution at the edges. This grid uses the JIGSAW mesh generation software, which has been included in COMPAS as a library and integrated into the ROAM workflow. An example of this type of mesh over the EAC region is provided in Fig. 11.5. This mesh is nested directly into the global BlueLINK model (10 km resolution), which provides the open boundary conditions with a tide superimposed; surface momentum and heat fluxes are provided by the ACCESS meteorological model. The mesh has a mean resolution of 1830 m, with coarsest and finest resolutions of 11.5 km and 825 m respectively.
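The exact weighting used by ROAM is not reproduced here; the sketch below shows one plausible Gaussian form (parameters h_min, h_max and sigma are assumed, loosely matching the resolutions quoted above) delivering fine spacing at the mesh centre relaxing to coarse spacing at the edges.

```python
# One plausible Gaussian resolution weighting (assumed form, not COMPAS's
# exact function): fine spacing h_min at the centre, h_max at the edges.
import math

def target_resolution(d, h_min=825.0, h_max=11500.0, sigma=50e3):
    """Target edge length (m) at distance d (m) from the mesh centre."""
    w = math.exp(-0.5 * (d / sigma) ** 2)   # Gaussian weight, 1 at centre
    return h_max - (h_max - h_min) * w

print(round(target_resolution(0.0)))     # finest resolution at the centre
print(round(target_resolution(500e3)))   # relaxes to h_max far away
```

A function of this kind can be handed to JIGSAW as the spatially varying mesh-size ("h") field.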

**12. SETAS**

The JIGSAW library can be used to create coastline fitted meshes. In this case we create a mesh of south-east Tasmania, analogous to the STORM structured model; https://research.csiro.au/cem/projects/informd/. The model was initiated in August 2017 and is running in near real time with output on OpenDAP (http://oa-20-cdc.it.csiro.au:8080/opendap/cache/compas/setas/hydro/nrt). The mesh produced by JIGSAW is illustrated in Fig. 12.1, delivering mean, finest and coarsest resolutions of 385 m, 117 m and 5039 m respectively, with 10071 surface cells. The model has a runtime ratio of ~17:1 on 1 processor. This compares to a mean resolution of 215 m, 74200 surface cells and a runtime ratio of ~4:1 on 8 processors for STORM. Sample output of temperature and salinity is shown in Fig. 12.2 for midday on 5 Dec 2017, and elevation in Fig. 12.3. Visualisation is performed using Godiva3 in conjunction with the UGRID-1.0 format.

**13. National Model**

A national model with resolution of 1-2 km at the coast has been an ambition for some time. Using a structured cyclic polar grid, a pilot application has been developed: the 'Ribbon Model', https://research.csiro.au/cem/projects/ribbon-model/. This grid was difficult to construct, particularly in ensuring the grid matches perfectly at the cyclic boundary. An unstructured approach can circumvent many of the issues faced when constructing a structured grid. A mesh was created using JIGSAW with a dual weighting function focussing on resolving high tidal amplitudes in shallow water, i.e. shallow regions experiencing large tides receive high resolution and vice versa. A cascading zoom illustrating the resolution transition in such a region (the Flinders Island region) is shown in Fig. 13.1. The resolution of the whole mesh is illustrated in Fig. 13.2; the mean resolution of this mesh is 2.4 km, with finest resolution of ~400 m and coarsest of ~54 km. Example output is shown in Fig. 13.3. The flexibility of resolution placement using a weighting function in JIGSAW makes this unstructured model an ideal choice to underpin a national model.

**14. FCT**

A monotonic tracer advection scheme is required that will operate on arbitrary polygons. Following MPAS we implement the flux corrected transport scheme of Zalesak (1979), using simple 2^{nd} order fluxes for the high order solution at this stage. Comparisons of the FCT scheme for the estuary test case are shown in Fig. 14.1 with the ULTIMATE QUICKEST scheme (considered our best scheme) included for reference.
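The essence of the Zalesak (1979) limiter can be sketched in one spatial dimension (a simplification of the multidimensional polygon case; upwind low-order and centred high-order fluxes are assumed): take a monotone low-order update, then admit as much of the antidiffusive flux as possible without creating new extrema.

```python
# 1D periodic sketch of Zalesak flux-corrected transport.
import numpy as np

def fct_step(q, u, dt, dx):
    """One FCT step: upwind low-order flux plus limited antidiffusive
    (centred minus upwind) flux, after Zalesak (1979)."""
    qL, qR = q, np.roll(q, -1)                 # states either side of face i+1/2
    f_low = np.where(u > 0, u * qL, u * qR)    # monotone upwind flux
    f_high = u * 0.5 * (qL + qR)               # 2nd order centred flux
    a = f_high - f_low                         # antidiffusive flux at i+1/2

    # guaranteed-monotone low-order update
    qtd = q - dt / dx * (f_low - np.roll(f_low, 1))

    # permissible extrema from the low-order solution and its neighbours
    qmax = np.maximum.reduce([np.roll(qtd, 1), qtd, np.roll(qtd, -1)])
    qmin = np.minimum.reduce([np.roll(qtd, 1), qtd, np.roll(qtd, -1)])

    # total antidiffusive flux into and out of each cell
    ain = np.maximum(np.roll(a, 1), 0.0) - np.minimum(a, 0.0)
    aout = np.maximum(a, 0.0) - np.minimum(np.roll(a, 1), 0.0)
    eps = 1e-30
    Rp = np.minimum(1.0, (qmax - qtd) * dx / dt / (ain + eps))
    Rm = np.minimum(1.0, (qtd - qmin) * dx / dt / (aout + eps))

    # limiting factor at each face, depending on the flux direction
    C = np.where(a >= 0.0, np.minimum(np.roll(Rp, -1), Rm),
                 np.minimum(Rp, np.roll(Rm, -1)))
    F = C * a
    return qtd - dt / dx * (F - np.roll(F, 1))

q0 = np.array([0., 0., 1., 1., 1., 0., 0., 0.])
q1 = fct_step(q0, u=1.0, dt=0.4, dx=1.0)
# mass is conserved exactly and no new extrema are created
```

On the unstructured mesh the same in/out flux bookkeeping is performed over all edges of each polygon rather than two faces per cell.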

**15. High order advection**

The higher order advection scheme of Skamarock and Gassmann (2011) has been implemented, which is the scheme currently favoured by MPAS. This approach uses a least squares polynomial to approximate the second derivatives required in higher order schemes. A weighting matrix corresponding to the coefficients associated with the second derivative of the polynomial is pre-calculated using singular value decomposition, and the coefficients are recovered every time-step for each tracer via a simple vector multiplication. These higher order schemes are used with the FCT to render the scheme monotonic. A comparison of FCT schemes on the test estuary quad grid is shown in Fig. 15.1, with the ULTIMATE QUICKEST scheme included for reference.
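The pre-computation idea can be sketched in 1D (stencil and names illustrative; the actual scheme fits a 2D polynomial over the cells surrounding an edge): the SVD-based pseudoinverse of the stencil's design matrix is computed once, and thereafter the second derivative costs one dot product per tracer per time-step.

```python
# Least-squares recovery of a second derivative via a precomputed
# pseudoinverse (illustrative 1D stencil).
import numpy as np

# cell-centre offsets about an edge (illustrative stencil geometry)
x = np.array([-1.5, -0.5, 0.5, 1.5])

# design matrix for psi(x) = c0 + c1*x + c2*x^2
V = np.vander(x, 3, increasing=True)

# precompute once: SVD-based pseudoinverse; row 2 returns c2 = psi''/2
W = np.linalg.pinv(V)
w2 = 2.0 * W[2]          # weights returning the second derivative directly

# every time-step, per tracer: a single dot product
psi = x ** 2             # test field with known curvature psi'' = 2
print(np.dot(w2, psi))   # -> 2.0 (up to round-off)
```

Since the quadratic fit reproduces a quadratic field exactly, the recovered derivative is exact here; on irregular stencils it is the least squares best fit.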

The least squares approach can be extended to approximate tracer values two cells distant from a given edge. For regular quad or hex meshes these can be retrieved directly from the mesh nodes, but for arbitrary polygons a reconstruction is necessary to obtain these values. If the least squares polynomial is constructed with the coordinate axis normal to the edge, then a value at 1.5Dn must be computed, where Dn is the distance from the edge to each adjacent cell centre, i.e. x = 1.5Dn and y = 0 in the polynomial, so that only weights associated with coefficients that are a function of x need be saved. A schematic illustrating the implementation of the least squares approximation to the QUICKEST algorithm is shown in Fig. 15.2. Vector multiplications are then required to retrieve the coefficients at each time-step for each tracer, and another vector operation retrieves the tracer value. These values (either side of the edge) can then be used in higher order structured advection schemes, e.g. ULTIMATE QUICKEST (Leonard, 1991) or Van Leer (Van Leer, 1979). The higher order ULTIMATE QUICKEST and Van Leer schemes (denoted unstructured QUICKEST and VanLeer respectively) are shown in Fig. 15.3. All higher order schemes appear to provide good solutions in this case, with the unstructured QUICKEST providing possibly the best solutions, but also being computationally expensive (see Table 15.1). The 3^{rd} order FCT and unstructured QUICKEST are compared for an arbitrary polygon mesh in Fig. 15.4.

Table 15.1 – Advection scheme CPU performance

| Advection scheme | Runtime ratio | Normalized |
| --- | --- | --- |
| ULTIMATE QUICKEST structured | 7302:1 | 0.96 |
| ULTIMATE QUICKEST unstructured | 5501:1 | 0.72 |
| VANLEER structured | 7624:1 | 1 |
| VANLEER unstructured | 6316:1 | 0.83 |
| FCT ORDER2 | 7193:1 | 0.94 |
| FCT ORDER3 | 5913:1 | 0.78 |
| FCT ORDER4 | 5617:1 | 0.74 |

**16. Semi-Lagrange schemes**

SHOC contains a semi-Lagrange advection scheme using linear, 2^{nd} or 3^{rd} order interpolations. This scheme has the advantage that it is unconditionally stable, but the disadvantage that it is cast from the advective form of the conservation equation and is therefore not conservative. A flux-form semi-Lagrange (FFSL) scheme is also implemented in SHOC to achieve conservation (Gillibrand and Herzfeld, 2015). Furthermore, a transport model is available in SHOC where velocity fields stored offline can be uploaded and used to transport tracers utilizing these semi-Lagrangian schemes, with time-steps much longer than the hydrodynamic model is capable of. When this transport model is coupled to sediment transport or biogeochemical models, the runtime can decrease by orders of magnitude. It is highly desirable to include this functionality in COMPAS, and consequently the semi-Lagrangian and transport model code infrastructure has been included. In contrast to SHOC, the semi-Lagrangian streamline tracing operates on the geographic coordinates of the grid, rather than normalized distances from a grid cell centre. Interpolation schemes for the unstructured mesh consist of those available in the EMS core libraries (linear, cubic and natural neighbours) and the quadratic least squares polynomial described in Section 15. While the mechanics of the semi-Lagrange scheme and transport model are functional, as yet the interpolation schemes do not provide accurate solutions (e.g. Fig. 16.1). The natural neighbours scheme (not shown) showed promise, but is very computationally expensive. Optimisation of the polynomial used, and of the cells included in the least squares fit, may improve results for the least squares approach. An extension of the semi-Lagrange scheme to the FFSL scheme also remains to be implemented to achieve conservation.
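The basic semi-Lagrangian step can be sketched in a 1D periodic setting (linear interpolation only; the COMPAS implementation traces streamlines in geographic coordinates over the unstructured mesh): trace back from each cell centre to a departure point, then interpolate the tracer there. Note the time-step is not CFL-limited.

```python
# 1D periodic sketch of a semi-Lagrangian advection step.
import numpy as np

def semi_lagrange_step(q, u, dt, dx):
    """Trace streamlines back from cell centres and linearly interpolate."""
    n = len(q)
    xc = np.arange(n) * dx                 # cell-centre coordinates
    xd = (xc - u * dt) % (n * dx)          # departure points (periodic wrap)
    i = np.floor(xd / dx).astype(int)      # cell to the left of each point
    r = xd / dx - i                        # fractional position within cell
    return (1 - r) * q[i] + r * q[(i + 1) % n]

q = np.sin(2 * np.pi * np.arange(8) / 8)
# u*dt = 2*dx, i.e. a Courant number of 2: stable for semi-Lagrange
qn = semi_lagrange_step(q, u=1.0, dt=2.0, dx=1.0)
```

Because this example shifts the field by exactly two cells, the linear interpolation is exact; for general departure points the interpolation error is what degrades accuracy, motivating the higher order schemes discussed above.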

**17. Conclusion**

The unstructured model currently contains basic functionality that is capable of performing downscaled coastal simulations. The model produces realistic (compared to SHOC) and stable simulations. Functionality remains to be expanded, with the highest priority given to:

- Optimization of MPI,
- Implementation of the flux-form semi-Lagrangian (FFSL) transport model.

The code is publicly available on GitHub at https://github.com/csiro-coasts/EMS. Further information on COMPAS is also available on the EMS Documentation page.

**References**

Harcourt, R.R. (2015) An improved second-moment closure model of Langmuir turbulence. J. Phys. Oceanogr., 45, 84-103.

Gillibrand, P.A., Herzfeld, M. (2015) A mass-conserving advection scheme for offline simulation of tracer transport in coastal ocean models. Ocean Modelling. 101, 1-16.

Herzfeld, M. (2006). An alternative coordinate system for solving finite difference ocean models. Ocean Modelling, 14 (3-4): 174-196.

Leonard, B.P. (1991) The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. *Comp. Methods in Appl. Mech. and Eng.*, 88, 17-74.

Ringler, T., Petersen, M., Higdon, R.L., Jacobsen, D., Jones, P.W., Maltrud, M. (2013) A multi-resolution approach to global ocean modeling. Ocean Modelling, 69, 211-232. doi:10.1016/j.ocemod.2013.04.010

Ringler, T., Thuburn, J., Klemp, J., Skamarock, W. (2010) A unified approach to energy conservation and potential vorticity dynamics on arbitrarily structured C-grids. Journal of Computational Physics, 229, 3065-3090. doi:10.1016/j.jcp.2009.12.007

Skamarock, W.G., Gassmann, A. (2011) Conservative transport schemes for spherical geodesic grids: high-order flux operators for ODE-based time integration. Mon. Wea. Rev., 139, 2962-2975.

Thuburn, J., Ringler, T., Klemp, J., Skamarock, W. (2009) Numerical representation of geostrophic modes on arbitrarily structured C-grids. Journal of Computational Physics, 228(22), 8321-8335. doi:10.1016/j.jcp.2009.08.006

Van Leer, B. (1979) Towards the ultimate conservative difference scheme. V: a second order sequel to Godunov's method. *J. Comput. Phys.*, 32, 101-136.

Zalesak, S.T. (1979) Fully multidimensional flux-corrected transport algorithms for fluids. *J. Comput. Phys.*, 31, 335-362.