Getting Started

This chapter describes how to use GridTools to solve a (simple) PDE. We will use a fourth-order horizontal smoothing filter to explain the necessary steps to assemble a stencil from scratch. We will not go into detail in this chapter; later chapters cover each topic more thoroughly.

Our example PDE is given by

\[\frac{\partial \phi}{\partial t} = \begin{cases} - \alpha \nabla^4 \phi & z \leq z_0\\ 0 & z > z_0 \end{cases}\]

where \(\nabla^4\) is the square of the two-dimensional horizontal Laplacian, and we apply the filter only up to some height \(z_0\) (to make the example more interesting). The filter is calculated in two steps: first we calculate the Laplacian of \(\phi\)

\[L = \Delta \phi = \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) \phi\]

then we calculate the Laplacian of \(L\) as \(-\alpha \nabla^4 \phi = -\alpha \Delta L\).
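A single explicit Euler step of this PDE, with the time step absorbed into \(\alpha\), then reads

\[\phi_\text{new} = \begin{cases} \phi - \alpha \Delta L & z \leq z_0\\ \phi & z > z_0 \end{cases}\]

which is exactly the update the smoothing functors later in this chapter implement.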

We will walk through the following steps:

  • The GridTools coordinate system and its notation.

  • Storages: how GridTools manages the input and output fields.

  • The first stencil: calculating \(L\), the second order Laplacian of \(\phi\).

  • The final stencil: function calls, apply-method overloads and temporaries.

Coordinate System

For a finite difference discretization we restrict the field \(\phi \in \mathbb{R}^3\) to a discrete grid. We use the notation \(i = x_i\) and \(j = y_j\) for the horizontal dimensions and \(k = z_k\) for the vertical dimension, where \(x_i, y_j, z_k\) are the \(x,y,z\) coordinates restricted to the grid. The computation domain is defined by all grid points in our domain of interest

\[\Lambda = (i,j,k) \quad \text{with}\quad i \in \{ 0\dots N_i-1\}, j \in \{0\dots N_j-1\}, k\in\{0 \dots N_k-1\}\]

GridTools supports any number of dimensions, however the iteration is always restricted to three dimensions, and GridTools will treat one dimension, here the \(k\)-dimension, differently: the \(ij\)-plane is executed in parallel while the computation in \(k\) can be sequential. The consequence is that there must not be a dependency in \(ij\) within a stencil while there can be a dependency in \(k\). For now (this chapter) it is sufficient to just remember that the \(ij\)-plane and the \(k\)-dimension are treated differently by GridTools.
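Conceptually (this is just an illustration of the iteration policy, not GridTools code) the loop structure looks like this:

// conceptual sketch: the k-loop may be ordered (forward/backward) or parallel,
// while all (i,j) iterations within one k-level are independent of each other
for (int k = 0; k < Nk; ++k)         // possibly sequential
    for (int i = 0; i < Ni; ++i)     // parallel
        for (int j = 0; j < Nj; ++j) // parallel
            update(i, j, k);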

The calculation domain is surrounded by a boundary region as depicted in Fig. 1. Computation happens only within the calculation domain but values may be read from grid points in the boundary region.

Fig. 1 Coordinate system

Storages

In this section we will set up the fields for our example: we need a storage for the \(\phi\)-field (phi_in) and a storage for the output (phi_out).

Storages in GridTools are n-dimensional array-like objects with the following capabilities:

  • access an element with \((i,j,k)\) syntax

  • synchronization between CPU memory and a device (e.g. a CUDA capable GPU)

Backend

Since the storages (and other things later) depend on the architecture (e.g. CPU or GPU), our first step is to define the backend type, which typically looks like

using backend_t = backend::cuda;

for the CUDA Backend or

using backend_t = backend::mc;

for the CPU Backend.
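A common pattern (used in the full example later in this chapter) is to select the backend at compile time with the __CUDACC__ preprocessor flag:

#ifdef __CUDACC__
using backend_t = backend::cuda; // compiled with the CUDA compiler
#else
using backend_t = backend::mc;   // CPU
#endif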

The Storage Type

For efficient memory accesses the optimal index ordering depends on the target architecture; therefore the memory layout is decided implicitly by the backend via the storage traits, as follows.

For each storage type we need to define the type of the data we want to store in the field, e.g. double, and a storage_info type which will hold information about size, alignment, strides etc. When creating the storage_info via the storage traits we need to provide a unique identifier and the number of dimensions for the storage (typically 3).

using storage_info_t = storage_traits<backend_t>::storage_info_t<0, 3>;
using data_store_t = storage_traits<backend_t>::data_store_t<double, storage_info_t>;

At run-time a storage_info is initialized with the sizes of the field. Then a field can be instantiated with the info object.

uint_t Ni = 10;
uint_t Nj = 12;
uint_t Nk = 20;

storage_info_t info(Ni, Nj, Nk);

data_store_t phi(info, -1., "phi");
data_store_t lap(info, -1., "lap");

The first argument, the info object, is mandatory, while the other arguments are optional: an initial value and a name for the field.

Note

For each storage_info type it is recommended to use only one instantiation. The mapping between a storage and the run-time information in the storage_info has to be done at compile time via the index. Thus GridTools cannot distinguish the storages by the run-time sizes passed to the storage_info. Using data_stores with the same storage_info type but different run-time sizes in the same computation is undefined behaviour.
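For illustration, a field with different sizes therefore needs its own storage_info type with a distinct index (a sketch; the type names are ours):

// fields of different sizes need storage_info types with distinct indices
using storage_info_3d_t      = storage_traits<backend_t>::storage_info_t<0, 3>;
using storage_info_surface_t = storage_traits<backend_t>::storage_info_t<1, 3>;

storage_info_3d_t info_3d(Ni, Nj, Nk);          // for all Ni x Nj x Nk fields
storage_info_surface_t info_surface(Ni, Nj, 1); // e.g. a field with a single k-level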

We can now

  • retrieve the name of the field,

  • create a view and read and write values in the field using the parenthesis syntax,

  • synchronize data between device and host (in CUDA mode).

std::cout << phi.name() << "\n";

auto phi_view = make_host_view(phi);
phi_view(1, 2, 3) = 3.1415;
std::cout << "phi_view(1, 2, 3) = " << phi_view(1, 2, 3) << std::endl;

phi.sync();
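In CUDA mode a typical round trip looks like the following sketch (assuming the field was modified through a host view):

phi.sync(); // make the host writes visible on the device
// ... run a computation on the device ...
phi.sync(); // make the device results visible on the host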

Stencils

A stencil is a kernel that updates array elements according to a fixed access pattern.

Example: Naive 2D Laplacian

The simplest discretization of the 2D Laplacian is the finite difference five-point stencil as depicted in Fig. 2.

Fig. 2 Access pattern of a 2D Laplacian

For the calculation of the Laplacian at a given grid point we need the value at the grid point itself and its four direct neighbors along the Cartesian axis.
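With unit grid spacing (a constant spacing can be absorbed into \(\alpha\)) the update formula at a grid point reads

\[L_{i,j,k} = -4\,\phi_{i,j,k} + \phi_{i+1,j,k} + \phi_{i-1,j,k} + \phi_{i,j+1,k} + \phi_{i,j-1,k}\]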

A naive C++ implementation of the 2D Laplacian stencil looks as follows:

void laplacian(storage_view_t &lap, storage_view_t &in, int boundary_size) {
    int Ni = in.total_length<0>();
    int Nj = in.total_length<1>();
    int Nk = in.total_length<2>();
    // loop-logic: exclude the boundary, since the stencil reads neighboring points
    for (int i = boundary_size; i < Ni - boundary_size; ++i) {
        for (int j = boundary_size; j < Nj - boundary_size; ++j) {
            for (int k = boundary_size; k < Nk - boundary_size; ++k) {
                // update-logic: second-order 2D Laplacian
                lap(i, j, k) = -4.0 * in(i, j, k)                  //
                               + in(i + 1, j, k) + in(i - 1, j, k) //
                               + in(i, j + 1, k) + in(i, j - 1, k);
            }
        }
    }
}

Apart from the initialization, the stencil implementation consists of two main components:

  • Loop-logic: defines the stencil application domain and loop order

  • Update-logic: defines the update formula (here: the 2D Laplacian)

Special care has to be taken at the boundary of the domain. Since the Laplacian needs the neighboring points, we cannot calculate the Laplacian on the boundary layer and have to exclude it from the loop.

First GridTools Stencil

In GridTools the loop logic and the storage order are implemented (and optimized) by the library while the update function to be applied to each gridpoint is implemented by the user. The loop logic (for a given architecture) is combined with the user-defined update function at compile-time by template meta-programming.

Update-logic: GridTools 2D Laplacian

The update-logic is implemented with stateless functors. A GridTools functor is a struct or class providing a static method called apply. The update-logic is implemented in these apply-methods. As the functors are stateless (no member variables, static methods only) they can be passed by type, i.e. at compile time, and therefore allow for compile-time optimizations.

using namespace gridtools::expressions;

#ifdef __CUDACC__
using backend_t = backend::cuda;
#else
using backend_t = backend::mc;
#endif

using storage_info_t = storage_traits<backend_t>::storage_info_t<0, 3, halo<1, 1, 0>>;
using data_store_t = storage_traits<backend_t>::data_store_t<double, storage_info_t>;

constexpr static gridtools::dimension<1> i;
constexpr static gridtools::dimension<2> j;
constexpr static gridtools::dimension<3> k;

struct lap_function {
    using in = in_accessor<0, extent<-1, 1, -1, 1>>;
    using lap = inout_accessor<1>;

    using param_list = make_param_list<in, lap>;

    template <typename Evaluation>
    GT_FUNCTION static void apply(Evaluation const &eval) {
        eval(lap(i, j, k)) = -4. * eval(in(i, j, k)) //
                             + eval(in(i + 1, j, k)) //
                             + eval(in(i, j + 1, k)) //
                             + eval(in(i - 1, j, k)) //
                             + eval(in(i, j - 1, k));
    }
};

In addition to the apply-method, the functor contains accessors. These two accessors are parameters of the functor, i.e. they are mapped to fields passed to the functor. They contain compile-time information on whether they are used only as input parameters, e.g. the in accessor in the example, or whether we want to write into the associated field (inout). Additionally, the extent defines which grid points are needed by the stencil relative to the current point. The format for the extent is

extent<i_minus, i_plus, j_minus, j_plus, k_minus, k_plus>

where i_minus and i_plus define an interval on the \(i\)-axis relative to the current position; i_minus is the negative offset, i.e. zero or a negative number, while i_plus is the positive offset. Analogously for \(j\) and \(k\). In the Laplacian example, the first two numbers in the extent of the in accessor define that we want to access the field at \(i-1\), \(i\) and \(i+1\). The accessor type and the extent are needed for a dependency analysis in the compile-time optimizations for more complex stencils. (For example, the computation domain needs to be extended when we calculate the Laplacian of the Laplacian later. This is done automatically by the library.)
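Unspecified trailing extent arguments default to zero, so the Laplacian's extent can be written in either of these equivalent ways (a sketch):

// the two declarations are equivalent: the k-extent defaults to <0, 0>
using in_short = in_accessor<0, extent<-1, 1, -1, 1>>;
using in_full  = in_accessor<0, extent<-1, 1, -1, 1, 0, 0>>;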

The first template argument is an index defining the order of the parameters, i.e. the order in which the fields are passed to the functor. The param_list is a GridTools keyword which has to be defined for each stencil, and should contain the list of accessors.

An apply-method needs as its first parameter a context object, usually called eval, which is created and passed to the method by the library on invocation. This object contains, among other things, the index of the active grid point (Iteration Point) and the mapping of data-pointers to the accessors. The second argument is optional and specifies the interval on the \(k\)-axis where this overload of the apply-method should be executed. This makes it possible to apply a different update-logic on Vertical Intervals by overloading the apply-method. We will define Vertical Intervals later. If the second parameter is not specified, a default interval is assumed.

The body of the apply-method looks quite similar to the one in the naive implementation, except that each field access has to be wrapped by a call to the context object eval. This is necessary to map the compile-time parameter, the accessor, to the run-time data in the data_store.

Calling the Stencil

In the naive implementation, the call to the laplacian is as simple as

int boundary_size = 1;
laplacian(lap, phi, boundary_size);

since it already contains all the information: the update-logic and the loop-logic.

The GridTools stencil does not contain any information about the loop-logic, i.e. about the domain where we want to apply the stencil operation, since we need to specify it in a platform-independent syntax, a domain-specific embedded language (DSEL), such that the Backend can decide on the specific implementation.

For our example this looks as follows

uint_t Ni = 10;
uint_t Nj = 12;
uint_t Nk = 20;

storage_info_t info(Ni, Nj, Nk);

data_store_t phi(info, -1., "phi");
data_store_t lap(info, -1., "lap");

using arg_phi = arg<0, data_store_t>;
using arg_lap = arg<1, data_store_t>;

int halo_size = 1;
halo_descriptor boundary_i(halo_size, halo_size, halo_size, Ni - halo_size - 1, Ni);
halo_descriptor boundary_j(halo_size, halo_size, halo_size, Nj - halo_size - 1, Nj);
auto my_grid = make_grid(boundary_i, boundary_j, Nk);

auto laplacian = make_computation<backend_t>(          //
    my_grid,                                           //
    make_multistage(                                   //
        execute::parallel(),                           //
        make_stage<lap_function>(arg_phi(), arg_lap()) //
        ));                                            //

laplacian.run(arg_phi{} = phi, arg_lap{} = lap);

With the two arg declarations we define placeholders for the fields.

The halo_descriptors and the call to make_grid set up the physical dimensions of the problem. First we define which points on the \(i\)- and \(j\)-axis belong to the computational domain and which points belong to the boundary (or a padding region). For now it is enough to know that these lines define a region with a boundary of size 1 surrounding the \(ij\)-plane. The last argument of make_grid defines the layers in \(k\); in this case we have only one interval. We will discuss the details later.

The call to make_computation creates the stencil object. We pass the grid (the information about the loop bounds) and a so-called multistage. The multistage contains a single stage, our Laplacian functor.

In more complex codes we can combine multiple \(k\)-independent stages in one multistage. If we have a \(k\)-dependency we have to split the computation into multiple multistages.

The statement execute::parallel() defines that all \(ij\)-layers can be calculated independently. Other execution modes are forward and backward. For performance reasons parallel should be used whenever possible.

In the last line the stencil is executed. The Data Stores phi and lap are bound to their placeholders.
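Our smoothing example has no \(k\)-dependency, but to illustrate a sequential mode, a hypothetical vertical integration (not part of this example; all names below are ours) could look like the following sketch:

// hypothetical sketch: a stage that accumulates a column sum;
// reading out at k-1 creates a k-dependency, so it needs execute::forward()
// (a real implementation would initialize the first level in a separate
// interval overload of the apply-method)
struct column_sum {
    using in = in_accessor<0>;
    using out = inout_accessor<1, extent<0, 0, 0, 0, -1, 0>>;

    using param_list = make_param_list<in, out>;

    template <typename Evaluation>
    GT_FUNCTION static void apply(Evaluation &eval) {
        eval(out(i, j, k)) = eval(out(i, j, k - 1)) + eval(in(i, j, k));
    }
};

// ... make_multistage(execute::forward(), make_stage<column_sum>(arg_in(), arg_out())) ...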

Full GridTools Laplacian

The full working example looks as follows:

#include <gridtools/stencil_composition/stencil_composition.hpp>
#include <gridtools/storage/storage_facility.hpp>

#include <gridtools/stencil_composition/accessor.hpp>

using namespace gridtools;
using namespace gridtools::expressions;

#ifdef __CUDACC__
using backend_t = backend::cuda;
#else
using backend_t = backend::mc;
#endif

using storage_info_t = storage_traits<backend_t>::storage_info_t<0, 3, halo<1, 1, 0>>;
using data_store_t = storage_traits<backend_t>::data_store_t<double, storage_info_t>;

constexpr static gridtools::dimension<1> i;
constexpr static gridtools::dimension<2> j;
constexpr static gridtools::dimension<3> k;

struct lap_function {
    using in = in_accessor<0, extent<-1, 1, -1, 1>>;
    using lap = inout_accessor<1>;

    using param_list = make_param_list<in, lap>;

    template <typename Evaluation>
    GT_FUNCTION static void apply(Evaluation const &eval) {
        eval(lap(i, j, k)) = -4. * eval(in(i, j, k)) //
                             + eval(in(i + 1, j, k)) //
                             + eval(in(i, j + 1, k)) //
                             + eval(in(i - 1, j, k)) //
                             + eval(in(i, j - 1, k));
    }
};

int main() {
    uint_t Ni = 10;
    uint_t Nj = 12;
    uint_t Nk = 20;

    storage_info_t info(Ni, Nj, Nk);

    data_store_t phi(info, -1., "phi");
    data_store_t lap(info, -1., "lap");

    using arg_phi = arg<0, data_store_t>;
    using arg_lap = arg<1, data_store_t>;

    int halo_size = 1;
    halo_descriptor boundary_i(halo_size, halo_size, halo_size, Ni - halo_size - 1, Ni);
    halo_descriptor boundary_j(halo_size, halo_size, halo_size, Nj - halo_size - 1, Nj);
    auto my_grid = make_grid(boundary_i, boundary_j, Nk);

    auto laplacian = make_computation<backend_t>(          //
        my_grid,                                           //
        make_multistage(                                   //
            execute::parallel(),                           //
            make_stage<lap_function>(arg_phi(), arg_lap()) //
            ));                                            //

    laplacian.run(arg_phi{} = phi, arg_lap{} = lap);
}

There are some points which we did not discuss so far. For a first look at GridTools these can be considered fixed patterns and we won’t discuss them now in detail. In brief:

  • In order to use the \((i,j,k)\) syntax we need to define the symbols i, j and k pointing to the respective dimensions.

  • A common pattern is to use the preprocessor flag __CUDACC__ to distinguish between CPU and GPU code. We use this to set the Backend.

The code example can be compiled using the following simple CMake script (requires an installation of GridTools, see Installation and Use).

cmake_minimum_required(VERSION 3.12.4)

project(GridTools-laplacian LANGUAGES CXX)

find_package(GridTools 0.22.0 REQUIRED)

add_executable(gt_laplacian test_gt_laplacian.cpp)
target_link_libraries(gt_laplacian GridTools::gridtools)

Assembling Stencils: Smoothing Filter

In the preceding section we saw how a first simple GridTools stencil is defined and executed. In this section we will use this stencil to compute our example PDE. A naive implementation could look as follows:

void naive_smoothing(storage_view_t &out, storage_view_t &in, double alpha, int kmax) {
    int lap_boundary = 1;
    int full_boundary = 2;

    int Ni = in.total_length<0>();
    int Nj = in.total_length<1>();
    int Nk = in.total_length<2>();

    // Instantiate temporary fields
    storage_info_t info(Ni, Nj, Nk);
    data_store_t lap_storage(info);
    auto lap = make_host_view(lap_storage);
    data_store_t laplap_storage(info);
    auto laplap = make_host_view(laplap_storage);

    // laplacian of phi
    laplacian(lap, in, lap_boundary);
    // laplacian of lap
    laplacian(laplap, lap, full_boundary);

    for (int i = full_boundary; i < Ni - full_boundary; ++i) {
        for (int j = full_boundary; j < Nj - full_boundary; ++j) {
            for (int k = full_boundary; k < Nk - full_boundary; ++k) {
                if (k < kmax)
                    out(i, j, k) = in(i, j, k) - alpha * laplap(i, j, k);
                else
                    out(i, j, k) = in(i, j, k);
            }
        }
    }
}

For the GridTools implementation we will learn three things in this section: how to define special regions in the \(k\)-direction, how to use GridTools temporaries, and how to call functors from functors.

apply-method overload

Our first GridTools implementation will be very close to the naive implementation: we will call the Laplacian functor from the previous section twice and store the results in two extra fields. Then we will call a third functor to compute the final result. This functor shows how we can specialize the computation in the \(k\)-direction:

struct smoothing_function_1 {
    using phi = in_accessor<0>;
    using laplap = in_accessor<1>;
    using out = inout_accessor<2>;

    using param_list = make_param_list<phi, laplap, out>;

    constexpr static double alpha = 0.5;

    template <typename Evaluation>
    GT_FUNCTION static void apply(Evaluation &eval, lower_domain) {
        eval(out(i, j, k)) = eval(phi(i, j, k)) //
                             - alpha * eval(laplap(i, j, k));
    }

    template <typename Evaluation>
    GT_FUNCTION static void apply(Evaluation &eval, upper_domain) {
        eval(out(i, j, k)) = eval(phi(i, j, k));
    }
};

We use two different Vertical Intervals, the lower_domain and the upper_domain, and provide an overload of the Apply-Method for each interval.

The Vertical Intervals are defined as

using axis_t = axis<2>;
using lower_domain = axis_t::get_interval<0>;
using upper_domain = axis_t::get_interval<1>;

The first line defines an axis with two Vertical Intervals. From this axis we retrieve the Vertical Intervals and give them names.
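The number of levels in each interval is set at run time when the axis is instantiated; for the instantiation used below, a sketch of the convention (assuming \(k_\text{max} \le N_k\)):

// the constructor arguments are the number of levels per interval:
// lower_domain covers levels 0 .. kmax-1, upper_domain covers kmax .. Nk-1
auto my_axis = axis_t{kmax, Nk - kmax};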

Then we can assemble the computation

data_store_t phi(info);
data_store_t phi_new(info);
data_store_t lap(info);
data_store_t laplap(info);

using arg_phi = arg<0, data_store_t>;
using arg_phi_new = arg<1, data_store_t>;
using arg_lap = arg<2, data_store_t>;
using arg_laplap = arg<3, data_store_t>;

halo_descriptor boundary_i(halo_size, halo_size, halo_size, Ni - halo_size - 1, Ni);
halo_descriptor boundary_j(halo_size, halo_size, halo_size, Nj - halo_size - 1, Nj);
auto my_grid = make_grid(boundary_i, boundary_j, axis_t{kmax, Nk - kmax});

auto smoothing = make_computation<backend_t>(              //
    my_grid,                                               //
    make_multistage(                                       //
        execute::parallel(),                               //
        make_stage<lap_function>(arg_phi(), arg_lap()),    //
        make_stage<lap_function>(arg_lap(), arg_laplap()), //
        make_stage<smoothing_function_1>(                  //
            arg_phi(),                                     //
            arg_laplap(),                                  //
            arg_phi_new())                                 //
        ));                                                //

smoothing.run(arg_phi{} = phi, arg_lap{} = lap, arg_laplap{} = laplap, arg_phi_new{} = phi_new);

In this version we needed to explicitly allocate the temporary fields lap and laplap. In the next section we will learn about GridTools temporaries.

GridTools Temporaries

GridTools temporary storages are storages with the lifetime of the computation, i.e. they can be used by different stages assembled in one make_computation call. This is exactly what we need for the lap and laplap fields.

Note

Note that temporaries are not allocated explicitly and we cannot access them from outside of the computation. Therefore, sometimes it might be necessary to replace a temporary by a normal storage for debugging.

To use temporary storages we don’t need to change the functors or the make_computation. We just have to replace the arg type by a tmp_arg. We don’t need the explicit instantiations any more and don’t have to bind the tmp_arg to storages. The new code looks as follows

data_store_t phi(info);
data_store_t phi_new(info);

using arg_phi = arg<0, data_store_t>;
using arg_phi_new = arg<1, data_store_t>;
using arg_lap = tmp_arg<2, data_store_t>;
using arg_laplap = tmp_arg<3, data_store_t>;

halo_descriptor boundary_i(halo_size, halo_size, halo_size, Ni - halo_size - 1, Ni);
halo_descriptor boundary_j(halo_size, halo_size, halo_size, Nj - halo_size - 1, Nj);
auto my_grid = make_grid(boundary_i, boundary_j, axis_t{kmax, Nk - kmax});

auto smoothing = make_computation<backend_t>(              //
    my_grid,                                               //
    make_multistage(                                       //
        execute::parallel(),                               //
        make_stage<lap_function>(arg_phi(), arg_lap()),    //
        make_stage<lap_function>(arg_lap(), arg_laplap()), //
        make_stage<smoothing_function_1>(                  //
            arg_phi(),                                     //
            arg_laplap(),                                  //
            arg_phi_new())                                 //
        ));                                                //

smoothing.run(arg_phi{} = phi, arg_phi_new{} = phi_new);

The temporary storages are allocated in the call to make_computation and freed in the destructor of the computation. Besides the simplifications in the code (no explicit storage needed), the concept of temporaries allows GridTools to apply optimizations. While normal storages have a fixed size, temporaries can have block-private Halos which are used for redundant computation.

Note

It might be semantically incorrect to replace a temporary with a normal storage, as normal storages don’t have the Halo region for redundant computation. In such cases several threads (OpenMP or CUDA) will write to the same location multiple times. As long as all threads write the same data (which is a requirement for correctness of GridTools), this should be no problem for correctness on current hardware (this might change in the future) but might have side effects on performance.

Note

This change from normal storages to temporaries did not require any code changes to the functor.

Functor Calls

The next feature we want to use is the stencil function call. In the first example we computed the Laplacian and the Laplacian of the Laplacian explicitly and stored the intermediate values in the temporaries. Stencil function calls allow us to do the computation on the fly and to get rid of the temporaries.

Note

Note that this is not necessarily a performance optimization. It might well be that the version with temporaries is actually the faster one.

In the following we will remove only one of the temporaries. Instead of calling the Laplacian twice from the make_computation, we will move one of the calls into the smoothing functor. The new smoothing functor looks as follows

struct smoothing_function_3 {
    using phi = in_accessor<0>;
    using lap = in_accessor<1, extent<-1, 1, -1, 1>>;
    using out = inout_accessor<2>;

    using param_list = make_param_list<phi, lap, out>;

    constexpr static double alpha = 0.5;

    template <typename Evaluation>
    GT_FUNCTION static void apply(Evaluation &eval, lower_domain) {
        eval(out(i, j, k)) = eval(phi(i, j, k)) - alpha *                //
                                                      call<lap_function> //
                                                      ::with(eval, lap());
    }

    template <typename Evaluation>
    GT_FUNCTION static void apply(Evaluation &eval, upper_domain) {
        eval(out(i, j, k)) = eval(phi(i, j, k));
    }
};

In call we specify the functor which we want to apply. In with, the eval is forwarded, followed by all the input arguments for the functor. The functor in the call is required to have exactly one inout_accessor, which will be the return value of the call. Note that smoothing_function_3 still needs to specify the extents explicitly; for functor calls they cannot be inferred automatically.
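To make the return-value convention concrete, a sketch (the local variable is ours):

// the single inout_accessor of lap_function acts as the return value,
// so the result of the nested call can be used like an ordinary expression
double lap_of_lap = call<lap_function>::with(eval, lap());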

One of the make_stage<lap_function> calls has now moved inside the functor, therefore the new call to make_computation is just

data_store_t phi(info);
data_store_t phi_new(info);

using arg_phi = arg<0, data_store_t>;
using arg_phi_new = arg<1, data_store_t>;
using arg_lap = tmp_arg<2, data_store_t>;

halo_descriptor boundary_i(halo_size, halo_size, halo_size, Ni - halo_size - 1, Ni);
halo_descriptor boundary_j(halo_size, halo_size, halo_size, Nj - halo_size - 1, Nj);
auto my_grid = make_grid(boundary_i, boundary_j, axis_t{kmax, Nk - kmax});

auto smoothing = make_computation<backend_t>(           //
    my_grid,                                            //
    make_multistage(                                    //
        execute::parallel(),                            //
        make_stage<lap_function>(arg_phi(), arg_lap()), //
        make_stage<smoothing_function_3>(               //
            arg_phi(),                                  //
            arg_lap(),                                  //
            arg_phi_new())                              //
        ));                                             //

smoothing.run(arg_phi{} = phi, arg_phi_new{} = phi_new);

The attentive reader may have noticed that our first versions did more work than needed: we calculated the Laplacian of the Laplacian of phi (\(\Delta \Delta \phi\)) for all \(k\)-levels; however, we used it only for \(k<k_\text{max}\). In this version we do a bit better: we still calculate the Laplacian (\(L = \Delta \phi\)) for all levels but we only calculate \(\Delta L\) for the levels where we need it.