ParaView-Catalyst Blueprint

This page applies to the ParaView Catalyst implementation introduced in ParaView 5.9.


Background

Starting with ParaView 5.9, the ParaView distribution provides an implementation of the Catalyst API. This implementation is referred to as ParaView-Catalyst: an implementation of the Catalyst in situ API that uses ParaView for data processing and rendering.

The Catalyst in situ API comprises three main functions that are used to pass data and control from a simulation code over to the Catalyst implementation: catalyst_initialize, catalyst_execute, and catalyst_finalize. Each of these functions is passed a Conduit Node object. Conduit Node provides a flexible mechanism for describing hierarchical in-core scientific data.

Since Conduit Node is simply a light-weight container, we need to define the conventions used to communicate relevant data in each of the Catalyst API calls. Such conventions are collectively referred to as the blueprint. Each Catalyst API implementation can develop its own blueprint. ParaView-Catalyst, the Catalyst API implementation provided by ParaView, defines its own blueprint, called the ParaView-Catalyst Blueprint, which this page documents.

Protocol

A blueprint often includes several protocols, each defining the conventions for a specific use-case. Since there are three main Catalyst API functions, the blueprint currently defines three protocols, one for each of the Catalyst API functions.

In each of the Catalyst API calls, ParaView looks for a top-level node named 'catalyst'. The expected children of this node vary with the protocol and are described in the following sub-sections.

These top-level protocols make use of other internal protocols, e.g. 'channel'.

protocol: 'initialize'

Currently, the 'initialize' protocol defines how to pass the analysis scripts to load.
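A minimal sketch of one possible form, assuming analysis scripts are listed as children of 'catalyst/scripts' with each value naming a Python script to load; the child name 'script0' and the path shown are placeholders:

  catalyst/scripts/
    script0: "/path/to/analysis.py"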

Additionally, one can provide a list of pre-compiled pipelines to use.
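One plausible layout, assuming each child of 'catalyst/pipelines' describes a single pre-compiled pipeline using the 'pipeline' protocol documented later on this page; the child name 'pipeline0' is a placeholder:

  catalyst/pipelines/
    pipeline0/
      type: "io"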

In MPI-enabled builds, ParaView is by default initialized to use MPI_COMM_WORLD as the global communicator. A specific MPI communicator can be provided as follows:
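A sketch, assuming the communicator is passed as an integer handle under 'catalyst/mpi_comm' (for example, the Fortran handle obtained from MPI_Comm_c2f); the node name is an assumption, not confirmed by the text above:

  catalyst/
    mpi_comm: <integer handle, e.g. from MPI_Comm_c2f(comm)>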

protocol: 'execute'

Defines how to communicate data during each time iteration.

time/timestep/cycle: this defines temporal information about the current invocation.
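For illustration, assuming the temporal values are nested under 'catalyst/state'; the nesting and the values shown are placeholders:

  catalyst/state/
    timestep: 10
    cycle: 10
    time: 1.5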

channels: channels are used to communicate simulation data. The channels node can have one or more children, each corresponding to a named channel. A channel represents a data-source in the analysis pipeline that is linked to the data being produced by the simulation code.

The 'channel' protocol is as follows:
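As an illustrative sketch, assuming each channel carries a string 'type' naming the protocol for its 'data' node (such as the 'multimesh' protocol documented later on this page); the channel name 'grid' is a placeholder:

  catalyst/channels/
    grid/
      type: "multimesh"
      data/
        (contents per the protocol named by 'type')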

protocol: 'finalize'

Currently, this is empty.

protocol: 'pipeline'

Defines the type and parameters for a hard-coded pipeline.

When 'type' is 'io', the following attributes are supported.
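For instance, assuming an 'io' pipeline takes a 'filename' for the output file and a 'channel' naming the input channel to write; the attribute names and values here are placeholders, not confirmed by the text above:

  catalyst/pipelines/
    pipeline0/
      type: "io"
      filename: "/tmp/output.vtpd"
      channel: "grid"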

protocol: 'multimesh'

This is the protocol used for 'channel/data' when 'channel/type' is set to "multimesh".

  channel/assembly/
    AllBlocks: ["blockA", "blockB"]
    BlockA: "blockA"
    BlockB: "blockB"
    SubGroup/
      AnotherChild: "blockA"
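To tie this together, a 'multimesh' channel's 'data' node could hold one Conduit Mesh Blueprint mesh per block; the sketch below assumes a small uniform grid and uses block names matching the assembly above:

  channel/data/
    blockA/
      coordsets/coords/
        type: "uniform"
        dims/i: 3
        dims/j: 3
      topologies/mesh/
        type: "uniform"
        coordset: "coords"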