# Writing and Updating Clusters
This guide provides a comprehensive walkthrough for creating a new Matter cluster implementation, referred to as a “code-driven” cluster.
## Overview of the Process
Writing a new cluster involves the following key stages:

1. **Define the Cluster**: Generate or update the cluster definition XML based on the Matter specification.
2. **Implement the Cluster**: Write the C++ implementation for the cluster's logic and data management.
3. **Integrate with the Build System**: Add the necessary files to integrate the new cluster into the build process.
4. **Integrate with the Application**: Connect the cluster to an application's code generation configuration.
5. **Test**: Add unit and integration tests to verify the cluster's functionality.
## Part 1: Cluster Definition (XML)

Clusters are defined based on the Matter specification. The C++ code for them is generated from XML definitions located in `src/app/zap-templates/zcl/data-model/chip`.

1. **Generate XML**: To create or update a cluster XML, use Alchemy to parse the specification's asciidoc. Manual editing of the XML is discouraged, as it is error-prone.
2. **Run Code Generation**: Once the XML is ready, run the code generation script. It is often sufficient to run:

   ```shell
   ./scripts/run_in_build_env.sh 'scripts/tools/zap_regen_all.py'
   ```

For more details, see the code generation guide.
## Part 2: C++ Implementation

### File Structure
Create a new directory for your cluster at `src/app/clusters/<cluster-directory>/`. This directory will house the cluster implementation and its unit tests.

For zap-based support, the directory mapping is defined in `src/app/zap_cluster_list.json` under the `ServerDirectories` key. This maps the `UPPER_SNAKE_CASE` define of the cluster to the directory name under `src/app/clusters`.
### Naming conventions

Names vary; however, to be consistent with most of the existing code, use:

- `cluster-name-server` for the cluster directory name
- `ClusterNameSnakeCluster.h/cpp` for the `ServerClusterInterface` implementation
- `ClusterNameSnakeLogic.h/cpp` for the `Logic` implementation, if applicable
### Recommended Modular Layout
For better testability and maintainability, we recommend splitting the implementation into logical components. The Software Diagnostics cluster is a good example of this pattern.
- **`ClusterLogic`**:
  - A type-safe class containing the core business logic of the cluster.
  - Manages all attribute storage.
  - Should be thoroughly unit-tested.
- **`ClusterImplementation`**:
  - Implements the `ServerClusterInterface` (often by deriving from `DefaultServerCluster`).
  - Acts as a translation layer between the data model (encoders/decoders) and the `ClusterLogic`.
- **`ClusterDriver`** (or `Delegate`):
  - An optional interface providing callbacks to the application for cluster interactions. We recommend the term `Driver` to avoid confusion with the overloaded term `Delegate`.
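As an illustration, this split can be sketched in plain C++. The `ExampleClusterLogic` class, the level attribute, and its 0–100 range are hypothetical, and a plain wrapper class stands in for the SDK's `ServerClusterInterface`/`DefaultServerCluster`:

```cpp
#include <cassert>
#include <cstdint>

// ClusterLogic: pure business logic with no data-model dependencies,
// so it can be unit-tested in isolation.
class ExampleClusterLogic
{
public:
    // Returns false if the value is outside the (hypothetical) valid range.
    bool SetLevel(uint8_t level)
    {
        if (level > 100)
        {
            return false;
        }
        mLevel = level;
        return true;
    }

    uint8_t GetLevel() const { return mLevel; }

private:
    uint8_t mLevel = 0;
};

// ClusterImplementation: translates data-model requests into logic calls.
// In the SDK this would derive from DefaultServerCluster and implement
// ServerClusterInterface; here it is a plain class for illustration.
class ExampleCluster
{
public:
    bool WriteLevel(uint8_t level) { return mLogic.SetLevel(level); }
    uint8_t ReadLevel() const { return mLogic.GetLevel(); }

private:
    ExampleClusterLogic mLogic;
};
```

Because the logic class has no data-model dependencies, its validation rules can be tested directly, while the implementation stays a thin translation layer.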
### BUILD file layout

The description below covers the build files under `src/app/clusters/<cluster-directory>/`. You are expected to have the following items:
#### `BUILD.gn`

This file contains a target named `<cluster-directory>`, usually a `source_set`. It is referenced from `src/app/chip_data_model.gni` by adding a dependency as `deps += [ "${_app_root}/clusters/${cluster}" ]`, so the default target name is important.
#### `app_config_dependent_sources`

There are two code generation integration support files: one for GN and one for CMake. The way these work is that `chip_data_model.gni`/`chip_data_model.cmake` will include these files and bundle ALL referenced sources into ONE SINGLE SOURCE SET, together with ember code-generated settings (e.g. `endpoint_config.h` and similar files that are application-specific).

As a result, there will be a difference between `.gni` and `.cmake`:
- `app_config_dependent_sources.gni` will typically contain just `CodegenIntegration.cpp` and any other helper/compatibility layers (e.g. `CodegenIntegration.h`, if applicable).
- `app_config_dependent_sources.cmake` will contain all the files that the `.gni` file contains PLUS any dependencies that the `BUILD.gn` would pull in but CMake would not (i.e. dependencies not in the `libCHIP` builds). These extra files are often the `*.h`/`*.cpp` files that were in the `BUILD.gn` source set.
Example taken from `src/app/clusters/basic-information`:

```gn
# BUILD.gn
import("//build_overrides/build.gni")
import("//build_overrides/chip.gni")

source_set("basic-information") {
  sources = [ ... ]
  public_deps = [ ... ]
}
```

```gn
# app_config_dependent_sources.gni
app_config_dependent_sources = [ "CodegenIntegration.cpp" ]
```

```cmake
# app_config_dependent_sources.cmake
# This block adds the codegen integration sources, similar to app_config_dependent_sources.gni
TARGET_SOURCES(
  ${APP_TARGET}
  PRIVATE
    "${CLUSTER_DIR}/CodegenIntegration.cpp"
)

# These are the things that BUILD.gn dependencies would pull
TARGET_SOURCES(
  ${APP_TARGET}
  PRIVATE
    "${CLUSTER_DIR}/BasicInformationCluster.cpp"
    "${CLUSTER_DIR}/BasicInformationCluster.h"
)
```
### Implementation Details

#### Attribute and Feature Handling
Your implementation must correctly report which attributes and commands are available based on the enabled features and optional items.

- Use a feature map to control elements dependent on features.
- Use boolean flags or `BitFlags` for purely optional elements.
- Ensure your unit tests cover different combinations of enabled features and optional attributes/commands.
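A minimal sketch of feature-gated availability follows. The `Feature` enum and its Watermarks bit are hypothetical, and a plain bitmask stands in for the SDK's `BitFlags`:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical feature bits for an example cluster.
enum class Feature : uint32_t
{
    kWatermarks = 0x1,
};

class ExampleLogic
{
public:
    explicit ExampleLogic(uint32_t featureMap) : mFeatureMap(featureMap) {}

    // An attribute gated by a feature is reported as present only when
    // the corresponding feature-map bit is set.
    bool SupportsWatermarks() const
    {
        return (mFeatureMap & static_cast<uint32_t>(Feature::kWatermarks)) != 0;
    }

private:
    const uint32_t mFeatureMap;
};
```

Unit tests can then construct the logic with different feature maps and verify the reported attribute and command sets for each combination.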
#### Attribute Change Notifications

For subscriptions to work correctly, you must notify the system whenever an attribute's value changes.

- The `Startup` method of your cluster receives a `ServerClusterContext`.
- Use the context to call `interactionContext->dataModelChangeListener->MarkDirty(path)`. A `NotifyAttributeChanged` helper exists for paths managed by this cluster.
- For write implementations, you can use `NotifyAttributeChangedIfSuccess` together with a separate `WriteImpl`, so that any successful attribute write triggers a notification. Canonical example code would look like:

```cpp
DataModel::ActionReturnStatus SomeCluster::WriteAttribute(const DataModel::WriteAttributeRequest & request,
                                                          AttributeValueDecoder & decoder)
{
    // Delegate everything to WriteImpl. If the write succeeds, notify that the attribute changed.
    return NotifyAttributeChangedIfSuccess(request.path.mAttributeId, WriteImpl(request, decoder));
}
```

- For `NotifyAttributeChangedIfSuccess`, ensure that `WriteImpl` returns `ActionReturnStatus::FixedStatus::kWriteSuccessNoOp` when no notification should be sent (e.g. the write was a no-op because the existing value was already the same). The canonical example is:

```cpp
VerifyOrReturnValue(mValue != value, ActionReturnStatus::FixedStatus::kWriteSuccessNoOp);
```
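The pattern above can be combined into a self-contained sketch. Here `Status` and `NotifyIfSuccess` are simplified stand-ins for the SDK's `ActionReturnStatus` and `NotifyAttributeChangedIfSuccess`, and the level attribute with its 0–100 range is hypothetical:

```cpp
#include <cassert>
#include <cstdint>

// Simplified stand-in for ActionReturnStatus.
enum class Status
{
    kSuccess,
    kWriteSuccessNoOp, // write succeeded but the value was unchanged
    kFailure,
};

class Cluster
{
public:
    Status WriteLevel(uint8_t value) { return NotifyIfSuccess(WriteImpl(value)); }
    int NotificationCount() const { return mNotifications; }

private:
    Status WriteImpl(uint8_t value)
    {
        if (value > 100)
        {
            return Status::kFailure;
        }
        if (mLevel == value)
        {
            // No change: succeed, but signal that no notification is needed.
            return Status::kWriteSuccessNoOp;
        }
        mLevel = value;
        return Status::kSuccess;
    }

    // Stand-in for NotifyAttributeChangedIfSuccess: only a real state
    // change marks the attribute dirty for subscribers.
    Status NotifyIfSuccess(Status status)
    {
        if (status == Status::kSuccess)
        {
            ++mNotifications; // stand-in for MarkDirty / NotifyAttributeChanged
        }
        return status;
    }

    uint8_t mLevel     = 0;
    int mNotifications = 0;
};
```

The key property is that a rejected write and a no-op write both leave the subscription machinery untouched; only a successful state change produces a report.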
#### Persistent Storage

- **Attributes**: For scalar attribute values, use `AttributePersistence` from `src/app/persistence/AttributePersistence.h`. The `ServerClusterContext` provides an `AttributePersistenceProvider`.
- **General Storage**: For non-attribute data, the context provides a `PersistentStorageDelegate`.
#### Optimizing for Flash/RAM

For common or large clusters, you may need to optimize for resource usage. Consider using C++ templates to select features and attributes at compile time, which can significantly reduce flash and RAM footprint.
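For instance, template specialization can elide the storage for a disabled option entirely, so disabled configurations pay no RAM cost. The watermark attribute in this sketch is hypothetical:

```cpp
#include <cassert>
#include <cstdint>

// Primary template, selected by a compile-time feature flag.
template <bool kEnableWatermarks>
class ExampleStorage;

// Specialization without the optional attribute: no storage at all.
template <>
class ExampleStorage<false>
{
public:
    static constexpr bool HasWatermarks() { return false; }
};

// Specialization with the optional attribute and its backing storage.
template <>
class ExampleStorage<true>
{
public:
    static constexpr bool HasWatermarks() { return true; }
    void SetWatermark(uint64_t v) { mWatermark = v; }
    uint64_t GetWatermark() const { return mWatermark; }

private:
    uint64_t mWatermark = 0;
};
```

An application then instantiates `ExampleStorage<true>` or `ExampleStorage<false>` based on its configuration; the disabled variant compiles to an empty class.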
#### Unit Testing

- Unit tests should reside in `src/app/clusters/<cluster-name>/tests/`.
- At a minimum, `ClusterLogic` should be fully tested, including its behavior with different feature configurations.
- `ClusterImplementation` can also be unit-tested if its logic is complex. Otherwise, integration tests should provide sufficient coverage.
## Part 3: Build and Application Integration

### Build System Integration

The build system maps cluster names to their source directories. Add your new cluster to this mapping: edit `src/app/zap_cluster_list.json` and add an entry for your cluster, pointing to the directory you created.
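For illustration, an entry might look like the fragment below; `SAMPLE_CLUSTER` and `sample-cluster-server` are placeholders, so mirror the exact shape of the existing entries in `zap_cluster_list.json`:

```json
{
  "ServerDirectories": {
    "SAMPLE_CLUSTER": ["sample-cluster-server"]
  }
}
```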
### Application Integration (`CodegenIntegration.cpp`)

To integrate your cluster with an application's `.zap` file configuration, you need to bridge the gap between the statically generated code and your C++ implementation.
1. **Create `CodegenIntegration.cpp`**: This file will contain the integration logic.
2. **Create Build Files**: Add `app_config_dependent_sources.gni` and `app_config_dependent_sources.cmake` to your cluster directory. These files should list `CodegenIntegration.cpp` and its dependencies. See existing clusters for examples.
3. **Use Generated Configuration**: The code generator creates a header file at `app/static-cluster-config/<cluster-name>.h` that provides static, application-specific configuration. Use this to initialize your cluster correctly for each endpoint.
4. **Implement Callbacks**: Implement `Matter<Cluster>ClusterInitCallback(EndpointId)` and `Matter<Cluster>ClusterShutdownCallback(EndpointId)` in your `CodegenIntegration.cpp`.
5. **Update `config-data.yaml`**: To enable these callbacks, add your cluster to the `CodeDrivenClusters` array in `src/app/common/templates/config-data.yaml`.
6. **Update ZAP Configuration**: To prevent the Ember framework from allocating memory for your cluster's attributes (which are now managed by your `ClusterLogic`), you must:
   - In `src/app/common/templates/config-data.yaml`, consider adding your cluster to `CommandHandlerInterfaceOnlyClusters` if it does not need Ember command dispatch.
   - In `src/app/zap-templates/zcl/zcl.json` and `zcl-with-test-extensions.json`, add all non-list attributes of your cluster to `attributeAccessInterfaceAttributes`. This marks them as externally handled.
Once `config-data.yaml` and `zcl.json`/`zcl-with-test-extensions.json` are updated, run the ZAP regeneration command:

```shell
./scripts/run_in_build_env.sh 'scripts/tools/zap_regen_all.py'
```
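For illustration, an `attributeAccessInterfaceAttributes` entry in `zcl.json` has roughly the following shape; the cluster and attribute names below are placeholders, so use the exact spec names of your cluster's non-list attributes:

```json
{
  "attributeAccessInterfaceAttributes": {
    "Sample Cluster": ["SampleAttribute", "AnotherAttribute"]
  }
}
```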
## Part 4: Example Application and Integration Testing
- Write unit tests to ensure cluster test coverage.
- **Integrate into an Example**: Add your cluster to an example application, such as the `all-clusters-app`, to test it in a real-world scenario.
- Use tools such as `chip-tool` or `matter-repl` to manually validate the cluster.
- **Add Integration Tests**: Write integration tests to validate the end-to-end functionality of your cluster against the example application.