# Writing and Updating Clusters
This guide provides a comprehensive walkthrough for creating a new Matter cluster implementation, referred to as a “code-driven” cluster.
## Overview of the Process
Writing a new cluster involves the following key stages:
1. **Define the Cluster**: Generate or update the cluster definition XML based on the Matter specification.
2. **Implement the Cluster**: Write the C++ implementation for the cluster's logic and data management.
3. **Integrate with the Build System**: Add the necessary files to integrate the new cluster into the build process.
4. **Integrate with the Application**: Connect the cluster to an application's code generation configuration.
5. **Test**: Add unit and integration tests to verify the cluster's functionality.
## Part 1: Cluster Definition (XML)
Clusters are defined based on the Matter specification. The C++ code for them is generated from XML definitions located in `src/app/zap-templates/zcl/data-model/chip`.
1. **Generate XML**: To create or update a cluster XML, use Alchemy to parse the specification's asciidoc. Manual editing of XML is discouraged, as it is error-prone.
2. **Run Code Generation**: Once the XML is ready, run the code generation script. It's often sufficient to run:

   ```shell
   ./scripts/run_in_build_env.sh 'scripts/tools/zap_regen_all.py'
   ```
For more details, see the code generation guide.
## Part 2: C++ Implementation

### File Structure

Create a new directory for your cluster at `src/app/clusters/<cluster-directory>/`. This directory will house the cluster implementation and its unit tests.
For zap-based support, the directory mapping is defined in `src/app/zap_cluster_list.json` under the `ServerDirectories` key. This maps the UPPER_SNAKE_CASE define of the cluster to the directory name under `src/app/clusters`.
### Naming conventions

Names vary; however, to be consistent with most of the existing code, use:

- `cluster-name-server` for the cluster directory name
- `ClusterNameSnakeCluster.h/cpp` for the `ServerClusterInterface` implementation
- `ClusterNameSnakeLogic.h/cpp` for the `Logic` implementation, if applicable
### Recommended Modular Layout

For better testability and maintainability, we recommend splitting the implementation into logical components. The Software Diagnostics cluster is a good example of this pattern; a minimal sketch of the layout follows the list below.
- **ClusterLogic**: A type-safe class containing the core business logic of the cluster. Manages all attribute storage. Should be thoroughly unit-tested.
- **ClusterImplementation**: Implements the `ServerClusterInterface` (often by deriving from `DefaultServerCluster`). Acts as a translation layer between the data model (encoders/decoders) and the `ClusterLogic`.
- **ClusterDriver** (or **Delegate**): An optional interface providing callbacks to the application for cluster interactions. We recommend the term `Driver` to avoid confusion with the overloaded term `Delegate`.
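A minimal sketch of this split, using hypothetical `Example*` names, with namespace qualifiers and constructor wiring omitted, and method signatures assumed to match the `WriteAttribute` example shown later in this guide:

```cpp
// ExampleLogic.h -- pure business logic; no data-model dependencies,
// so it can be unit-tested in isolation.
class ExampleLogic
{
public:
    CHIP_ERROR SetCurrentLevel(uint8_t level)
    {
        VerifyOrReturnError(level <= 100, CHIP_ERROR_INVALID_ARGUMENT);
        mCurrentLevel = level;
        return CHIP_NO_ERROR;
    }
    uint8_t GetCurrentLevel() const { return mCurrentLevel; }

private:
    uint8_t mCurrentLevel = 0; // all attribute storage lives in the logic class
};

// ExampleCluster.h -- the ServerClusterInterface implementation, deriving
// from DefaultServerCluster. It only translates between the data model
// (encoders/decoders) and the logic; it holds no business rules itself.
class ExampleCluster : public DefaultServerCluster
{
public:
    // Constructor wiring (cluster path, registration) omitted for brevity.
    DataModel::ActionReturnStatus ReadAttribute(const DataModel::ReadAttributeRequest & request,
                                                AttributeValueEncoder & encoder) override
    {
        // Translate a data-model read into a logic call (single attribute shown).
        return encoder.Encode(mLogic.GetCurrentLevel());
    }

private:
    ExampleLogic mLogic;
};
```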
### Choosing the Right Implementation Pattern
When implementing a cluster, you have two primary architectural choices: a combined implementation and a modular implementation. The best choice depends on the cluster’s complexity and the constraints of the target device, particularly flash and RAM usage.
**Combined Implementation (Logic and Data in One Class)**

- **Description**: In this pattern, the cluster's logic, data storage, and `ServerClusterInterface` implementation are all contained within a single class.
- **Pros**: Simpler to write and can result in a smaller flash footprint, making it ideal for simple clusters or resource-constrained devices.
- **Cons**: Can be harder to test and maintain as the cluster's complexity grows.
- **Example**: The Basic Information cluster is a good example of a combined implementation (a sketch of the shape follows this list).
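For contrast with the modular sketch above, the combined shape collapses the same responsibilities into one class (hypothetical names; not the actual Basic Information code):

```cpp
// Combined pattern: logic, attribute storage, and the ServerClusterInterface
// implementation all live in a single class.
class SimpleExampleCluster : public DefaultServerCluster
{
public:
    DataModel::ActionReturnStatus ReadAttribute(const DataModel::ReadAttributeRequest & request,
                                                AttributeValueEncoder & encoder) override
    {
        // Reads directly from member storage; there is no separate logic class.
        return encoder.Encode(mCurrentLevel);
    }

private:
    uint8_t mCurrentLevel = 0;
};
```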
**Modular Implementation (Logic Separated from Data Model)**

- **Description**: This pattern separates the core business logic into a `ClusterLogic` class, while the `ClusterImplementation` class handles the translation between the data model and the logic.
- **Pros**: Promotes better testability, as the `ClusterLogic` can be unit-tested in isolation. It is also more maintainable for complex clusters.
- **Cons**: May use slightly more flash and RAM due to the additional class and virtual function calls.
- **Example**: The Software Diagnostics cluster demonstrates a modular implementation.
**Recommendation**: Start with a combined implementation for simpler clusters. If the cluster's logic is complex or if you need to maximize testability, choose the modular approach.
### BUILD file layout

This section describes the build files under `src/app/clusters/<cluster-directory>/`. You are expected to have the following items:
#### BUILD.gn

This file contains a target named `<cluster-directory>`, usually a `source_set`. It is referenced from `src/app/chip_data_model.gni` by adding a dependency as `deps += [ "${_app_root}/clusters/${cluster}" ]`, so the default target name is important.
#### app_config_dependent_sources

There are two code generation integration support files: one for GN and one for CMake. `chip_data_model.gni`/`chip_data_model.cmake` include these files and bundle ALL referenced sources into ONE SINGLE SOURCE SET, together with ember code-generated settings (e.g. `endpoint_config.h` and similar application-specific files).
As a result, there will be a difference between `.gni` and `.cmake`:

- `app_config_dependent_sources.gni` will typically just contain `CodegenIntegration.cpp` and any other helper/compatibility layers (e.g. `CodegenIntegration.h` if applicable).
- `app_config_dependent_sources.cmake` will contain all the files that the `.gni` file contains PLUS any dependencies that the `BUILD.gn` would pull in but CMake would not (i.e. dependencies not in the `libCHIP` builds). These extra files are often the `*.h`/`*.cpp` files that were in the `BUILD.gn` source set.
Example taken from `src/app/clusters/basic-information`:

```gn
# BUILD.gn
import("//build_overrides/build.gni")
import("//build_overrides/chip.gni")

source_set("basic-information") {
  sources = [ ... ]
  public_deps = [ ... ]
}
```

```gn
# app_config_dependent_sources.gni
app_config_dependent_sources = [ "CodegenIntegration.cpp" ]
```

```cmake
# app_config_dependent_sources.cmake

# This block adds the codegen integration sources, similar to
# app_config_dependent_sources.gni
TARGET_SOURCES(
  ${APP_TARGET}
  PRIVATE
    "${CLUSTER_DIR}/CodegenIntegration.cpp"
)

# These are the things that BUILD.gn dependencies would pull in
TARGET_SOURCES(
  ${APP_TARGET}
  PRIVATE
    "${CLUSTER_DIR}/BasicInformationCluster.cpp"
    "${CLUSTER_DIR}/BasicInformationCluster.h"
)
```
### Implementation Details

#### Attribute and Feature Handling
Your implementation must correctly report which attributes and commands are available based on the enabled features and optional items.
- Use a feature map to control elements dependent on features.
- Use boolean flags or `BitFlags` for purely optional elements (see the sketch after this list).
- Ensure your unit tests cover different combinations of enabled features and optional attributes/commands.
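As an illustration, a logic class might take its enabled features at construction and gate feature-dependent elements on them. The sketch below is loosely modeled on the Software Diagnostics cluster's WTRMRK feature; the class name and method are hypothetical:

```cpp
#include <lib/support/BitFlags.h>

// Feature bit, as defined for the Software Diagnostics cluster (WTRMRK).
enum class Feature : uint32_t
{
    kWatermarks = 0x1,
};

class DiagnosticsLogic
{
public:
    explicit DiagnosticsLogic(chip::BitFlags<Feature> features) : mFeatures(features) {}

    // A watermark attribute is only reported when the WTRMRK feature is enabled.
    bool SupportsHighWatermark() const { return mFeatures.Has(Feature::kWatermarks); }

private:
    const chip::BitFlags<Feature> mFeatures; // the cluster's feature map
};
```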
#### Attribute Change Notifications
For subscriptions to work correctly, you must notify the system whenever an attribute’s value changes.
- The `Startup` method of your cluster receives a `ServerClusterContext`.
- Use the context to call `interactionContext->dataModelChangeListener->MarkDirty(path)`. A `NotifyAttributeChanged` helper exists for paths managed by this cluster.
- For write implementations, you can use `NotifyAttributeChangedIfSuccess` together with a separate `WriteImpl`, so that any successful attribute write triggers a notification.

Canonical example code would look like:

```cpp
DataModel::ActionReturnStatus SomeCluster::WriteAttribute(const DataModel::WriteAttributeRequest & request,
                                                          AttributeValueDecoder & decoder)
{
    // Delegate everything to WriteImpl. If the write succeeds, notify that the attribute changed.
    return NotifyAttributeChangedIfSuccess(request.path.mAttributeId, WriteImpl(request, decoder));
}
```
When using `NotifyAttributeChangedIfSuccess`, ensure that `WriteImpl` returns `ActionReturnStatus::FixedStatus::kWriteSuccessNoOp` when no notification should be sent (e.g. the write was a no-op because the existing value was already the same). The canonical check is:

```cpp
VerifyOrReturnValue(mValue != value, ActionReturnStatus::FixedStatus::kWriteSuccessNoOp);
```
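Putting the pieces together, a `WriteImpl` for a hypothetical `SomeValue` attribute might look like the sketch below (the attribute ID constant is a placeholder):

```cpp
DataModel::ActionReturnStatus SomeCluster::WriteImpl(const DataModel::WriteAttributeRequest & request,
                                                     AttributeValueDecoder & decoder)
{
    switch (request.path.mAttributeId)
    {
    case kSomeValueAttributeId: { // hypothetical attribute ID constant
        uint8_t value;
        ReturnErrorOnFailure(decoder.Decode(value));
        // Skip the change notification when the stored value is unchanged.
        VerifyOrReturnValue(mValue != value, ActionReturnStatus::FixedStatus::kWriteSuccessNoOp);
        mValue = value;
        return CHIP_NO_ERROR;
    }
    default:
        return Protocols::InteractionModel::Status::UnsupportedAttribute;
    }
}
```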
#### Persistent Storage

- **Attributes**: For scalar attribute values, use `AttributePersistence` from `src/app/persistence/AttributePersistence.h`. The `ServerClusterContext` provides an `AttributePersistenceProvider`.
- **General Storage**: For non-attribute data, the context provides a `PersistentStorageDelegate` (a usage sketch follows this list).
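A minimal sketch of general storage, assuming the context exposes the delegate as a `storage` member (the member name and the storage key are assumptions for illustration):

```cpp
// Persist a small non-attribute value through the PersistentStorageDelegate.
CHIP_ERROR SaveRebootCount(chip::app::ServerClusterContext & context, uint32_t rebootCount)
{
    // SyncSetKeyValue stores a raw buffer under a string key.
    return context.storage->SyncSetKeyValue("g/ex/reboots", &rebootCount,
                                            static_cast<uint16_t>(sizeof(rebootCount)));
}
```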
#### Optimizing for Flash/RAM

For common or large clusters, you may need to optimize for resource usage. Consider using C++ templates to select features and attributes at compile time, which can significantly reduce flash and RAM footprint.
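A sketch of the idea: storage for an optional feature can be elided entirely at compile time, so disabled configurations pay no RAM for it (class and member names are illustrative):

```cpp
#include <cstdint>
#include <type_traits>

// Compile-time feature selection: when kEnableWatermarks is false, the
// watermark storage collapses to an empty struct and the recording code
// is not instantiated at all.
template <bool kEnableWatermarks>
class OptimizedDiagnosticsLogic
{
public:
    void RecordHighWatermark(uint64_t value)
    {
        if constexpr (kEnableWatermarks)
        {
            mHighWatermark = value;
        }
    }

private:
    struct Empty
    {
    };
    // uint64_t when the feature is compiled in, an empty placeholder otherwise.
    std::conditional_t<kEnableWatermarks, uint64_t, Empty> mHighWatermark{};
};
```

Instantiating `OptimizedDiagnosticsLogic<false>` costs a single placeholder byte instead of eight for the watermark, and the dead store is discarded by the compiler.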
### Advanced ServerClusterInterface Details

While `ReadAttribute`, `WriteAttribute`, and `InvokeCommand` are the most commonly implemented methods, the `ServerClusterInterface` has other methods for more advanced use cases.
#### List Attribute Writes (`ListAttributeWriteNotification`)
This method is an advanced callback for handling large list attributes that may require special handling, such as persisting them to storage in chunks. A typical example of a cluster that might use this is the Binding cluster. For most clusters, the default implementation is sufficient.
#### Event Permissions (`EventInfo`)
You must implement the `EventInfo` method if your cluster emits any events that require non-default permissions to be read. For example, an event might require Administrator privileges. While not common, this should be verified for every new cluster implementation and checked during code reviews to ensure event access is correctly restricted.
#### Accepted vs. Generated Commands

The distinction between `AcceptedCommands` and `GeneratedCommands` can be understood using a REST API analogy:

- `AcceptedCommands`: These are the "requests" that the server cluster can process. In the Matter specification, these are commands sent from the client to the server (client => server).
- `GeneratedCommands`: These are the "responses" that the server cluster can generate after processing an accepted command. In the spec, these are commands sent from the server back to the client (server => client).
These lists are built based on the cluster’s definition in the Matter specification.
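For instance (command names and IDs below are hypothetical), a cluster whose client sends a `ChangeLevel` request and receives a `ChangeLevelResponse` would report:

```cpp
#include <lib/core/DataModelTypes.h>

// Client => server: commands the server cluster accepts ("requests").
constexpr chip::CommandId kAcceptedCommands[] = { 0x00 /* ChangeLevel */ };

// Server => client: commands the server cluster generates ("responses").
constexpr chip::CommandId kGeneratedCommands[] = { 0x01 /* ChangeLevelResponse */ };
```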
### Unit Testing

- Unit tests should reside in `src/app/clusters/<cluster-name>/tests/`.
- At a minimum, `ClusterLogic` should be fully tested, including its behavior with different feature configurations (see the sketch after this list).
- `ClusterImplementation` can also be unit-tested if its logic is complex. Otherwise, integration tests should provide sufficient coverage.
## Part 3: Build and Application Integration

### Build System Integration
The build system maps cluster names to their source directories. Add your new cluster to this mapping:
Edit `src/app/zap_cluster_list.json` and add an entry for your cluster, pointing to the directory you created.
### Application Integration (`CodegenIntegration.cpp`)
To integrate your cluster with an application’s .zap file configuration, you
need to bridge the gap between the statically generated code and your C++
implementation.
1. **Create `CodegenIntegration.cpp`**: This file will contain the integration logic.
2. **Create Build Files**: Add `app_config_dependent_sources.gni` and `app_config_dependent_sources.cmake` to your cluster directory. These files should list `CodegenIntegration.cpp` and its dependencies. See existing clusters for examples.
3. **Use Generated Configuration**: The code generator creates a header file at `app/static-cluster-config/<cluster-name>.h` that provides static, application-specific configuration. Use this to initialize your cluster correctly for each endpoint.
4. **Implement Callbacks**: Implement `Matter<Cluster>ClusterInitCallback(EndpointId)` and `Matter<Cluster>ClusterShutdownCallback(EndpointId)` in your `CodegenIntegration.cpp` (a skeleton follows this list).
5. **Update `config-data.yaml`**: To enable these callbacks, add your cluster to the `CodeDrivenClusters` array in `src/app/common/templates/config-data.yaml`.
6. **Update ZAP Configuration**: To prevent the Ember framework from allocating memory for your cluster's attributes (which are now managed by your `ClusterLogic`), you must:
   - In `src/app/common/templates/config-data.yaml`, consider adding your cluster to `CommandHandlerInterfaceOnlyClusters` if it does not need Ember command dispatch.
   - In `src/app/zap-templates/zcl/zcl.json` and `zcl-with-test-extensions.json`, add all non-list attributes of your cluster to `attributeAccessInterfaceAttributes`. This marks them as externally handled.
7. **Regenerate**: Once `config-data.yaml` and `zcl.json`/`zcl-with-test-extensions.json` are updated, run the ZAP regeneration command, e.g. `./scripts/run_in_build_env.sh 'scripts/tools/zap_regen_all.py'`.
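A skeleton of the callbacks from step 4 (the cluster name `MyCluster` is a placeholder, and the registration details are intentionally left as comments since they depend on the generated configuration):

```cpp
// CodegenIntegration.cpp (sketch)
#include <lib/core/DataModelTypes.h>

void MatterMyClusterClusterInitCallback(chip::EndpointId endpointId)
{
    // Called once per endpoint that enables the cluster in the .zap file.
    // Construct the cluster instance for this endpoint (using the generated
    // static-cluster-config header) and register it with the data model.
}

void MatterMyClusterClusterShutdownCallback(chip::EndpointId endpointId)
{
    // Called when the endpoint is torn down: unregister and destroy the
    // cluster instance created in the init callback.
}
```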
## Part 4: Example Application and Integration Testing
- **Write Unit Tests**: Ensure cluster test coverage with unit tests.
- **Integrate into an Example**: Add your cluster to an example application, such as the `all-clusters-app`, to test it in a real-world scenario. Use tools such as `chip-tool` or `matter-repl` to manually validate the cluster.
- **Add Integration Tests**: Write integration tests to validate the end-to-end functionality of your cluster against the example application.