Decommissioning Canton Nodes and Synchronizer entities
This guide assumes general familiarity with Canton, in particular Canton identity management concepts and operations from the Canton console.
Note that, while onboarding new nodes is always possible, a decommissioned node or entity is effectively disposed of and cannot rejoin a Synchronizer. Decommissioning is thus an irreversible operation.
In addition, decommissioning procedures are currently experimental. In any case, it is strongly recommended to back up a node before decommissioning it.
Decommissioning a Participant Node
Prerequisites
Be mindful that making a Participant Node unavailable (by disconnecting it from the Synchronizer or decommissioning it) might block other workflows and/or prevent other parties from exercising Choices on their Contracts.
As an example, consider the following scenario:
Party bank is hosted on Participant Node P1 and party alice is hosted on Participant Node P2.
An active Contract exists with bank as signatory and alice as observer.
P1 is decommissioned.
If bank is not multi-hosted, any attempt by alice to use the Contract fails because bank cannot confirm. The Contract remains active on P2 forever unless purged via the repair service, and only non-consuming Choices and fetches can be committed on it.
Similar considerations apply if P2 were decommissioned instead, even though alice is “only” an observer: if alice is not multi-hosted, the Contract would remain active on P1 until purged via the repair service, and only non-consuming Choices and fetches could be committed.
Additionally, when P1 is decommissioned, P2 stops receiving ACS commitments from P1, which may prevent pruning. The same applies in reverse if P2 is decommissioned.
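Before starting, it helps to check which parties would be stranded. The following is a minimal sketch, assuming a Canton 3.x console where topology.party_to_participant_mappings.list accepts filterParty and filterParticipant arguments and each mapping exposes its hosting Participant Nodes (exact signatures may differ across Canton versions). It prints, for every party hosted on participant2, all Participant Nodes hosting that party, so multi-hosting can be verified:
// Sketch only: identify parties hosted on participant2 and check whether
// each is multi-hosted. Parties hosted on no other node become unusable
// once participant2 is decommissioned.
val hostedParties = participant2.topology.party_to_participant_mappings
  .list(synchronizerId, filterParticipant = participant2.filterString)
  .map(_.item.partyId)
hostedParties.foreach { party =>
  val hostingNodes = participant2.topology.party_to_participant_mappings
    .list(synchronizerId, filterParty = party.filterString)
    .flatMap(_.item.participants.map(_.participantId))
  println(s"$party is hosted on: ${hostingNodes.mkString(", ")}")
}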
Thus, properly decommissioning a Participant Node requires the following high-level steps:
Ensuring that the prerequisites are met: active Contracts, and the workflows that use them, must not become “stuck” because parties required to operate on them are no longer available.
Note
More specifically, for a Contract Action to be committed:
For “Create” Actions all stakeholders must be hosted on active Participant Nodes.
For consuming “Exercise” Actions all stakeholders, actors, Choice observers, and Choice authorizers must be hosted on active Participant Nodes.
The exact prerequisites to be met to decommission a Participant Node therefore depend on the design of the Daml app and should be accounted and tested for in the initial Daml design process.
Decommissioning: remove the Participant Node from the topology state.
After that, the Participant Node can be disposed of.
Decommissioning a Participant Node once the prerequisites are met
Stop applications from sending commands to the Ledger API of the Participant Node being decommissioned, to avoid failed commands and errors.
Disconnect the Participant Node to be decommissioned from all Synchronizers as described in enabling and disabling connections.
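For example, the node can be disconnected from all Synchronizers at once from its own console. This is a minimal sketch assuming the Canton 3.x console commands synchronizers.disconnect_all and synchronizers.list_connected; exact command names can vary between Canton versions:
// Disconnect participant2 from every Synchronizer it is connected to.
participant2.synchronizers.disconnect_all()
// Confirm that no active Synchronizer connections remain (expected: empty).
participant2.synchronizers.list_connected()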
Use the topology.participant_synchronizer_permissions.propose command to fully unauthorize the Participant Node on the Synchronizer:
// Each Synchronizer owner proposes removing every permission that the
// Participant Node still holds on the Synchronizer.
synchronizerOwners.foreach { synchronizerOwner =>
  synchronizerOwner.topology.participant_synchronizer_permissions
    .list(synchronizerId, filterUid = participant2.filterString)
    .map(_.item.permission)
    .foreach(permission =>
      synchronizerOwner.topology.participant_synchronizer_permissions
        .propose(synchronizerId, participant2.id, permission = permission, change = Remove)
    )
}
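Optionally, verify that the unauthorization took effect. The following sketch re-uses the listing from above (same assumptions as the snippet it follows); it is expected to return no entries once the Remove proposals are fully effective:
// Expected to be empty once the Remove proposals have taken effect.
synchronizerOwners.head.topology.participant_synchronizer_permissions
  .list(synchronizerId, filterUid = participant2.filterString)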
Finally, use the repair.disable_member command to disable the Participant Node being decommissioned on all Sequencers and remove any Sequencer data associated with it.
// Disable the decommissioned Participant Node on every Sequencer and purge
// its associated Sequencer data.
sequencers.all.foreach(_.repair.disable_member(participant2))