Party replication¶
Party replication is the process of duplicating an existing party onto an additional participant within a single synchronizer. In this process, the participant that already hosts the party is called the source participant, while the new participant is called the target participant.
The operational procedure differs substantially in complexity and risk depending on whether the party you replicate has already been involved in any Daml transaction.
Therefore, onboard your party on a participant and, before using the party, replicate it to other participants following the simple party replication steps.
Otherwise, you must apply an offline party replication procedure.
Note
Party replication is different from party migration. A party migration includes an additional final step: removing (or offboarding) the party from its original participant.
Party offboarding, and thus party migration, is currently not supported.
Simple party replication¶
The simplest and safest way to replicate a party is to do so before it becomes a stakeholder in any contract.
Warning
If a party has already participated in any Daml transaction, you must use offline party replication instead.
Simple party replication consists of the following steps; perform them in the order listed:
Create the party, either in the namespace of a participant or in a dedicated namespace.
Authorize one or more additional participants to host the party.
Use the party.
The following demonstrates these steps using two participants:
@ val source = participant1
source : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant1'
@ val target = participant2
target : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant2'
@ val synchronizerId = source.synchronizers.id_of("mysynchronizer")
synchronizerId : SynchronizerId = da::1220a82692ab...
1. Create party¶
Create a party Alice:
@ val alice = source.parties.enable("Alice", synchronizer = Some("mysynchronizer"))
alice : PartyId = Alice::12201ff69b1d...
Note
In this example, the local party Alice is owned by the source participant,
meaning that Alice is registered in the participant’s namespace. This is a
simplification, not a requirement.
Alternatively, you can create the party in its own dedicated namespace, or create an external party.
2. Vet packages¶
Vet packages on the target participant(s) before proceeding.
Note
If you are unfamiliar with this process, read this general explanation of package vetting.
3. Multi-host party¶
Party Alice needs to agree to be hosted on the target participant.
Because the source participant owns party Alice, you need to issue the
party-to-participant mapping topology transaction on the source participant.
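For example, this authorization can be sketched with the same propose command used in the threshold variant below. This is a minimal sketch, not the exact documented flow: it assumes the target participant should receive Submission permission and that the target participant consents to the hosting by issuing a matching proposal; adjust the permission to your needs.
@ source.topology.party_to_participant_mappings
    .propose(
      alice,
      newParticipants = Seq(target.id -> ParticipantPermission.Submission),
      store = synchronizerId,
    )
@ target.topology.party_to_participant_mappings
    .propose(
      alice,
      newParticipants = Seq(target.id -> ParticipantPermission.Submission),
      store = synchronizerId,
    )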
3.a Replicate party with simultaneous confirmation threshold change (variant of step 3)¶
Note
For external parties, the confirmation threshold is already defined during the onboarding process, so this section does not apply to them.
To change a party’s confirmation threshold, you must use a different procedure for proposing the party-to-participant mapping than previously shown.
This alternative method allows you to perform the replication and update the threshold in a single operation.
The following example continues from the previous one, demonstrating how to replicate
party Alice from the source participant to the newTarget participant while
simultaneously setting the confirmation threshold to three. This operation also sets
the participant permission to confirmation for all three participants that will be
hosting Alice.
@ val newTarget = participant3
newTarget : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant3'
@ val hostingParticipants = Seq(source, target, newTarget)
hostingParticipants : Seq[com.digitalasset.canton.console.LocalParticipantReference] = List(Participant 'participant1', Participant 'participant2', Participant 'participant3')
@ source.topology.party_to_participant_mappings
.propose(
alice,
newParticipants = hostingParticipants.map(_.id -> ParticipantPermission.Confirmation),
threshold = PositiveInt.three,
store = synchronizerId,
)
res9: SignedTopologyTransaction[TopologyChangeOp, PartyToParticipant] = SignedTopologyTransaction(
TopologyTransaction(
PartyToParticipant(
Alice::12201ff69b1d...,
PositiveNumeric(3),
Vector(
HostingParticipant(PAR::participant1::12201ff69b1d..., Confirmation, false),
HostingParticipant(PAR::participant2::1220a4d7463b..., Confirmation, false),
HostingParticipant(PAR::participant3::1220d6908163..., Confirmation, false)
),
None
),
serial = 3,
operation = Replace,
hash = SHA-256:7249f1511e32...
),
signatures = 12201ff69b1d...,
proposal
)
@ newTarget.topology.party_to_participant_mappings
.propose(
alice,
newParticipants = hostingParticipants.map(_.id -> ParticipantPermission.Confirmation),
threshold = PositiveInt.three,
store = synchronizerId,
)
res10: SignedTopologyTransaction[TopologyChangeOp, PartyToParticipant] = SignedTopologyTransaction(
TopologyTransaction(
PartyToParticipant(
Alice::12201ff69b1d...,
PositiveNumeric(3),
Vector(
HostingParticipant(PAR::participant1::12201ff69b1d..., Confirmation, false),
HostingParticipant(PAR::participant2::1220a4d7463b..., Confirmation, false),
HostingParticipant(PAR::participant3::1220d6908163..., Confirmation, false)
),
None
),
serial = 3,
operation = Replace,
hash = SHA-256:7249f1511e32...
),
signatures = 1220d6908163...,
proposal
)
Offline party replication¶
Offline party replication is a multi-step, manual process.
Before replication can start, both the target participant and the party itself must explicitly consent to the new hosting arrangement.
Afterwards, the replication consists of exporting the party’s Active Contract Set (ACS) from a source participant, and importing it to the target participant.
Note
Connect a single Canton console to both the source and target participants so that you can export and import the party’s ACS file from a single physical machine or environment. Otherwise, you need to securely transfer the ACS export file to the location from which you import it into the target participant.
Offline party replication requires you to disconnect the target participant from all synchronizers before importing the party’s ACS. Hence the name offline party replication.
While you onboard the party on the target participant, you may detect ACS commitment mismatches. This is expected and resolves itself in time; ignore such errors during the party replication procedure.
Warning
Be advised: You must back up the target participant before you start the ACS import!
This ensures you have a clean recovery point if the ACS import is interrupted (crash, unintended node restart, etc.), or if you are otherwise unable to follow these manual operational steps to completion. Having this backup allows you to safely reset the target participant and still complete the ongoing offline party replication.
Offline party replication steps¶
Perform these steps in the exact order listed:
Target: Package Vetting – Ensure the target participant vets all required packages.
Source: Data Retention - Ensure the source participant retains data long enough for the export.
Target: Authorization - Target participant authorizes new hosting with the onboarding flag set.
Target: Isolation - Disconnect from all synchronizers and disable auto-reconnect upon restart.
Source: Party Authorization - Party authorizes the replication with the onboarding flag set.
Source: ACS Export - The participant currently hosting the party exports the ACS.
Target: Backup - Back up the target participant before starting the ACS import.
Target: ACS Import - The target participant imports the ACS.
Target: Reconnect - The target participant reconnects to the synchronizers.
Target: Onboarding Flag Clearance - The target participant issues the onboarding flag clearance.
Warning
Offline party replication must be performed with care, strictly following the documented steps in order. Deviating from the outlined operational flow results in errors that may require significant manual correction.
This documentation provides a guide. Your environment may require adjustments. Test thoroughly in a test environment before production use.
Scenario description¶
The following steps show how to replicate party alice from the source
participant to a new target participant on the synchronizer mysynchronizer.
The source can be any participant already hosting the party.
@ val source = participant1
source : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant1'
@ val target = participant2
target : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant2'
@ val alice = source.parties.enable("Alice", synchronizer = Some("mysynchronizer")) // This command creates a local party. For external parties see the external party onboarding documentation (link found above in this page)
alice : PartyId = Alice::12201ff69b1d...
@ val synchronizerId = source.synchronizers.id_of("mysynchronizer")
synchronizerId : SynchronizerId = da::1220a82692ab...
1. Vet packages¶
Ensure the target participant vets all packages associated with contracts where the party is a stakeholder.
The party alice uses the package CantonExamples which is vetted on the source
participant but not yet on the target participant.
@ val mainPackageId = source.dars.list(filterName = "CantonExamples").head.mainPackageId
mainPackageId : String = "20a62d457c71fc722640bdae97a4ecc0c615df7d5e05bf81f6a37f43d38b092e"
@ target.topology.vetted_packages.list()
.filter(_.item.packages.exists(_.packageId == mainPackageId))
.map(r => (r.context.storeId, r.item.participantId))
res6: Seq[(TopologyStoreId, ParticipantId)] = Vector(
(Synchronizer(id = Right(value = da::1220a82692ab...::34-0)), PAR::participant1::12201ff69b1d...)
)
Hence, upload the missing DAR to the target participant.
@ target.dars.upload("dars/CantonExamples.dar")
res7: String = "20a62d457c71fc722640bdae97a4ecc0c615df7d5e05bf81f6a37f43d38b092e"
@ target.topology.vetted_packages.list()
.filter(_.item.packages.exists(_.packageId == mainPackageId))
.map(r => (r.context.storeId, r.item.participantId))
res8: Seq[(TopologyStoreId, ParticipantId)] = Vector(
(Synchronizer(id = Right(value = da::1220a82692ab...::34-0)), PAR::participant1::12201ff69b1d...),
(Synchronizer(id = Right(value = da::1220a82692ab...::34-0)), PAR::participant2::1220a4d7463b...)
)
2. Data Retention¶
Ensure that the retention period on the source participant is long enough to cover the entire duration between the following two events:
The party-to-participant mapping topology transaction becoming effective.
The completion of the ACS export from the source participant.
If you are unsure whether the current retention period is sufficient, or as an additional precaution, you should temporarily disable automatic pruning on the source participant.
Retrieve the current automatic pruning schedule. This command returns None if no
schedule is set.
@ val pruningSchedule = source.pruning.get_schedule()
pruningSchedule : Option[PruningSchedule] = Some(value = PruningSchedule(cron = "0 0 20 * * ?", maxDuration = 2h, retention = 720h))
Clear the pruning schedule, disabling the automatic pruning on the source node.
@ source.pruning.clear_schedule()
Warning
Manual pruning cannot be programmatically disabled on the source participant.
Coordinate closely with other operators and ensure that no external automation
triggers pruning until the ACS export is complete.
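After clearing the schedule, you can confirm that no automatic pruning schedule remains on the source participant; the get_schedule command shown above should now return None:
@ source.pruning.get_schedule()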
4. Disconnect target participant from all synchronizers¶
@ target.synchronizers.disconnect_all()
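To verify the isolation, you can list the target participant’s active synchronizer connections. Assuming the list_connected console command is available in your Canton version, an empty result confirms that the participant is disconnected:
@ target.synchronizers.list_connected()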
5. Disable auto-reconnect on target participant¶
Ensure the target participant does not automatically reconnect to the synchronizer upon restart.
@ target.synchronizers.config("mysynchronizer")
res13: Option[SynchronizerConnectionConfig] = Some(
value = SynchronizerConnectionConfig(
synchronizer = Synchronizer 'mysynchronizer',
sequencerConnections = SequencerConnections(
connections = Sequencer 'sequencer1' -> GrpcSequencerConnection(
sequencerAlias = Sequencer 'sequencer1',
sequencerId = SEQ::sequencer1::1220cb0a22fb...,
endpoints = http://127.0.0.1:30259
),
sequencer trust threshold = 1,
sequencer liveness margin = 0,
submission request amplification = SubmissionRequestAmplification(factor = 1, patience = 0s),
sequencer connection pool delays = SequencerConnectionPoolDelays(
minRestartDelay = 0.01s,
maxRestartDelay = 10s,
warnValidationDelay = 20s,
subscriptionRequestDelay = 1s
)
),
manualConnect = false
)
)
@ target.synchronizers.modify("mysynchronizer", _.copy(manualConnect=true))
@ target.synchronizers.config("mysynchronizer")
res15: Option[SynchronizerConnectionConfig] = Some(
value = SynchronizerConnectionConfig(
synchronizer = Synchronizer 'mysynchronizer',
sequencerConnections = SequencerConnections(
connections = Sequencer 'sequencer1' -> GrpcSequencerConnection(
sequencerAlias = Sequencer 'sequencer1',
sequencerId = SEQ::sequencer1::1220cb0a22fb...,
endpoints = http://127.0.0.1:30259
),
sequencer trust threshold = 1,
sequencer liveness margin = 0,
submission request amplification = SubmissionRequestAmplification(factor = 1, patience = 0s),
sequencer connection pool delays = SequencerConnectionPoolDelays(
minRestartDelay = 0.01s,
maxRestartDelay = 10s,
warnValidationDelay = 20s,
subscriptionRequestDelay = 1s
)
),
manualConnect = true
)
)
7. Export ACS¶
Export Alice’s ACS from the source participant.
The following command internally finds the ledger offset at which party Alice is activated on
the target participant, starting the search from beginOffsetExclusive.
It then exports Alice’s ACS from the source participant at exactly that offset and stores
it in the export file named party_replication.alice.acs.gz.
@ source.parties
.export_party_acs(
party = alice,
synchronizerId = synchronizerId,
targetParticipantId = target.id,
beginOffsetExclusive = beforeActivationOffset,
exportFilePath = "party_replication.alice.acs.gz",
)
8. Optional: Re-enable automatic pruning¶
If you previously disabled automatic pruning on the source participant by following
the data retention step,
you may now re-enable it.
Run the following command using the original configuration parameters you recorded before disabling the schedule:
@ source.pruning.set_schedule("0 0 20 * * ?", 2.hours, 30.days)
9. Back up target participant¶
Warning
Please back up the target participant before importing the ACS!
10. Import ACS¶
Import Alice’s ACS in the target participant:
@ target.parties.import_party_acs("party_replication.alice.acs.gz")
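Optionally, you can sanity-check the import by querying Alice’s active contracts locally on the target participant. This sketch assumes the ledger_api.state.acs.of_party console command is available; the returned contracts should match the exported contract set:
@ target.ledger_api.state.acs.of_party(alice)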
11. Reconnect target participant to synchronizer¶
To later find the ledger offset of the topology transaction that authorized the new hosting
arrangement on the target participant, record the current ledger end offset:
@ val targetLedgerEnd = target.ledger_api.state.end()
targetLedgerEnd : Long = 17L
Now, reconnect the target participant to the synchronizer.
@ target.synchronizers.reconnect_local("mysynchronizer")
res22: Boolean = true
12. Optional: Re-enable auto-reconnect on target participant¶
If you previously disabled auto-reconnect following the earlier step, you may now re-enable it. This is only necessary if the target participant was originally configured to reconnect automatically upon restart.
@ target.synchronizers.modify("mysynchronizer", _.copy(manualConnect=false))
13. Clear the participant’s onboarding flag¶
After the target participant has completed the ACS import and reconnected to the
synchronizer, you must clear the onboarding flag. This signals that the participant
is fully ready to host the party.
A dedicated command accomplishes the onboarding flag clearance. It issues the topology transaction to clear the flag for you, but only when it is safe to do so.
The following command uses the targetLedgerEnd captured in the previous step as the
starting point to internally locate the effective party-to-participant mapping transaction
that has activated alice on the target participant.
@ val (onboarded, minimalSafeClearingTs) = target.parties
.clear_party_onboarding_flag(alice, synchronizerId, targetLedgerEnd)
(onboarded, minimalSafeClearingTs) : (Boolean, Option[com.digitalasset.canton.data.CantonTimestamp]) = (false, Some(value = 2026-03-31T22:30:17.173335Z))
The command returns a tuple indicating the status:
(true, None): The onboarding flag is cleared. Proceed to the next step.
(false, Some(CantonTimestamp)): The onboarding flag is still set. Removal is safe only after the indicated timestamp.
If the onboarding flag is still set, you must wait at least until the indicated timestamp
(minimalSafeClearingTs). Only then will calling this command actually issue a
topology transaction to clear the onboarding flag, which becomes effective thereafter.
Because this command is idempotent, you can call it repeatedly. Thus, you may also poll this command until it confirms that the onboarding flag has been cleared. The following snippet demonstrates how this command can be polled.
@ utils.retry_until_true(timeout = 2.minutes, maxWaitPeriod = 30.seconds) {
val (onboarded, _) = target.parties
.clear_party_onboarding_flag(alice, synchronizerId, targetLedgerEnd)
onboarded
}
Note
The timeout is based on the default decision timeout of 1 minute.
Summary¶
You have successfully multi-hosted Alice on the source and target participants.
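To confirm the result, you can inspect the party listing from either console. This sketch assumes the filterParty parameter of the parties.list command; the output should show Alice hosted on both participants:
@ target.parties.list(filterParty = "Alice")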