Party replication

Party replication is the process of duplicating an existing party onto an additional participant within a single synchronizer. In this process, the participant that already hosts the party is called the source participant, while the new participant is called the target participant.

The operational procedure differs substantially in complexity and risk depending on whether the party you replicate has already been involved in any Daml transaction.

Therefore, onboard your party on a participant and, before you use the party, replicate it to other participants by following the simple party replication steps.

Otherwise, you must apply an offline party replication procedure.

Note

Party replication is different from party migration. A party migration includes an additional final step: removing (or offboarding) the party from its original participant.

Party offboarding, and thus party migration, is currently not supported.

Party replication authorization

How authorization works

Both the party and the new hosting participant must grant their consent, each by issuing a party-to-participant mapping topology transaction. This ensures mutual agreement to the party replication.
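This mutual-agreement rule can be sketched as follows; the function and names are illustrative, not part of the Canton API:

```python
# Sketch: a party-to-participant mapping takes effect only once every
# required signer (the party and the new hosting participant) has issued
# a matching topology transaction. Names are illustrative only.

def mapping_is_effective(required_signers: set[str], issued_by: set[str]) -> bool:
    """The mapping activates once all required signers have signed."""
    return required_signers <= issued_by

required = {"Alice", "participant2"}
print(mapping_is_effective(required, {"Alice"}))                  # False: participant2 has not signed
print(mapping_is_effective(required, {"Alice", "participant2"}))  # True: mutual agreement reached
```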

External parties

For external parties, changes to the party’s topology must be explicitly authorized with a signature of the external party’s namespace key. Wherever this how-to requires authorization from the party, it distinguishes between local and external parties. The procedure for external parties refers to an abstract function that authorizes updates to the party’s party-to-participant mapping:

class HostingParticipant:
    participant_uid: str
    permission: Enums.ParticipantPermission

def update_external_party_hosting(
    party_id: str,
    synchronizer_id: str,
    confirming_threshold: int,
    hosting_participants_add_or_update: list[HostingParticipant],
) -> None: ...

An example implementation of this function is given in the external party onboarding documentation. The implementation additionally takes the private key of the party’s namespace and a gRPC channel connected to the admin API of one of the party’s confirming nodes. Both have been omitted from the declaration above for conciseness.

Wherever this how-to uses the source participant for actions other than authorizing topology changes, use one of the external party’s existing confirming participants instead.

Parties with multiple owners

When a party is owned by a group of members in a decentralized namespace, a minimum number (a defined threshold) of those owners must approve the new hosting arrangement. This threshold is met once enough individual owners each issue their own party-to-participant mapping topology transaction.
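The threshold rule can be sketched like this (a simplified model, not Canton code):

```python
def threshold_met(approving_owners: set[str], threshold: int) -> bool:
    """The new hosting arrangement is approved once at least `threshold`
    distinct owners have each issued their own party-to-participant
    mapping topology transaction."""
    return len(approving_owners) >= threshold

# A decentralized namespace with a threshold of 2:
print(threshold_met({"owner1"}, 2))            # False: one approval of two required
print(threshold_met({"owner1", "owner2"}, 2))  # True: the threshold is met
```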

Activation

Completing the mutual authorization process activates the party on the target participant.

Simple party replication

The simplest and safest way to replicate a party is to do so before it becomes a stakeholder in any contract.

Warning

If a party has already participated in any Daml transaction, you must use offline party replication instead.

Simple party replication consists of the following steps; follow them in the order listed:

  1. Create the party, either in the namespace of a participant or in a dedicated namespace.

  2. Vet packages.

  3. Authorize one or more additional participants to host the party.

  4. Use the party.

The following demonstrates these steps using two participants:

@ val source = participant1
    source : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant1'
@ val target = participant2
    target : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant2'
@ val synchronizerId = source.synchronizers.id_of("mysynchronizer")
    synchronizerId : SynchronizerId = da::1220a82692ab...

1. Create party

Create a party Alice:

@ val alice = source.parties.enable("Alice", synchronizer = Some("mysynchronizer"))
    alice : PartyId = Alice::12201ff69b1d...

Note

In this example, the local party Alice is owned by the source participant; that is, Alice is registered in the participant’s namespace. This is a simplification, not a requirement.

Alternatively, you can create the party in its own dedicated namespace, or create an external party.

2. Vet packages

Vet packages on the target participant(s) before proceeding.

Note

If you are unfamiliar with this process, read this general explanation of package vetting.

3. Multi-host party

Party Alice needs to agree to be hosted on the target participant.

Because the source participant owns party Alice, you need to issue the party-to-participant mapping topology transaction on the source participant.

Authorize hosting update on the source participant

@ source.topology.party_to_participant_mappings
    .propose_delta(
      party = alice,
      adds = Seq(target.id -> ParticipantPermission.Submission),
      store = synchronizerId,
    )
    res5: SignedTopologyTransaction[TopologyChangeOp, PartyToParticipant] = SignedTopologyTransaction(
      TopologyTransaction(
        PartyToParticipant(
          Alice::12201ff69b1d...,
          PositiveNumeric(1),
          Vector(
            HostingParticipant(PAR::participant1::12201ff69b1d..., Submission, false),
            HostingParticipant(PAR::participant2::1220a4d7463b..., Submission, false)
          ),
          None
        ),
        serial = 2,
        operation = Replace,
        hash = SHA-256:20eef8c6481f...
      ),
      signatures = 12201ff69b1d...,
      proposal
    )

A participant can host a party with different permissions. In this example, the target participant hosts party Alice with submission permission, meaning that party Alice can submit Daml transactions through it.
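The relationship between the permission levels can be sketched as follows; this is an illustrative model, not Canton’s actual implementation:

```python
from enum import IntEnum

class Permission(IntEnum):
    """Illustrative ordering of hosting permissions, weakest to strongest."""
    OBSERVATION = 1   # the participant only observes the party's transactions
    CONFIRMATION = 2  # ... and also confirms transactions on the party's behalf
    SUBMISSION = 3    # ... and also submits Daml transactions as the party

def can_submit(p: Permission) -> bool:
    return p >= Permission.SUBMISSION

def can_confirm(p: Permission) -> bool:
    return p >= Permission.CONFIRMATION

print(can_submit(Permission.SUBMISSION))    # True
print(can_confirm(Permission.OBSERVATION))  # False
```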

Authorize hosting update on the target participant

To complete the process, the target participant must also agree to host Alice. Therefore, issue the same party-to-participant mapping topology transaction on the target participant:

@ target.topology.party_to_participant_mappings
    .propose_delta(
      party = alice,
      adds = Seq(target.id -> ParticipantPermission.Submission),
      store = synchronizerId,
    )
    res6: SignedTopologyTransaction[TopologyChangeOp, PartyToParticipant] = SignedTopologyTransaction(
      TopologyTransaction(
        PartyToParticipant(
          Alice::12201ff69b1d...,
          PositiveNumeric(1),
          Vector(
            HostingParticipant(PAR::participant1::12201ff69b1d..., Submission, false),
            HostingParticipant(PAR::participant2::1220a4d7463b..., Submission, false)
          ),
          None
        ),
        serial = 2,
        operation = Replace,
        hash = SHA-256:20eef8c6481f...
      ),
      signatures = 1220a4d7463b...,
      proposal
    )

Note

The participant permission here must be the same as in the previous step. For external parties in particular, this must be either Confirmation or Observation.

Once the party-to-participant mapping takes effect, the replication is complete. This results in party Alice being multi-hosted on both the source and target participants.

To replicate Alice to more participants, repeat the procedure by first vetting the packages on a newTarget participant. Then, perform the replication again using the original source and newTarget participants.

3.a Replicate party with simultaneous confirmation threshold change (variant of step 3)

Note

For external parties, the threshold is already defined during the onboarding process, so this section is not relevant to them.

To change a party’s confirmation threshold, you must use a different procedure for proposing the party-to-participant mapping than previously shown.

This alternative method allows you to perform the replication and update the threshold in a single operation.

The following example continues from the previous one, demonstrating how to replicate party Alice from the source participant to the newTarget participant while simultaneously setting the confirmation threshold to three. This operation also sets the participant permission to confirmation for all three participants that will be hosting Alice.

@ val newTarget = participant3
    newTarget : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant3'
@ val hostingParticipants = Seq(source, target, newTarget)
    hostingParticipants : Seq[com.digitalasset.canton.console.LocalParticipantReference] = List(Participant 'participant1', Participant 'participant2', Participant 'participant3')
@ source.topology.party_to_participant_mappings
    .propose(
      alice,
      newParticipants = hostingParticipants.map(_.id -> ParticipantPermission.Confirmation),
      threshold = PositiveInt.three,
      store = synchronizerId,
    )
    res9: SignedTopologyTransaction[TopologyChangeOp, PartyToParticipant] = SignedTopologyTransaction(
      TopologyTransaction(
        PartyToParticipant(
          Alice::12201ff69b1d...,
          PositiveNumeric(3),
          Vector(
            HostingParticipant(PAR::participant1::12201ff69b1d..., Confirmation, false),
            HostingParticipant(PAR::participant2::1220a4d7463b..., Confirmation, false),
            HostingParticipant(PAR::participant3::1220d6908163..., Confirmation, false)
          ),
          None
        ),
        serial = 3,
        operation = Replace,
        hash = SHA-256:7249f1511e32...
      ),
      signatures = 12201ff69b1d...,
      proposal
    )
@ newTarget.topology.party_to_participant_mappings
    .propose(
      alice,
      newParticipants = hostingParticipants.map(_.id -> ParticipantPermission.Confirmation),
      threshold = PositiveInt.three,
      store = synchronizerId,
    )
    res10: SignedTopologyTransaction[TopologyChangeOp, PartyToParticipant] = SignedTopologyTransaction(
      TopologyTransaction(
        PartyToParticipant(
          Alice::12201ff69b1d...,
          PositiveNumeric(3),
          Vector(
            HostingParticipant(PAR::participant1::12201ff69b1d..., Confirmation, false),
            HostingParticipant(PAR::participant2::1220a4d7463b..., Confirmation, false),
            HostingParticipant(PAR::participant3::1220d6908163..., Confirmation, false)
          ),
          None
        ),
        serial = 3,
        operation = Replace,
        hash = SHA-256:7249f1511e32...
      ),
      signatures = 1220d6908163...,
      proposal
    )

Offline party replication

Offline party replication is a multi-step, manual process.

Before replication can start, both the target participant and the party itself must explicitly consent to the new hosting arrangement.

Afterwards, the replication consists of exporting the party’s Active Contract Set (ACS) from a source participant, and importing it to the target participant.

Note

  • Connect a single Canton console to both the source and target participants so that you can export and import the party’s ACS file from a single physical machine or environment. Otherwise, you must securely transfer the ACS export file to the location where you import it into the target participant.

  • Offline party replication requires you to disconnect the target participant from all synchronizers before importing the party’s ACS. Hence the name offline party replication.

  • While you onboard the party on the target participant, you may detect ACS commitment mismatches. This is expected and resolves itself over time; ignore such errors during the party replication procedure.

Warning

Be advised: You must back up the target participant before you start the ACS import!

This ensures you have a clean recovery point if the ACS import is interrupted (crash, unintended node restart, etc.), or if you were otherwise unable to follow these manual operational steps to completion. Having this backup allows you to safely reset the target participant and still complete the ongoing offline party replication.

Offline party replication steps

Perform these steps in the exact order they are listed:

  1. Target: Package Vetting - Ensure the target participant vets all required packages.

  2. Source: Data Retention - Ensure the source participant retains data long enough for the export.

  3. Target: Authorization - Target participant authorizes new hosting with the onboarding flag set.

  4. Target: Isolation - Disconnect from all synchronizers and disable auto-reconnect upon restart.

  5. Source: Party Authorization - Party authorizes the replication with the onboarding flag set.

  6. Source: ACS Export - The participant currently hosting the party exports the ACS.

  7. Target: Backup - Back up the target participant before starting the ACS import.

  8. Target: ACS Import - The target participant imports the ACS.

  9. Target: Reconnect - The target participant reconnects to the synchronizers.

  10. Target: Onboarding Flag Clearance - The target participant issues the onboarding flag clearance.
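As a sanity check, the required ordering of these steps can be sketched as a simple checklist; the step names below are illustrative, not Canton commands:

```python
# Illustrative step names mirroring the list above; not Canton commands.
REQUIRED_ORDER = [
    "vet_packages", "ensure_data_retention", "authorize_target",
    "isolate_target", "authorize_party", "export_acs",
    "backup_target", "import_acs", "reconnect_target", "clear_onboarding_flag",
]

def order_is_valid(performed: list[str]) -> bool:
    """The procedure is only safe when the steps run in exactly this order."""
    return performed == REQUIRED_ORDER

print(order_is_valid(REQUIRED_ORDER))  # True

# Importing the ACS before taking the backup violates the procedure:
swapped = REQUIRED_ORDER.copy()
swapped[6], swapped[7] = swapped[7], swapped[6]
print(order_is_valid(swapped))  # False
```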

Warning

Offline party replication must be performed with care, strictly following the documented steps in order. Deviating from the outlined operational flow results in errors that may require significant manual correction.

This documentation provides a guide. Your environment may require adjustments. Test thoroughly in a test environment before production use.

Scenario description

The following steps show how to replicate party alice from the source participant to a new target participant on the synchronizer mysynchronizer. The source can be any participant already hosting the party.

@ val source = participant1
    source : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant1'
@ val target = participant2
    target : com.digitalasset.canton.console.LocalParticipantReference = Participant 'participant2'
@ val alice = source.parties.enable("Alice", synchronizer = Some("mysynchronizer")) // This command creates a local party. For external parties see the external party onboarding documentation (link found above in this page)
    alice : PartyId = Alice::12201ff69b1d...
@ val synchronizerId = source.synchronizers.id_of("mysynchronizer")
    synchronizerId : SynchronizerId = da::1220a82692ab...

1. Vet packages

Ensure the target participant vets all packages associated with contracts where the party is a stakeholder.

The party alice uses the package CantonExamples, which is vetted on the source participant but not yet on the target participant.

@ val mainPackageId = source.dars.list(filterName = "CantonExamples").head.mainPackageId
    mainPackageId : String = "20a62d457c71fc722640bdae97a4ecc0c615df7d5e05bf81f6a37f43d38b092e"
@ target.topology.vetted_packages.list()
    .filter(_.item.packages.exists(_.packageId == mainPackageId))
    .map(r => (r.context.storeId, r.item.participantId))
    res6: Seq[(TopologyStoreId, ParticipantId)] = Vector(
      (Synchronizer(id = Right(value = da::1220a82692ab...::34-0)), PAR::participant1::12201ff69b1d...)
    )

Hence, upload the missing DAR package to the target participant.

@ target.dars.upload("dars/CantonExamples.dar")
    res7: String = "20a62d457c71fc722640bdae97a4ecc0c615df7d5e05bf81f6a37f43d38b092e"
@ target.topology.vetted_packages.list()
    .filter(_.item.packages.exists(_.packageId == mainPackageId))
    .map(r => (r.context.storeId, r.item.participantId))
    res8: Seq[(TopologyStoreId, ParticipantId)] = Vector(
      (Synchronizer(id = Right(value = da::1220a82692ab...::34-0)), PAR::participant1::12201ff69b1d...),
      (Synchronizer(id = Right(value = da::1220a82692ab...::34-0)), PAR::participant2::1220a4d7463b...)
    )

2. Data Retention

Ensure that the retention period on the source participant is long enough to cover the entire duration between the following two events:

  1. The party-to-participant mapping topology transaction becoming effective.

  2. The completion of the ACS export from the source participant.
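A back-of-the-envelope check of this requirement, assuming you know (or can estimate) both timestamps:

```python
from datetime import datetime, timedelta

def retention_covers_export(mapping_effective_at: datetime,
                            export_finished_at: datetime,
                            retention: timedelta) -> bool:
    """The retention period must cover the whole window between the
    mapping becoming effective and the ACS export completing."""
    return export_finished_at - mapping_effective_at <= retention

effective = datetime(2026, 3, 1, 12, 0)
finished = datetime(2026, 3, 15, 12, 0)  # export completes 14 days later

print(retention_covers_export(effective, finished, timedelta(hours=720)))  # True: 720h = 30 days
print(retention_covers_export(effective, finished, timedelta(hours=240)))  # False: 240h = 10 days
```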

If you are unsure whether the current retention period is sufficient, or as an additional precaution, you should temporarily disable automatic pruning on the source participant.

Retrieve the current automatic pruning schedule. This command returns None if no schedule is set.

@ val pruningSchedule = source.pruning.get_schedule()
    pruningSchedule : Option[PruningSchedule] = Some(value = PruningSchedule(cron = "0 0 20 * * ?", maxDuration = 2h, retention = 720h))

Clear the pruning schedule to disable automatic pruning on the source node.

@ source.pruning.clear_schedule()

Warning

Manual pruning cannot be programmatically disabled on the source participant. Coordinate closely with other operators and ensure that no external automation triggers pruning until the ACS export is complete.

3. Authorize new hosting on the target participant

First, have the target participant agree to host party Alice with the desired participant permission (observation in this example).

Warning

Please ensure the onboarding flag is set with requiresPartyToBeOnboarded = true.

@ target.topology.party_to_participant_mappings
    .propose_delta(
      party = alice,
      adds = Seq((target.id, ParticipantPermission.Observation)),
      store = synchronizerId,
      requiresPartyToBeOnboarded = true
    )
    res11: SignedTopologyTransaction[TopologyChangeOp, PartyToParticipant] = SignedTopologyTransaction(
      TopologyTransaction(
        PartyToParticipant(
          Alice::12201ff69b1d...,
          PositiveNumeric(1),
          Vector(
            HostingParticipant(PAR::participant1::12201ff69b1d..., Submission, false),
            HostingParticipant(PAR::participant2::1220a4d7463b..., Observation, true)
          ),
          None
        ),
        serial = 2,
        operation = Replace,
        hash = SHA-256:4fc27cf93b27...
      ),
      signatures = 1220a4d7463b...,
      proposal
    )

4. Disconnect target participant from all synchronizers

@ target.synchronizers.disconnect_all()

5. Disable auto-reconnect on target participant

Ensure the target participant does not automatically reconnect to the synchronizer upon restart.

@ target.synchronizers.config("mysynchronizer")
    res13: Option[SynchronizerConnectionConfig] = Some(
      value = SynchronizerConnectionConfig(
        synchronizer = Synchronizer 'mysynchronizer',
        sequencerConnections = SequencerConnections(
          connections = Sequencer 'sequencer1' -> GrpcSequencerConnection(
            sequencerAlias = Sequencer 'sequencer1',
            sequencerId = SEQ::sequencer1::1220cb0a22fb...,
            endpoints = http://127.0.0.1:30259
          ),
          sequencer trust threshold = 1,
          sequencer liveness margin = 0,
          submission request amplification = SubmissionRequestAmplification(factor = 1, patience = 0s),
          sequencer connection pool delays = SequencerConnectionPoolDelays(
            minRestartDelay = 0.01s,
            maxRestartDelay = 10s,
            warnValidationDelay = 20s,
            subscriptionRequestDelay = 1s
          )
        ),
        manualConnect = false
      )
    )
@ target.synchronizers.modify("mysynchronizer", _.copy(manualConnect=true))
@ target.synchronizers.config("mysynchronizer")
    res15: Option[SynchronizerConnectionConfig] = Some(
      value = SynchronizerConnectionConfig(
        synchronizer = Synchronizer 'mysynchronizer',
        sequencerConnections = SequencerConnections(
          connections = Sequencer 'sequencer1' -> GrpcSequencerConnection(
            sequencerAlias = Sequencer 'sequencer1',
            sequencerId = SEQ::sequencer1::1220cb0a22fb...,
            endpoints = http://127.0.0.1:30259
          ),
          sequencer trust threshold = 1,
          sequencer liveness margin = 0,
          submission request amplification = SubmissionRequestAmplification(factor = 1, patience = 0s),
          sequencer connection pool delays = SequencerConnectionPoolDelays(
            minRestartDelay = 0.01s,
            maxRestartDelay = 10s,
            warnValidationDelay = 20s,
            subscriptionRequestDelay = 1s
          )
        ),
        manualConnect = true
      )
    )

6. Authorize new hosting for the party

To later find the ledger offset of the topology transaction which authorizes the new hosting arrangement, take the current ledger end offset on the source participant as a starting point:

@ val beforeActivationOffset = source.ledger_api.state.end()
    beforeActivationOffset : Long = 16L

Only after the target participant has been disconnected from all synchronizers, have party Alice agree to be hosted on it.

Warning

Again, ensure the onboarding flag is set, with requiresPartyToBeOnboarded = true for a local party and with onboarding = HostingParticipant.Onboarding() for an external party.

@ source.topology.party_to_participant_mappings
    .propose_delta(
      party = alice,
      adds = Seq((target.id, ParticipantPermission.Observation)),
      store = synchronizerId,
      requiresPartyToBeOnboarded = true
    )
    res17: SignedTopologyTransaction[TopologyChangeOp, PartyToParticipant] = SignedTopologyTransaction(
      TopologyTransaction(
        PartyToParticipant(
          Alice::12201ff69b1d...,
          PositiveNumeric(1),
          Vector(
            HostingParticipant(PAR::participant1::12201ff69b1d..., Submission, false),
            HostingParticipant(PAR::participant2::1220a4d7463b..., Observation, true)
          ),
          None
        ),
        serial = 2,
        operation = Replace,
        hash = SHA-256:4fc27cf93b27...
      ),
      signatures = 12201ff69b1d...,
      proposal
    )

7. Export ACS

Export Alice’s ACS from the source participant.

The following command internally finds the ledger offset at which party Alice is activated on the target participant, starting the search from beginOffsetExclusive.

It then exports Alice’s ACS from the source participant at that exact offset and stores it in the export file named party_replication.alice.acs.gz.

@ source.parties
    .export_party_acs(
      party = alice,
      synchronizerId = synchronizerId,
      targetParticipantId = target.id,
      beginOffsetExclusive = beforeActivationOffset,
      exportFilePath = "party_replication.alice.acs.gz",
    )

8. Optional: Re-enable automatic pruning

If you previously disabled automatic pruning on the source participant by following the data retention step, you may now re-enable it.

Run the following command using the original configuration parameters you recorded before disabling the schedule:

@ source.pruning.set_schedule("0 0 20 * * ?", 2.hours, 30.days)

9. Back up target participant

Warning

Please back up the target participant before importing the ACS!

10. Import ACS

Import Alice’s ACS into the target participant:

@ target.parties.import_party_acs("party_replication.alice.acs.gz")

11. Reconnect target participant to synchronizer

To later find the ledger offset of the topology transaction where the new hosting arrangement on the target participant has been authorized, take the current ledger end offset:

@ val targetLedgerEnd = target.ledger_api.state.end()
    targetLedgerEnd : Long = 17L

Now, reconnect the target participant to the synchronizer.

@ target.synchronizers.reconnect_local("mysynchronizer")
    res22: Boolean = true

12. Optional: Re-enable auto-reconnect on target participant

If you previously disabled auto-reconnect following the earlier step, you may now re-enable it. This is only necessary if the target participant was originally configured to reconnect automatically upon restart.

@ target.synchronizers.modify("mysynchronizer", _.copy(manualConnect=false))

13. Clear the participant’s onboarding flag

After the target participant has completed the ACS import and reconnected to the synchronizer, you must clear the onboarding flag. This signals that the participant is fully ready to host the party.

A dedicated command accomplishes the onboarding flag clearance. It issues the topology transaction to clear the flag for you, but only when it is safe to do so.

The following command uses the targetLedgerEnd captured in the previous step as the starting point to internally locate the effective party-to-participant mapping transaction that has activated alice on the target participant.

@ val (onboarded, minimalSafeClearingTs) = target.parties
    .clear_party_onboarding_flag(alice, synchronizerId, targetLedgerEnd)
    (onboarded, minimalSafeClearingTs) : (Boolean, Option[com.digitalasset.canton.data.CantonTimestamp]) = (false, Some(value = 2026-03-31T22:30:17.173335Z))

The command returns a tuple indicating the status:

  • (true, None): The onboarding flag is cleared. Proceed to the next step.

  • (false, Some(CantonTimestamp)): The onboarding flag is still set. Removal is safe only after the indicated timestamp.

If the onboarding flag is still set, you must wait at least until the indicated timestamp (minimalSafeClearingTs). Only then will calling this command actually result in a topology transaction to clear the onboarding flag, which becomes effective thereafter.

Because this command is idempotent, you can call it repeatedly. Thus, you may poll it until it confirms that the onboarding flag has been cleared, as the following snippet demonstrates.

@ utils.retry_until_true(timeout = 2.minutes, maxWaitPeriod = 30.seconds) {
      val (onboarded, _) = target.parties
        .clear_party_onboarding_flag(alice, synchronizerId, targetLedgerEnd)
      onboarded
    }

Note

The timeout is based on the default decision timeout of 1 minute.

Summary

You have successfully multi-hosted Alice on the source and target participants.