Secure¶
The PQS application is a client of backend services (the ledger and the database); as such, it must respect the security settings mandated by those services, namely TLS and authentication:
TLS¶
Your server-side components (Canton and PostgreSQL) may require TLS to be used. Please refer to their documentation for instructions:
- for PostgreSQL, see https://www.postgresql.org/docs/current/ssl-tcp.html
- for Canton, see TLS API Configuration
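For illustration, a minimal server-side TLS setup on PostgreSQL might look like the following sketch. The certificate paths are placeholders, and the settings shown are the standard postgresql.conf TLS parameters; see the PostgreSQL documentation linked above for the authoritative details:
$ cat >> "$PGDATA/postgresql.conf" <<'EOF'
# Enable TLS for client connections (certificate paths are illustrative)
ssl = on
ssl_cert_file = '/path/to/postgres.crt'
ssl_key_file = '/path/to/postgres.key'
ssl_ca_file = '/path/to/ca.crt'
EOF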
Once TLS is configured on the servers, supply the appropriate values via the dedicated PQS parameters:
$ ./scribe.jar pipeline ledger postgres-document \
--source-ledger-tls-cert /path/to/ledger.crt \
--source-ledger-tls-key /path/to/ledger.pem \
--source-ledger-tls-cafile /path/to/ledger.crt \
--target-postgres-tls-cert /path/to/postgres.crt \
--target-postgres-tls-key /path/to/postgres.der \
--target-postgres-tls-cafile /path/to/postgres.crt \
--target-postgres-tls-mode VerifyFull
Ledger authentication¶
To run PQS with authentication, you need to turn it on via --source-ledger-auth OAuth. PQS uses the OAuth 2.0 Client Credentials flow [1].
$ ./scribe.jar pipeline ledger postgres-document \
--source-ledger-auth OAuth \
--pipeline-oauth-clientid my_client_id \
--pipeline-oauth-clientsecret deadbeef \
--pipeline-oauth-cafile ca.crt \
--pipeline-oauth-endpoint https://my-auth-server/token
PQS uses the supplied client credentials (clientid and clientsecret) to access the token endpoint (endpoint) of the OAuth service of your choice. The optional cafile parameter is a path to the Certificate Authority certificate used to access the token endpoint. If cafile is not set, the Java TrustStore is used.
Please make sure you have configured your Daml participant to use authorization (see Configure authorization service) and an authorization server to accept your client credentials for grant_type=client_credentials and scope=daml_ledger_api.
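To verify this setup independently of PQS, you can perform the same Client Credentials exchange by hand. This sketch reuses the placeholder endpoint and credentials from the example above:
$ curl --cacert ca.crt \
--data "grant_type=client_credentials" \
--data "client_id=my_client_id" \
--data "client_secret=deadbeef" \
--data "scope=daml_ledger_api" \
https://my-auth-server/token
A successful response contains an access_token that the participant must accept on its Ledger API.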
Audience-based token¶
For Audience-Based Tokens, use the --pipeline-oauth-parameters-audience parameter:
$ ./scribe.jar pipeline ledger postgres-document \
--source-ledger-auth OAuth \
--pipeline-oauth-clientid my_client_id \
--pipeline-oauth-clientsecret deadbeef \
--pipeline-oauth-cafile ca.crt \
--pipeline-oauth-endpoint https://my-auth-server/token \
--pipeline-oauth-scope None \
--pipeline-oauth-parameters-audience https://daml.com/jwt/aud/participant/my_participant_id
Scope-based token¶
For Scope-Based Tokens, use the --pipeline-oauth-scope parameter:
$ ./scribe.jar pipeline ledger postgres-document \
--source-ledger-auth OAuth \
--pipeline-oauth-clientid my_client_id \
--pipeline-oauth-clientsecret deadbeef \
--pipeline-oauth-cafile ca.crt \
--pipeline-oauth-endpoint https://my-auth-server/token \
--pipeline-oauth-scope myScope \
--pipeline-oauth-parameters-audience https://daml.com/jwt/aud/participant/my_participant_id
Note
The default value of the --pipeline-oauth-scope parameter is daml_ledger_api. The Ledger API requires daml_ledger_api in the list of scopes unless a custom target scope is configured.
Custom Daml claims tokens¶
PQS authenticates as a user defined through the User Identity Management feature of Canton. Consequently, Custom Daml Claims Access Tokens are not supported. An audience-based or scope-based token must be used instead.
Static access token¶
Alternatively, you can configure PQS to use a static access token (meaning it is not refreshed) using the --pipeline-oauth-accesstoken parameter:
$ ./scribe.jar pipeline ledger postgres-document \
--source-ledger-auth OAuth \
--pipeline-oauth-accesstoken my_access_token
Ledger API users and Daml parties¶
PQS connects to a participant (via the Ledger API) as a user defined through the User Identity Management feature of Canton. PQS gets its user identity by providing an OAuth token of that user. After authenticating, the participant has the authorization information to know what Daml Party data the user is allowed to access. By default, PQS will subscribe to data for all parties available to PQS’ authenticated user. However, this scope can be limited via the --pipeline-filter-parties parameter (see Party filtering).
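For example, to subscribe only to contracts visible to a single party (the party identifier below is a placeholder, and the exact filter expression syntax is described under Party filtering):
$ ./scribe.jar pipeline ledger postgres-document \
--source-ledger-auth OAuth \
--pipeline-filter-parties "Alice::*"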
Token expiry¶
JWT tokens [2] have an expiration time. PQS has a mechanism to automatically request a new access token from the Auth Server before the old access token expires. To set when PQS should try to request a new access token, use --pipeline-oauth-preemptexpiry (default “PT1M”, one minute), meaning: request a new access token one minute before the current access token expires. This new access token is used for any future Ledger API calls.
However, for streaming calls such as GetUpdates, the access token is part of the request that initiates the streaming. Canton versions prior to 2.9 terminate the stream with error PERMISSION_DENIED as soon as the old access token expires, to prevent streaming forever based on the old access token. Versions 2.9+ fail with code ABORTED and description ACCESS_TOKEN_EXPIRED, and PQS resumes streaming from the offset of the last successfully processed transaction.
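For example, to renew tokens two minutes before expiry instead of the default one minute (the value is an ISO-8601 duration):
$ ./scribe.jar pipeline ledger postgres-document \
--source-ledger-auth OAuth \
--pipeline-oauth-preemptexpiry PT2M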
PostgreSQL authentication¶
To authenticate to PostgreSQL, use dedicated parameters when launching the pipeline:
$ ./scribe.jar pipeline ledger postgres-document \
--target-postgres-password "${YOUR_DB_PASSWORD}" \
--target-postgres-username "${YOUR_DB_USER}"
Hardening recommendations¶
Use TLS: Always use TLS to encrypt data being transmitted to/from the Canton Participant and the PostgreSQL datastore. This is especially important when dealing with sensitive information. Ensure that only secure TLS versions are used (e.g. TLS 1.2+) and that strong cipher suites are configured. Client authentication should be used to ensure that only trusted clients can connect to the Canton Participant and PostgreSQL datastore, so that network-level security is not overly relied upon.
Logging: Ensure that logging is configured (see Logging) to avoid logging sensitive information. This includes transaction details and metadata (e.g. size) that are revealed at the TRACE and DEBUG levels. These log levels should be used with caution, and only in a controlled environment. In production, we recommend using the INFO or WARN levels.
Ledger Authorization: Follow the principle of least privilege when granting access to the Canton Participant Ledger API user that PQS uses (a provisioning sketch follows this list):
- Only canReadAs authorization for the Parties it requires; OR
- Only readAsAnyParty authorization if PQS is used as a participant-wide service and needs access to all Party data.
- No canActAs authorization (to submit commands): PQS has no capability to submit commands to the Canton Ledger API.
- No admin access to the Canton Ledger API.
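As an illustrative sketch only: a least-privilege PQS user can be provisioned through the Ledger API user management service. The example below uses grpcurl and assumes an admin token, a local participant on port 5011 with gRPC reflection enabled, and a placeholder party identifier; the exact service path (v1 vs v2) depends on your Canton version:
$ grpcurl -plaintext \
-H "Authorization: Bearer ${ADMIN_TOKEN}" \
-d '{"user": {"id": "pqs-reader"}, "rights": [{"can_read_as": {"party": "Alice::1220abcd"}}]}' \
localhost:5011 com.daml.ledger.api.v2.admin.UserManagementService/CreateUser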
Database Access: The datastore contains all ledger information obtained from the Canton Participant. Ensure that database users are tightly controlled and set to the minimum required privileges, as sketched after this list:
- Operational user: SQL Insert/Update/Delete/Copy, so PQS can maintain the datastore contents. No DDL rights should be in place; PQS does not need to change the database schema.
- Other users: No write access should be granted to any other user. Excessive reading by other clients (leading to overload) should be avoided, to ensure that PQS has sufficient resources to operate.
- Admin user: PQS needs to be able to apply schema changes to the database when deploying a new version containing database changes. This should be a separate user with the minimum required privileges to perform these operations. Also, redaction operations performed by an administrator require Select/Update rights to the database.
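A sketch of this separation using standard PostgreSQL statements; the role, database, and schema names are hypothetical, and production deployments should also consider default privileges for future tables:
$ psql -d pqs -c "CREATE ROLE pqs_app LOGIN PASSWORD '${PQS_APP_PASSWORD}'"
$ psql -d pqs -c "GRANT USAGE ON SCHEMA public TO pqs_app"
$ psql -d pqs -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO pqs_app"
$ psql -d pqs -c "CREATE ROLE pqs_reader LOGIN PASSWORD '${PQS_READER_PASSWORD}'"
$ psql -d pqs -c "GRANT USAGE ON SCHEMA public TO pqs_reader"
$ psql -d pqs -c "GRANT SELECT ON ALL TABLES IN SCHEMA public TO pqs_reader"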
Network Security: Ensure that network security is configured to restrict access to only essential connections (a firewall sketch follows this list):
- Use firewalls to restrict access to the PostgreSQL database, the Canton Participant, and the auth server.
- Use firewalls to restrict access to PQS. Even though PQS does not listen for any client connections, it does open listening TCP ports for health and diagnostic purposes. By default, the health and diagnostic ports are only accessible from localhost. Before changing this configuration, ensure that all hosts granted network-level access are necessary and trusted, to mitigate the risk of exploits or the exposure of sensitive information.
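For example, with ufw; the health/diagnostic port (8080) and the trusted monitoring subnet are assumptions to adapt to your deployment:
$ ufw default deny incoming
$ ufw allow from 10.0.1.0/24 to any port 8080 proto tcp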
Runtime Environment: Ensure that the runtime environment is secure:
- Keep the operating system and all software up to date with security patches.
- Use a firewall to restrict access to the PQS process (and the associated PostgreSQL datastore).
- Monitor logs for any suspicious activity, as well as errors and warnings.
- Validate that the runtime enforces least-privilege principles and contains only intended tools. We recommend using a minimal Java runtime environment (JRE) to reduce the attack surface:
  - Use a minimal JRE (not a JDK), for example Amazon Corretto or Azul Zulu, to reduce the attack surface.
  - Consider allowing only the jdk.attach [3] module in your chosen JRE. This enables the running process to produce more accurate stack traces when diagnostics are extracted.
  - Use a security manager to restrict the infrastructure permissions of the PQS process, if possible.
  - Use a containerized environment (e.g. Docker) to isolate the PQS process from the host system and other processes, as sketched below.
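A containerized launch might look like the following sketch; the base image tag and mount path are illustrative, and a real invocation needs the pipeline parameters shown in earlier sections:
$ docker run --rm --read-only \
-v /opt/pqs/scribe.jar:/app/scribe.jar:ro \
azul/zulu-openjdk:17-jre \
java -jar /app/scribe.jar pipeline ledger postgres-document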
Observability: Ensure that the environment is monitored for logs, metrics, and alerts on events of interest (see Observe):
- The PQS log is monitored for errors and warnings, to ensure these do not go unnoticed.
- The PQS runtime is monitored for disk, memory, and other JRE concerns such as heap usage and garbage collection cycle rates.
- The PQS health endpoint is polled regularly to verify availability (see the example after this list).
- The PostgreSQL and Canton Participant servers are similarly monitored.
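For example, a liveness poll; the port and path here are assumptions based on a default local setup, so consult Observe for the endpoints your version actually exposes:
$ curl -fsS http://localhost:8080/livez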
Database Backups: Ensure that the database backups are encrypted and stored securely. This is especially important when dealing with sensitive information. The database should be backed up regularly, and the backup process should be tested to ensure that it works as expected.
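A sketch of an encrypted logical backup using standard PostgreSQL and GnuPG tooling; the database name and recipient key are placeholders:
$ pg_dump -d pqs | gpg --encrypt --recipient backups@example.com > pqs-backup.sql.gpg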
Data Retention: Ensure that a data retention policy is in place and enforced. This includes regular purging of old data (e.g. PQS Pruning), and ensuring that sensitive information is redacted or deleted as required by your organization’s policies and regulations.
Software Updates: Ensure that the PQS software is kept up to date with the latest security patches and updates. This includes both the PQS software itself and any dependencies it may have.