Utility Setup¶
To deploy the Utility, you need to complete the following steps:
- Have a fully working validator node (see :ref:`validator-node-setup`)
- Upload the Utility Daml code to your validator node.
- Optionally deploy the Utility UI.
Upload the Utility Daml code¶
The Utility Daml code is shipped as a set of DAR packages, which need to be uploaded to the node.
Setup¶
For security reasons, the gRPC ports are not exposed by default on your node. To upload the DAR files, you first need to temporarily port forward the participant service (specifically, the Admin API port) to your local machine.
To connect to services running in Kubernetes, use kubectl port forwarding to map a port of a running pod to a local port.
The Canton Admin API, which handles gRPC communication, is usually exposed on port 5002 of the participant service. In a terminal session where kubectl is connected to the running cluster, port forward as follows:
kubectl port-forward -n <your-validator-namespace> <your-participant-pod-name> 5002:5002
The ports specified in the command above are: <local_port>:<container_port>
To check that this is working, run a gRPC command against the forwarded port on your local machine (in this example, list all DARs on the participant):
grpcurl -plaintext localhost:5002 com.digitalasset.canton.admin.participant.v30.PackageService.ListDars
Upload the DARs¶
The bundle files containing the DAR packages for the Utility are available in JFrog. Download version 0.7.4 of the bundle.
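As a rough sketch, the bundle can be fetched with curl using your JFrog credentials and unpacked into the dars directory that the upload script below expects. The repository path and bundle file name shown here are illustrative placeholders only; substitute the actual coordinates from JFrog:
# Illustrative only: replace the URL with the actual bundle location in JFrog
curl -fL -u "${ARTIFACTORY_USER}:${ARTIFACTORY_PASSWORD}" \
  -o utility-bundle-0.7.4.tar.gz \
  "https://digitalasset.jfrog.io/artifactory/<bundle-repository>/utility-bundle-0.7.4.tar.gz"
mkdir -p dars
tar -xzf utility-bundle-0.7.4.tar.gz -C dars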
The DARs can be uploaded to a participant node using the corresponding gRPC Admin API endpoint.
Here is an example upload-dar.sh script for uploading DARs:
#!/bin/bash

DAR_DIRECTORY="dars"
jwt_token="<enter token>"

canton_admin_api_url="${PARTICIPANT_HOST}:${CANTON_ADMIN_GRPC_PORT}"
canton_admin_api_grpc_base_service="com.digitalasset.canton.admin.participant.v30"
canton_admin_api_grpc_package_service="${canton_admin_api_grpc_base_service}.PackageService"

json() {
  declare input=${1:-$(</dev/stdin)}
  printf '%s' "${input}" | jq -c .
}

upload_dar() {
  local dar_directory=$1
  local dar=$2

  echo "Uploading dar to ledger: ${dar}"

  local base64_encoded_dar=$(base64 -w 0 "${dar_directory}/${dar}")
  # The base64 command may require adapting to your unix environment.
  # The above example is based on the GNU base64 implementation.
  # The BSD version would look something like:
  # local base64_encoded_dar=$(base64 -i "${dar_directory}/${dar}" | tr -d '\n')

  local grpc_upload_dar_request="{
    \"dars\": [
      {
        \"bytes\": \"${base64_encoded_dar}\",
        \"description\": \"${dar}\"
      }
    ],
    \"vet_all_packages\": true,
    \"synchronize_vetting\": true
  }"

  grpcurl \
    -plaintext \
    -H "Authorization: Bearer ${jwt_token}" \
    -d @ \
    "${canton_admin_api_url}" "${canton_admin_api_grpc_package_service}.UploadDar" \
    < <(echo "${grpc_upload_dar_request}" | json)

  echo "Dar '${dar}' successfully uploaded"
}

# Upload all dars from the specified directory
if [ -d "${DAR_DIRECTORY}" ]; then
  # List all files in the directory
  dars=$(ls "${DAR_DIRECTORY}")
  # Loop over each dar file
  for dar in ${dars}; do
    upload_dar "${DAR_DIRECTORY}" "${dar}"
  done
else
  echo "Directory not found: ${DAR_DIRECTORY}"
fi
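After running the script, you can confirm that the packages are present by listing the DARs again over the forwarded Admin API port, as in the check above (adding the bearer token if your Admin API requires it):
grpcurl \
  -plaintext \
  -H "Authorization: Bearer ${jwt_token}" \
  localhost:5002 \
  com.digitalasset.canton.admin.participant.v30.PackageService.ListDars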
Deploy the Utility UI¶
The Utility UI is an optional component that clients can deploy to execute the Utility workflows.
Warning
To be able to deploy the UI, the audience of both the Participant and the Validator currently has to be set to the same value.
Note
To use the UI, traffic routing must be configured on your cluster.
Create an OIDC application for the UI Frontend¶
Follow the same steps as for setting up Auth0 or an external OIDC provider. Specifically, create a new application, similar to the wallet/CNS one, named ‘Utility UI’. Once it has been created, update your AUTH_CLIENT_ID environment variable to the Client ID of that new application.
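If you manage this value as a shell environment variable while templating the deployment manifest shown further below, the update is a one-liner; the value here is a placeholder:
export AUTH_CLIENT_ID="<client-id-of-the-utility-ui-application>"  # placeholder: use the Client ID from your OIDC provider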
Determine the Utility operator party¶
To deploy the UI, you need to specify the operator party. Depending on the environment, the Utility operator party is:
Environment | Utility Operator Party
---|---
DevNet | auth0_007c65f857f1c3d599cb6df73775::1220d2d732d042c281cee80f483ab80f3cbaa4782860ed5f4dc228ab03dedd2ee8f9
TestNet | auth0_007c66019993301e3ed49d0e36e9::12206268795b181eafd1432facbb3a3c5711f1f8b743ea0e9c0050b32126b33071fa
MainNet | auth0_007c6643538f2eadd3e573dd05b9::12205bcc106efa0eaa7f18dc491e5c6f5fb9b0cc68dc110ae66f4ed6467475d7c78e
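For example, to target DevNet you could export the corresponding party ID into the OPERATOR_ID variable that the example deployment manifest further below references:
# DevNet operator party taken from the table above
export OPERATOR_ID="auth0_007c65f857f1c3d599cb6df73775::1220d2d732d042c281cee80f483ab80f3cbaa4782860ed5f4dc228ab03dedd2ee8f9"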
Download the Docker image¶
The Utility UI resides in the canton-network-utility-docker JFrog repository. The Docker images are accessible via digitalasset-canton-network-utility-docker.jfrog.io. For example, version 0.7.4 is available via digitalasset-canton-network-utility-docker.jfrog.io/frontend:0.7.4.
Note that you will have to log into this repository before attempting to pull down the image:
docker login digitalasset-canton-network-utility-docker.jfrog.io -u "<user_name>" -p "<user_password>"
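Once logged in, a quick way to confirm access is to pull the image directly:
docker pull digitalasset-canton-network-utility-docker.jfrog.io/frontend:0.7.4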
To allow your cluster to pull the necessary artifacts, you will need to create a Kubernetes secret and patch your service account to use that secret when pulling images in that namespace. For example:
kubectl create secret docker-registry utility-cred \
--docker-server=digitalasset-canton-network-utility-docker.jfrog.io \
--docker-username=${ARTIFACTORY_USER} \
--docker-password=${ARTIFACTORY_PASSWORD} \
-n $NAMESPACE
kubectl patch serviceaccount default -n $NAMESPACE \
-p '{"imagePullSecrets": [{"name": "utility-cred"}]}'
Deploy¶
The UI docker image expects the following environment variables to be set in the container:
Environment Variable | Description
---|---
AUTH_AUTHORITY | Your OIDC-compatible IAM URL. Example: https://my-tenant.eu.auth0.com
AUTH_CLIENT_ID | The client ID of your UI application
AUTH_AUDIENCE | The required audience of the participant. Example: https://ledger_api.example.com/
UTILITY_APP_OPERATOR_PARTY_ID | Set to DA’s operator party for the target environment (see “Determine the Utility operator party” above)
Utilising your traffic management solution of choice, you need to configure routing for the following requests (assuming the default ports set in the CN Helm charts):
- /api/validator/
  - Route to validator-app on port 5003.
  - This should be identical to the configuration required for the Wallet and ANS UIs.
- /api/json-api/
  - Route to the participant on port 7575.
  - match: ^/api/json-api(/|$)(.*)$
  - rewrite: /\2
  - Starting from version 0.3.5, if you would like to avoid the path rewrite above, you can set the Helm chart value jsonApiServerPathPrefix to /api/json-api in the participant-values.yaml file when setting up your validator node (a minimal sketch follows this list). Details on how to set up the validator node can be found in the Validator node setup documentation.
The container port for the frontend is 8080.
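As a minimal sketch of the jsonApiServerPathPrefix option (assuming the key sits at the top level of participant-values.yaml; the exact nesting may differ between Helm chart versions, so check your chart's values schema):
# participant-values.yaml (excerpt, version 0.3.5 or later) -- avoids the /api/json-api path rewrite
jsonApiServerPathPrefix: /api/json-api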
Example deployment manifest¶
Here is an example deployment; it is the same regardless of Ingress/Load Balancer/Traffic Manager:
# Create the deployment YAML file
cat <<EOF > ui-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: utility-ui
  name: utility-ui
  namespace: $NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: utility-ui
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: utility-ui
    spec:
      containers:
        - name: utility-ui
          image: "$IMAGE_LOCATION:$IMAGE_VERSION"
          env:
            - name: AUTH_AUTHORITY
              value: ""
            - name: AUTH_CLIENT_ID
              value: ""
            - name: AUTH_AUDIENCE
              value: "https://$HOST_ENV_VARIABLE"
            - name: UTILITY_APP_OPERATOR_PARTY_ID
              value: "$OPERATOR_ID"
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          resources:
            requests:
              cpu: 0.1
              memory: 240Mi
            limits:
              cpu: 1
              memory: 1536Mi
---
apiVersion: v1
kind: Service
metadata:
  name: utility-ui
  namespace: $NAMESPACE
spec:
  selector:
    app: utility-ui
  ports:
    - name: http
      port: 8080
      protocol: TCP
EOF
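With the file generated, apply it to the cluster in the usual way:
kubectl apply -f ui-deployment.yaml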
Example Ingresses¶
The following are ingresses that have been used and tested internally or provided by clients. They may not work out of the box for your environment, but they should provide useful starting points for configuring your own Ingress.
Example Nginx Ingress (GCP):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: utility-ingress
  namespace: $NAMESPACE
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - hosts:
        - utility.${HOST_ENV_VARIABLE}
      secretName: ${SECRET_NAME}
  ingressClassName: nginx
  rules:
    - host: utility.${HOST_ENV_VARIABLE}
      http:
        paths:
          - path: /()(.*)
            pathType: Prefix
            backend:
              service:
                name: utility-ui
                port:
                  number: 8080
          - path: /api/json-api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: participant
                port:
                  number: 7575
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: utility-ingress-validator
  namespace: $NAMESPACE
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.org/rewrites: "serviceName=validator-app rewrite=/api/validator/"
spec:
  tls:
    - hosts:
        - utility.${HOST_ENV_VARIABLE}
      secretName: ${SECRET_NAME}
  ingressClassName: nginx
  rules:
    - host: utility.${HOST_ENV_VARIABLE}
      http:
        paths:
          - path: /api/validator/
            pathType: Prefix
            backend:
              service:
                name: validator-app
                port:
                  number: 5003
Example Nginx Reverse Proxy (AWS):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-reverse-proxy
  namespace: validator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      containers:
        - name: nginx
          image: nginx:1.27.3
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config-volume
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      # ConfigMap where the actual routing is defined
      volumes:
        - name: config-volume
          configMap:
            name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-reverse-proxy
  namespace: validator
spec:
  selector:
    app: nginx-reverse-proxy
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort # <-- Because ALB seems to prefer this (https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1695)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: validator
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;

        # Requirements for Utility UI
        location /api/json-api/ {
          # Proxy the request to the participant on port 7575
          proxy_pass http://participant.validator.svc.cluster.local:7575/;
        }
      }
    }
Example Nginx Ingress (AWS):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: validator-ingress-utility-api
  namespace: validator
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: "{{ cert_arn }}"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: "{{ alb_group }}"
    alb.ingress.kubernetes.io/healthcheck-path: /api/validator/readyz
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200-399'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    alb.ingress.kubernetes.io/group.order: '7'
spec:
  ingressClassName: alb
  rules:
    - host: "utility.validator.{{ validator_hostname }}"
      http:
        paths:
          - path: /api/validator/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: validator-app
                port:
                  number: 5003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: validator-ingress-utility-participant-api
  namespace: validator
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: "{{ cert_arn }}"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: "{{ alb_group }}"
    alb.ingress.kubernetes.io/healthcheck-path: /api/json-api/readyz
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200-399'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    alb.ingress.kubernetes.io/group.order: '8'
spec:
  ingressClassName: alb
  rules:
    - host: "utility.validator.{{ validator_hostname }}"
      http:
        paths:
          - path: /api/json-api/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx-reverse-proxy
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: validator-ingress-utility
  namespace: validator
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: "{{ cert_arn }}"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: "{{ alb_group }}"
    alb.ingress.kubernetes.io/group.order: '9'
spec:
  ingressClassName: alb
  rules:
    - host: "utility.validator.{{ validator_hostname }}"
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: utility-ui
                port:
                  number: 8080
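Once your routing is in place, a quick smoke test is to hit the readiness endpoints that the ALB health checks above use. These paths are taken from the healthcheck-path annotations and may differ in your setup, and the hostname below is a placeholder for the one configured in your ingress:
# Placeholder hostname: substitute the host configured in your ingress rules
curl -sk "https://utility.<your-hostname>/api/validator/readyz"
curl -sk "https://utility.<your-hostname>/api/json-api/readyz"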