package util
Package Members
- package collection
- package retry
Type Members
- trait BatchAggregator[A, B] extends AnyRef
This batch aggregator exposes a BatchAggregator.run method that allows for batching scala.concurrent.Future computations, defined by a BatchAggregator.Processor.
Note: it is required that `getter` and `batchGetter` do not throw an exception. If they do, the number of in-flight requests could fail to be decremented, which would result in degraded performance or could even prevent calls to the getters. A usage sketch follows the next entry.
- class BatchAggregatorImpl[A, B] extends BatchAggregator[A, B]
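A usage sketch in Scala (illustrative only: the aggregator construction is elided, UserId, User, and lookup are hypothetical names, and run may require additional implicit parameters in the real API; getter, batchGetter, and run are named in the documentation above):
  import scala.concurrent.Future

  // Hypothetical item and result types for illustration.
  final case class UserId(id: String)
  final case class User(id: UserId, name: String)

  // Construction via a Processor (whose getter/batchGetter must not throw) is elided.
  val aggregator: BatchAggregator[UserId, User] = ???

  // Each caller submits a single item; concurrent calls are transparently
  // batched into one batchGetter invocation while a request is in flight.
  def lookup(id: UserId): Future[User] = aggregator.run(id)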
- trait BatchAggregatorUS[A, B] extends AnyRef
This batch aggregator exposes a BatchAggregatorUS.run method that allows for batching com.digitalasset.canton.lifecycle.FutureUnlessShutdown computations, defined by a BatchAggregatorUS.ProcessorUS.
Note: it is required that `getter` and `batchGetter` do not throw an exception. If they do, the number of in-flight requests could fail to be decremented, which would result in degraded performance or could even prevent calls to the getters.
- class BatchAggregatorUSImpl[A, B] extends BatchAggregatorUS[A, B]
- final case class ByteString190 extends LengthLimitedByteString with Product with Serializable
- final case class ByteString256 extends LengthLimitedByteString with Product with Serializable
- final case class ByteString4096 extends LengthLimitedByteString with Product with Serializable
- final case class ByteString6144 extends LengthLimitedByteString with Product with Serializable
- final case class BytesUnit(bytes: Long) extends Product with Serializable
- sealed abstract class Checked[+A, +N, +R] extends Product with Serializable
A monad for aborting and non-aborting errors. Non-aborting errors are accumulated in a cats.data.Chain until the first aborting error is hit. You can think of com.digitalasset.canton.util.Checked as an extension of `Either` that also supports errors that should not cause the computation to abort.
- A
  Type of aborting errors
- N
  Type of non-aborting errors
- R
  Result type of the monad
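A small sketch of the intended semantics (the constructor names Checked.abort, Checked.continueWithResult, and Checked.result are assumptions for illustration; only the accumulation semantics are documented above):
  // A = String (aborting error), N = String (non-aborting warning), R = Int
  def validatePort(p: Int): Checked[String, String, Int] =
    if (p < 0) Checked.abort(s"negative port $p") // aborting: the computation stops here
    else if (p < 1024) Checked.continueWithResult(s"privileged port $p", p) // warning, keep going
    else Checked.result(p)

  // Warnings from both steps accumulate in a cats.data.Chain;
  // an abort in either step short-circuits the whole computation.
  val checked: Checked[String, String, (Int, Int)] = for {
    a <- validatePort(80)   // records a warning and continues
    b <- validatePort(8080) // succeeds without warnings
  } yield (a, b)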
- final case class CheckedT[F[_], A, N, R](value: F[Checked[A, N, R]]) extends Product with Serializable
Monad transformer for Checked, allowing the effect of a monad `F` to be combined with the aborting and non-aborting failure effect of Checked. Similar to cats.data.EitherT.
- Annotations
  - @FutureTransformer(transformedTypeArgumentPosition = 0)
- trait CheckedTInstances extends CheckedTInstances1
- trait CheckedTInstances1 extends CheckedTInstances2
- trait CheckedTInstances2 extends AnyRef
- final case class Ctx[+Context, +Value](context: Context, value: Value, telemetryContext: TelemetryContext = NoOpTelemetryContext) extends Product with Serializable
Ctx wraps a value with some contextual information.
- sealed trait FailureMode extends AnyRef
Determines how the queue reacts to failures of previous tasks.
- class FlushFuture extends HasFlushFuture
Stand-alone implementation of HasFlushFuture.
- trait FromByteString[T] extends AnyRef
- trait HasFlushFuture extends NamedLogging
Provides a single flush scala.concurrent.Future that runs asynchronously. Tasks can be chained onto the flush future, although they will not run sequentially.
- final class LazyValWithContext[T, Context] extends AnyRef
"Implements" a
lazy val
field whose initialization expression can refer to implicit context information of typeContext
."Implements" a
lazy val
field whose initialization expression can refer to implicit context information of typeContext
. The "val" is initialized upon the first call to get, using the context information supplied for this call, like alazy val
.Instead of a plain lazy val field without context
class C { lazy val f: T = initializer }
use the following code to pass in aContext
:class C { private[this] val _f: LazyValWithContext[T, Context] = new LazyValWithContext[T, Context](context => initializer) def f(implicit context: Context): T = _f.get }
This class implements the same scheme as how the Scala 2.13 compiler implements
lazy val
s, as explained on https://docs.scala-lang.org/sips/improved-lazy-val-initialization.html (version V1) along with its caveats.- See also
TracedLazyVal To be used when the initializer wants to log something using the logger of the surrounding class
ErrorLoggingLazyVal To be used when the initializer wants to log errors using the logger of the caller
- trait LazyValWithContextCompanion[Context] extends AnyRef
- sealed trait LengthLimitedByteString extends AnyRef
This trait wraps a ByteString that is limited to a certain maximum length. Classes implementing this trait expose `create` and `tryCreate` methods to safely (and unsafely) construct such a ByteString.
The canonical use case is ensuring that we don't encrypt more data than the underlying crypto algorithm can: for example, Rsa2048OaepSha256 can only encrypt 190 bytes at a time.
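A brief sketch of the companion methods named above (the return type of `create` and the failure behavior of `tryCreate` are assumptions for illustration):
  import com.google.protobuf.ByteString

  val raw: ByteString = ByteString.copyFromUtf8("payload")

  // Safe construction: assumed to return a Left if raw exceeds 4096 bytes.
  val safe: Either[String, ByteString4096] = ByteString4096.create(raw)

  // Unsafe construction: assumed to throw if raw exceeds 4096 bytes.
  val unsafe: ByteString4096 = ByteString4096.tryCreate(raw)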
- trait LengthLimitedByteStringCompanion[A <: LengthLimitedByteString] extends AnyRef
Trait that implements methods commonly needed in the companion object of a LengthLimitedByteString.
- class MessageRecorder extends FlagCloseable with NamedLogging
Persists data for replay tests.
- type NamedLoggingLazyVal[T] = LazyValWithContext[T, NamedLoggingContext]
- trait NoCopy extends AnyRef
Prevents auto-generation of the copy method in a case class. Case classes with private constructors typically shouldn't have a copy method.
- class NoOpBatchAggregator[A, B] extends BatchAggregator[A, B]
- class NoOpBatchAggregatorUS[A, B] extends BatchAggregatorUS[A, B]
- final case class OrderedBucketMergeConfig[Name, +Config](threshold: PositiveInt, sources: NonEmpty[Map[Name, Config]]) extends Product with Serializable
- threshold
The threshold of equivalent elements to reach before an element can be emitted.
- sources
The configurations to be used with OrderedBucketMergeHubOps.makeSource to create a source.
- class OrderedBucketMergeHub[Name, A, Config, Offset, M] extends GraphStageWithMaterializedValue[FlowShape[OrderedBucketMergeConfig[Name, Config], Output[Name, (Config, Option[M]), A, Offset]], Future[Done]] with NamedLogging
A custom Pekko org.apache.pekko.stream.stage.GraphStage that merges several ordered source streams into one, based on those sources reaching a threshold for equivalent elements.
The ordered sources produce elements with totally ordered offsets. For a given threshold `t`, whenever `t` different sources have produced equivalent elements for an offset that is higher than the previous offset, the OrderedBucketMergeHub emits the map of all these equivalent elements as the next com.digitalasset.canton.util.OrderedBucketMergeHub.OutputElement to downstream. Elements from the other ordered sources with lower or equal offset that have not yet reached the threshold are dropped.
Every correct ordered source should produce the same sequence of offsets. Faulty sources can produce any sequence of elements they like. The threshold should be set to `F+1` where at most `F` sources are assumed to be faulty, and at least `2F+1` ordered sources should be configured. This ensures that the `F` faulty ordered sources can neither corrupt nor block the stream.
If this assumption is violated, the OrderedBucketMergeHub may deadlock, as it only looks at the next element of each ordered source (this avoids unbounded buffering and therefore ensures that downstream backpressure reaches the ordered sources). For example, given a threshold of 2 with three ordered sources, two of which are faulty, suppose the first elements of the sources have offsets 1, 2, 3, and that the first ordered source's second element has offset 3 and is equivalent to the third ordered source's first element. Then, by the above definition of merging, the stage could emit the elements with offset 3 and discard those with offsets 1 and 2. However, this is not yet implemented; the stream just does not emit anything. Nor are such deadlocks detected right now. This is because in an asynchronous system, there typically are ordered sources that have not yet delivered their next element, and possibly never will within useful time, say because they have crashed (which is not considered a fault). In the above example, suppose that the second ordered source has not yet emitted the element with offset 2. Then it is unknown whether the element with offset 1 should be emitted or not, because we do not know which ordered sources are correct. Suppose we decided to drop the elements with offset 1 from a correct ordered source and emit the ones with offset 3 instead. Then the second (delayed, but correct) ordered source can still send an equivalent element with offset 1, and so the decision to drop offset 1 was wrong in hindsight.
The OrderedBucketMergeHub manages the ordered sources. Their configurations and the threshold are coming through the OrderedBucketMergeHub's input stream as a OrderedBucketMergeConfig. As soon as a new OrderedBucketMergeConfig is available, the OrderedBucketMergeHub changes the ordered sources as necessary:
- Ordered sources are identified by their `Name`.
- Existing ordered sources whose name does not appear in the new configuration are stopped.
- If a new configuration contains a new name for an ordered source, a new ordered source is created using `ops`.
- If the configuration of an ordered source changes, the previous source is stopped and a new one with the new configuration is created.
The OrderedBucketMergeHub emits com.digitalasset.canton.util.OrderedBucketMergeHub.ControlOutput events to downstream:
- com.digitalasset.canton.util.OrderedBucketMergeHub.NewConfiguration signals that the new configuration is in place.
- com.digitalasset.canton.util.OrderedBucketMergeHub.ActiveSourceTerminated signals that an ordered source has completed or aborted with an error before it was stopped.
Since configuration changes are consumed eagerly, the OrderedBucketMergeHub buffers these com.digitalasset.canton.util.OrderedBucketMergeHub.ControlOutput events if downstream is not consuming them fast enough. The stream of configuration changes should therefore be slower than downstream; otherwise, the buffer will grow unboundedly and lead to java.lang.OutOfMemoryErrors eventually.
When the configuration stream completes or aborts, all ordered sources are stopped and the output stream completes.
An ordered source is stopped by pulling its org.apache.pekko.stream.KillSwitch and dropping all elements until the source completes or aborts. In particular, the ordered source is not just simply cancelled upon a configuration change or when the configuration stream completes. This allows for properly synchronizing the completion of the OrderedBucketMergeHub with the internal computations happening in the ordered sources. To that end, the OrderedBucketMergeHub materializes to a scala.concurrent.Future that completes when the corresponding futures from all created ordered sources have completed as well as the ordered sources themselves.
If downstream cancels, the OrderedBucketMergeHub cancels all sources and the input port, without draining them. Therefore, the materialized scala.concurrent.Future may or may not complete, depending on the shape of the ordered sources. For example, if the ordered sources' futures are created with a plain org.apache.pekko.stream.scaladsl.FlowOpsMat.watchTermination, it will complete because org.apache.pekko.stream.scaladsl.FlowOpsMat.watchTermination completes immediately when it sees a cancellation. Therefore, it is better to avoid downstream cancellations altogether.
Rationale for the merging logic:
This graph stage is meant to merge the streams of sequenced events from several sequencers on a client node. The operator configures `N` sequencer connections and specifies a threshold `T`. Suppose the operator assumes that at most `F` nodes out of `N` are faulty. So we need `F < T` for safety. For liveness, the operator wants to tolerate as many crashes of correct sequencer nodes as feasible. Let `C` be the number of tolerated crashes. Then `T <= N - C - F` because faulty sequencers may not deliver any messages. For a fixed `F`, `T = F + 1` is optimal as we can then tolerate `C = N - 2F - 1` crashed sequencer nodes.
In other words, if the operator wants to tolerate up to `F` faults and up to `C` crashes, then they should set `T = F + 1` and configure `N = 2F + C + 1` different sequencer connections.
If more than `C` sequencers have crashed, then the faulty sequencers can make the client deadlock. The client cannot detect this under the asynchrony assumption. Moreover, the client cannot distinguish between a sequencer node that is actively malicious and one that is merely accidentally faulty. In particular, if several sequencer nodes deliver inequivalent events, we currently silently drop them. TODO(#14365) Design and implement an alert mechanism
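As an illustrative instantiation of these formulas (not from the original documentation): to tolerate F = 1 faulty sequencer and C = 2 crashes, set T = F + 1 = 2 and configure N = 2F + C + 1 = 5 connections. If 2 of the 5 sequencers crash, 3 remain, of which at most 1 is faulty, so at least T = 2 correct sequencers still deliver equivalent elements and the stream keeps making progress.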
- trait OrderedBucketMergeHubOps[Name, A, Config, Offset, +M] extends AnyRef
- class RateLimiter extends AnyRef
Utility class that allows clients to keep track of a rate limit.
The decay rate limiter keeps track of the current rate, allowing temporary bursts at the risk of overloading the system too quickly.
Clients need to tell an instance whenever they intend to start a new task. The instance will inform the client whether the task can be executed while still meeting the rate limit.
Guarantees:
- Maximum burst size: if `checkAndUpdateRate` is called `n` times in parallel, at most `max(1, maxTasksPerSecond * maxBurstFactor)` calls may return `true`.
- Average rate: if `checkAndUpdateRate` is called at a rate of at least `maxTasksPerSecond` during `n` seconds, then the number of calls that return `true`, divided by `n`, is roughly `maxTasksPerSecond`.
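A minimal usage sketch (checkAndUpdateRate is named in the guarantees above; the constructor arguments and their types are assumptions for illustration):
  // Allow on average 100 tasks per second, with a maximum burst of
  // max(1, 100 * 0.5) = 50 calls returning true when invoked in parallel.
  val limiter = new RateLimiter(maxTasksPerSecond = 100, maxBurstFactor = 0.5)

  def trySubmit(task: () => Unit): Boolean =
    if (limiter.checkAndUpdateRate()) { // true iff running the task still meets the rate limit
      task()
      true
    } else false // caller should reject, delay, or drop the task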
- sealed trait ReassignmentTag[+T] extends Product with Serializable
In reassignment transactions, we deal with two synchronizers: the source synchronizer and the target synchronizer.
The `Source` and `Target` wrappers help differentiate between these two synchronizers, allowing us to manage their specific characteristics, such as protocol versions, static synchronizer parameters, and other synchronizer-specific details.
- trait SameReassignmentType[T[_]] extends AnyRef
A type class that ensures the reassignment type remains consistent across multiple parameters of a method.
This is useful when dealing with types that represent different reassignment contexts (e.g., `Source` and `Target`), and we want to enforce that all parameters share the same reassignment context.
Example:
  def f[F[_] <: ReassignmentTag[_]: SameReassignmentType](i: F[Int], s: F[String]) = ???

  // f(Source(1), Target("One")) // Does not compile: `Source` and `Target` are different reassignment types.
  // f(Source(1), Source("One")) // Compiles: both parameters use the same reassignment type `Source`.
- trait ShowUtil extends ShowSyntax
- class SimpleExecutionQueue extends PrettyPrinting with NamedLogging with FlagCloseableAsync
Functions executed with this class will only run when all previous calls have completed executing.
This can be used when async code should not be run concurrently.
By default, a task is only executed if the previous tasks have completed successfully, i.e., they neither failed nor were aborted due to shutdown.
If the queue is shut down, the tasks' execution is aborted due to shutdown too.
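A conceptual sketch (construction is elided; the name and signature of the scheduling method, as well as writeToDatabase and publishEvent, are hypothetical; only the sequential, stop-on-failure semantics are documented above):
  import scala.concurrent.Future

  val queue: SimpleExecutionQueue = ???

  // Each task starts only once all previously enqueued tasks completed successfully.
  queue.execute(Future { writeToDatabase() }, "write-to-database")
  queue.execute(Future { publishEvent() }, "publish-event") // runs only after the write succeeds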
- class SingleUseCell[A] extends AnyRef
This class provides a mutable container for a single value of type `A`. The value may be put at most once. A SingleUseCell therefore provides the following immutability guarantee: the value of a cell cannot change; once it has been put there, it will remain in the cell.
- trait SingletonTraverse[F[_]] extends Traverse[F]
cats.Traverse for containers with at most one element.
- class SnapshottableList[A] extends AnyRef
A mutable list to which elements can be prepended and where snapshots can be taken atomically. Both operations are constant-time. Thread-safe.
- class StampedLockWithHandle extends AnyRef
A stamped lock that allows passing around a lock handle to better guard methods that should only be called with an active lock.
For example:
  object Foo {
    val lock = new StampedLockWithHandle()

    def bar() = lockWithWriteLockHandle { implicit writeLock =>
      // do something
      baz()
      // do more stuff
    }

    def baz()(implicit writeLockHandle: lock.WriteLockHandle) = {
      // do some more stuff
    }
  }
In the above example, `baz` cannot be called unless a write lock was specifically acquired with `lock`.
- trait Thereafter[F[_]] extends AnyRef
Typeclass for computations with an operation that can run a side effect after the computation has finished.
The typeclass abstracts the following patterns so that they can be used for types other than scala.concurrent.Future:
  future.transform { result => val () = body(result); result }           // synchronous body
  future.transformWith { result => body(result).transform(_ => result) } // asynchronous body
Usage:
  import com.digitalasset.canton.util.Thereafter.syntax.*

  myAsyncComputation.thereafter(result => ...)  // synchronous body
  myAsyncComputation.thereafterF(result => ...) // asynchronous body
It is preferred over similar functions such as scala.concurrent.Future.andThen because it properly chains exceptions from the side-effecting computation back into the original computation.
- F
  The computation's type functor.
- trait ThereafterAsync[F[_]] extends Thereafter[F]
Extension of Thereafter that adds the possibility to run an asynchronous piece of code afterwards with proper synchronization and exception propagation.
- type TracedLazyVal[T] = LazyValWithContext[T, TraceContext]
- class TwoPhasePriorityAccumulator[A, B] extends AnyRef
A container with two phases for items with priorities:
- In the accumulation phase, items can be added with a priority via TwoPhasePriorityAccumulator.accumulate.
- In the draining phase, items can be removed in priority order via TwoPhasePriorityAccumulator.drain. The order of items with equal priority is unspecified.
TwoPhasePriorityAccumulator.stopAccumulating switches from the accumulation phase to the draining phase. Items can be removed from the container via the handle returned by TwoPhasePriorityAccumulator.accumulate.
- A
  The type of items to accumulate
- B
  The type of labels for the draining phase
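A sketch of the two phases (accumulate, stopAccumulating, and drain are named above; construction, signatures, and the priority ordering direction are assumptions for illustration):
  // Accumulation phase: add items with priorities (lower value = higher priority, assumed).
  val accumulator: TwoPhasePriorityAccumulator[String, String] = ???

  val handle = accumulator.accumulate("cleanup", 10) // assumed to return a removal handle
  accumulator.accumulate("urgent", 1)

  // Switch to the draining phase with a label of type B.
  accumulator.stopAccumulating("shutting down")

  // Draining phase: items come out in priority order ("urgent" before "cleanup").
  accumulator.drain().foreach { case (item, priority) => println(s"$priority: $item") }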
- final case class UByte(signed: Byte) extends NoCopy with Product with Serializable
- trait WithGeneric[+A, B, C[+_]] extends AnyRef
Generic implementation for creating a container of single `A`s paired with a value of type `B`, with appropriate `map` and `traverse` implementations.
- trait WithGenericCompanion extends AnyRef
Value Members
- val NamedLoggingLazyVal: LazyValWithContextCompanion[NamedLoggingContext]
- val TracedLazyVal: LazyValWithContextCompanion[TraceContext]
- object BatchAggregator
- object BatchAggregatorUS
- object BatchN
Forms dynamically-sized batches based on downstream backpressure.
- Under light load, this flow emits batches of size 1.
- Under moderate load, this flow emits batches according to the batch mode:
  - MaximizeConcurrency: emits batches of even sizes
  - MaximizeBatchSize: emits fewer but full batches
- Under heavy load (downstream saturated), this flow emits batches of `maxBatchSize`.
Moderate load: short intermittent backpressure from downstream that doesn't fill up the maximum batch capacity (maxBatchSize * maxBatchCount) of BatchN.
Heavy load: downstream backpressure causes the full batch capacity to fill up and BatchN to exert backpressure to upstream.
Under heavy load or when maxBatchCount == 1, CatchUpMode.MaximizeBatchSize and CatchUpMode.MaximizeConcurrency behave the same way, i.e. full batches are emitted.
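A sketch of plugging BatchN into a Pekko stream (the factory call with maxBatchSize and maxBatchCount follows the parameters described above; the exact signature, including any batch-mode argument, is an assumption):
  import org.apache.pekko.actor.ActorSystem
  import org.apache.pekko.stream.scaladsl.{Sink, Source}

  implicit val system: ActorSystem = ActorSystem("batchn-demo")

  // Maximum batch capacity is maxBatchSize * maxBatchCount = 64 elements;
  // once it fills up, BatchN backpressures upstream.
  Source(1 to 1000)
    .via(BatchN(maxBatchSize = 16, maxBatchCount = 4))
    .runWith(Sink.foreach(batch => println(s"batch of ${batch.size}")))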
- object BinaryFileUtil
Write and read byte strings to and from files.
- object BooleanUtil
- object ByteString190 extends LengthLimitedByteStringCompanion[ByteString190] with Serializable
- object ByteString256 extends LengthLimitedByteStringCompanion[ByteString256] with Serializable
- object ByteString4096 extends LengthLimitedByteStringCompanion[ByteString4096] with Serializable
- object ByteString6144 extends LengthLimitedByteStringCompanion[ByteString6144] with Serializable
- object ByteStringUtil
- object BytesUnit extends Serializable
- object ChainUtil
Provides utility functions for the `cats` implementation of a `Chain`. This is a data structure similar to a List, with constant-time prepend and append. Note that `Chain` has a performance hit when pattern matching, as there is no constant-time uncons operation.
Documentation on the `cats` `Chain`: https://typelevel.org/cats/datatypes/chain.html
- object Checked extends Serializable
- object CheckedT extends CheckedTInstances with Serializable
- object ContinueAfterFailure extends FailureMode
The queue will continue the execution of tasks even if previous tasks had failed.
- object CrashAfterFailure extends FailureMode
Causes the queue to crash the entire process if a task is scheduled after a previously failed task.
- object Ctx extends Serializable
- object DamlPackageLoader
Wrapper that retrieves parsed packages, consumable by the Daml interpreter, from a DAR file.
- object DelayUtil extends NamedLogging
Utility to create futures that succeed after a given delay.
Inspired by the odelay library, but with a restricted interface to avoid hazardous effects that could be caused by the use of a global executor service.
TODO(i4245): Replace all usages by Clock.
- object EitherTUtil
Utility functions for the `cats` cats.data.EitherT monad transformer: https://typelevel.org/cats/datatypes/eithert.html
- object EitherUtil
- object ErrorUtil
- object FutureInstances
- object FutureUnlessShutdownUtil
- object FutureUtil
- object GrpcStreamingUtils
- object HasFlushFuture
- object HexString
Conversion functions to and from hex strings.
- object IdUtil
Contains instances for the `Id` functor.
- object IterableUtil
- object JarResourceUtils
Utility methods for loading resource test files.
- object LfTransactionUtil
Helper functions to work with com.digitalasset.daml.lf.transaction.GenTransaction. Using these helper functions provides a buffer against upstream changes.
- object LoggerUtil
- object MapsUtil
- object MessageRecorder
- object MonadUtil
- object OptionUtil
- object OptionUtils
- object OrderedBucketMergeHub
- object OrderedBucketMergeHubOps
- object PathUtils
- object PekkoUtil extends HasLoggerName
- object PriorityBlockingQueueUtil
- object RangeUtil
- object ReassignmentTag extends Serializable
- object ResourceUtil
Utility code for doing proper resource management. A lot of it is based on https://medium.com/@dkomanov/scala-try-with-resources-735baad0fd7d
- object SeqUtil
- object SetCover
- object SetsUtil
- object ShowUtil extends ShowUtil
Utility class for clients who want to make use of pretty printing.
Import this as follows:
  import com.digitalasset.canton.util.ShowUtil.*
In some cases, an import at the top of the file will not make the `show` interpolator available. To work around this, you need to import this INSIDE of the class using it.
To enforce pretty printing, the `show` interpolator should be used for creating strings. That is, `show"$myComplexObject"` will result in a compile error if pretty printing is not implemented for `myComplexObject`. In contrast, `s"$myComplexObject"` will fall back to the default (non-pretty) toString implementation if pretty printing is not implemented for `myComplexObject`. Even if pretty printing is implemented for the type `T` of `myComplexObject`, `s"$myComplexObject"` will not use it if the compiler fails to infer `T: Pretty`.
- object SimpleExecutionQueue
- object SingletonTraverse extends Serializable
- object SnapshottableList
- object StackTraceUtil
- object StopAfterFailure extends FailureMode
Causes the queue to not process any further tasks after a previously failed task.
- object TextFileUtil
- object Thereafter
- object ThereafterAsync
- object TrieMapUtil
- object TryUtil
- object TwoPhasePriorityAccumulator
- object UByte extends Serializable
- object VersionUtil