Packages

package util


Package Members

  1. package collection
  2. package retry

Type Members

  1. trait BatchAggregator[A, B] extends AnyRef

    This batch aggregator exposes a BatchAggregator.run method that allows for batching scala.concurrent.Future computations, defined by a BatchAggregator.Processor.

    Note: it is required that getter and batchGetter do not throw an exception. If they do, the number of in-flight requests could fail to be decremented which would result in degraded performance or even prevent calls to the getters.
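
    As a minimal sketch of the batching pattern this trait abstracts (the names SketchProcessor and executeBatch below are illustrative assumptions; the real Processor in this package additionally carries tracing and logging concerns):

    import scala.concurrent.{ExecutionContext, Future}

    trait SketchProcessor[A, B] {
      // Processes a whole batch in one call; must not throw (see the note above).
      def executeBatch(items: Seq[A]): Future[Seq[B]]
    }

    // A trivial aggregator submits each item as a singleton batch; the real
    // implementation coalesces concurrent run calls into larger batches.
    def run[A, B](item: A, processor: SketchProcessor[A, B])(implicit
        ec: ExecutionContext
    ): Future[B] =
      processor.executeBatch(Seq(item)).map(_.head)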

  2. class BatchAggregatorImpl[A, B] extends BatchAggregator[A, B]
  3. trait BatchAggregatorUS[A, B] extends AnyRef

    This batch aggregator exposes a BatchAggregatorUS.run method that allows for batching com.digitalasset.canton.lifecycle.FutureUnlessShutdown computations, defined by a BatchAggregatorUS.ProcessorUS.

    Note: it is required that getter and batchGetter do not throw an exception. If they do, the number of in-flight requests could fail to be decremented which would result in degraded performance or even prevent calls to the getters.

  4. class BatchAggregatorUSImpl[A, B] extends BatchAggregatorUS[A, B]
  5. final case class ByteString190 extends LengthLimitedByteString with Product with Serializable
  6. final case class ByteString256 extends LengthLimitedByteString with Product with Serializable
  7. final case class ByteString4096 extends LengthLimitedByteString with Product with Serializable
  8. final case class ByteString6144 extends LengthLimitedByteString with Product with Serializable
  9. final case class BytesUnit(bytes: Long) extends Product with Serializable
  10. sealed abstract class Checked[+A, +N, +R] extends Product with Serializable

    A monad for aborting and non-aborting errors. Non-aborting errors are accumulated in a cats.data.Chain until the first aborting error is hit. You can think of com.digitalasset.canton.util.Checked as an extension of Either to also support errors that should not cause the computation to abort.

    A: Type of aborting errors
    N: Type of non-aborting errors
    R: Result type of the monad
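
    A hedged sketch of the intended semantics (the constructor names result, continue, and abort are assumptions based on this description):

    val ok: Checked[String, String, Int] = for {
      x <- Checked.result(1)
      _ <- Checked.continue("non-aborting warning") // recorded, computation continues
      y <- Checked.result(2)
    } yield x + y
    // ok carries the result 3 together with the accumulated warning

    val bad: Checked[String, String, Int] = for {
      _ <- Checked.continue("recorded before the abort")
      _ <- Checked.abort("fatal") // aborting error: subsequent steps do not run
      y <- Checked.result(42)
    } yield y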

  11. final case class CheckedT[F[_], A, N, R](value: F[Checked[A, N, R]]) extends Product with Serializable

    Monad Transformer for Checked, allowing the effect of a monad F to be combined with the aborting and non-aborting failure effect of Checked. Similar to cats.data.EitherT.

    Annotations
    @FutureTransformer(transformedTypeArgumentPosition = 0)
  12. trait CheckedTInstances extends CheckedTInstances1
  13. trait CheckedTInstances1 extends CheckedTInstances2
  14. trait CheckedTInstances2 extends AnyRef
  15. final case class Ctx[+Context, +Value](context: Context, value: Value, telemetryContext: TelemetryContext = NoOpTelemetryContext) extends Product with Serializable

    Ctx wraps a value with some contextual information.

  16. sealed trait FailureMode extends AnyRef

    Determines how the queue reacts to failures of previous tasks.

  17. class FlushFuture extends HasFlushFuture

    Stand-alone implementation of HasFlushFuture

  18. trait FromByteString[T] extends AnyRef
  19. trait HasFlushFuture extends NamedLogging

    Provides a single flush scala.concurrent.Future that runs asynchronously. Tasks can be chained onto the flush future, although they will not run sequentially.

  20. final class LazyValWithContext[T, Context] extends AnyRef

    "Implements" a lazy val field whose initialization expression can refer to implicit context information of type Context.

    "Implements" a lazy val field whose initialization expression can refer to implicit context information of type Context. The "val" is initialized upon the first call to get, using the context information supplied for this call, like a lazy val.

    Instead of a plain lazy val field without context

    class C {
      lazy val f: T = initializer
    }

    use the following code to pass in a Context:

    class C {
      private[this] val _f: LazyValWithContext[T, Context] =
        new LazyValWithContext[T, Context](context => initializer)
      def f(implicit context: Context): T = _f.get
    }

    This class implements the same scheme as how the Scala 2.13 compiler implements lazy vals, as explained on https://docs.scala-lang.org/sips/improved-lazy-val-initialization.html (version V1) along with its caveats.

    See also

    TracedLazyVal To be used when the initializer wants to log something using the logger of the surrounding class

    ErrorLoggingLazyVal To be used when the initializer wants to log errors using the logger of the caller

  21. trait LazyValWithContextCompanion[Context] extends AnyRef
  22. sealed trait LengthLimitedByteString extends AnyRef

    This trait wraps a ByteString that is limited to a certain maximum length. Classes implementing this trait expose create and tryCreate methods to safely (and non-safely) construct such a ByteString.

    The canonical use case is ensuring that we don't encrypt more data than the underlying crypto algorithm can: for example, Rsa2048OaepSha256 can only encrypt 190 bytes at a time.
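
    A hedged usage sketch based on the create and tryCreate methods described above (the exact error type of create is an assumption):

    import com.google.protobuf.ByteString

    val payload = ByteString.copyFromUtf8("fits into 190 bytes")
    val checked: Either[String, ByteString190] = ByteString190.create(payload) // Left if too long
    val unchecked: ByteString190 = ByteString190.tryCreate(payload) // throws if too long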

  23. trait LengthLimitedByteStringCompanion[A <: LengthLimitedByteString] extends AnyRef

    Trait that implements methods commonly needed in the companion object of a LengthLimitedByteString.

  24. class MessageRecorder extends FlagCloseable with NamedLogging

    Persists data for replay tests.

  25. type NamedLoggingLazyVal[T] = LazyValWithContext[T, NamedLoggingContext]
  26. trait NoCopy extends AnyRef

    Prevents auto-generation of the copy method in a case class. Case classes with private constructors typically shouldn't have a copy method.
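
    For example (the factory name Fingerprint.create is hypothetical):

    final case class Fingerprint private (hash: String) extends NoCopy
    // no auto-generated copy method, so callers cannot bypass a validating
    // factory such as Fingerprint.create(...) on the companion object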

  27. class NoOpBatchAggregator[A, B] extends BatchAggregator[A, B]
  28. class NoOpBatchAggregatorUS[A, B] extends BatchAggregatorUS[A, B]
  29. final case class OrderedBucketMergeConfig[Name, +Config](threshold: PositiveInt, sources: NonEmpty[Map[Name, Config]]) extends Product with Serializable

    threshold: The threshold of equivalent elements to reach before an element can be emitted.
    sources: The configurations to be used with OrderedBucketMergeHubOps.makeSource to create a source.

  30. class OrderedBucketMergeHub[Name, A, Config, Offset, M] extends GraphStageWithMaterializedValue[FlowShape[OrderedBucketMergeConfig[Name, Config], Output[Name, (Config, Option[M]), A, Offset]], Future[Done]] with NamedLogging

    A custom Pekko org.apache.pekko.stream.stage.GraphStage that merges several ordered source streams into one based on those sources reaching a threshold for equivalent elements.

    The ordered sources produce elements with totally ordered offsets. For a given threshold t, whenever t different sources have produced equivalent elements for an offset that is higher than the previous offset, the OrderedBucketMergeHub emits the map of all these equivalent elements as the next com.digitalasset.canton.util.OrderedBucketMergeHub.OutputElement to downstream. Elements from the other ordered sources with lower or equal offset that have not yet reached the threshold are dropped.

    Every correct ordered source should produce the same sequence of offsets. Faulty sources can produce any sequence of elements as they like. The threshold should be set to F+1 where at most F sources are assumed to be faulty, and at least 2F+1 ordered sources should be configured. This ensures that the F faulty ordered sources cannot corrupt the stream nor block it.

    If this assumption is violated, the OrderedBucketMergeHub may deadlock, as it only looks at the next element of each ordered source (this avoids unbounded buffering and therefore ensures that downstream backpressure reaches the ordered sources). For example, given a threshold of 2 with three ordered sources, two of which are faulty, suppose the first elements of the sources have offsets 1, 2, 3, and that the first ordered source's second element has offset 3 and is equivalent to the third ordered source's first element. Then, by the above definition of merging, the stage could emit the elements with offset 3 and discard those with offsets 1 and 2. However, this is not yet implemented; the stream just does not emit anything. Neither are such deadlocks detected right now.

    This is because in an asynchronous system, there typically are ordered sources that have not yet delivered their next element and possibly never will within useful time, say because they have crashed (which is not considered a fault). In the above example, suppose that the second ordered source had not emitted the element with offset 2. Then it is unknown whether the element with offset 1 should be emitted or not, because we do not know which ordered sources are correct. Suppose we had decided to drop the elements with offset 1 from a correct ordered source and emit the ones with offset 3 instead. Then the second (delayed, but correct) ordered source could still send an equivalent element with offset 1, and the decision to drop offset 1 would have been wrong in hindsight.

    The OrderedBucketMergeHub manages the ordered sources. Their configurations and the threshold are coming through the OrderedBucketMergeHub's input stream as a OrderedBucketMergeConfig. As soon as a new OrderedBucketMergeConfig is available, the OrderedBucketMergeHub changes the ordered sources as necessary:

    • Ordered sources are identified by their Name.
    • Existing ordered sources whose name does not appear in the new configuration are stopped.
    • If a new configuration contains a new name for an ordered source, a new ordered source is created using ops.
    • If the configuration of an ordered source changes, the previous source is stopped and a new one with the new configuration is created.

    The OrderedBucketMergeHub emits com.digitalasset.canton.util.OrderedBucketMergeHub.ControlOutput events to downstream.

    Since configuration changes are consumed eagerly, the OrderedBucketMergeHub buffers these com.digitalasset.canton.util.OrderedBucketMergeHub.ControlOutput events if downstream is not consuming them fast enough. The stream of configuration changes should therefore be slower than downstream; otherwise, the buffer will grow unboundedly and lead to java.lang.OutOfMemoryErrors eventually.

    When the configuration stream completes or aborts, all ordered sources are stopped and the output stream completes.

    An ordered source is stopped by pulling its org.apache.pekko.stream.KillSwitch and dropping all elements until the source completes or aborts. In particular, the ordered source is not just simply cancelled upon a configuration change or when the configuration stream completes. This allows for properly synchronizing the completion of the OrderedBucketMergeHub with the internal computations happening in the ordered sources. To that end, the OrderedBucketMergeHub materializes to a scala.concurrent.Future that completes when the corresponding futures from all created ordered sources have completed as well as the ordered sources themselves.

    If downstream cancels, the OrderedBucketMergeHub cancels all sources and the input port, without draining them. Therefore, the materialized scala.concurrent.Future may or may not complete, depending on the shape of the ordered sources. For example, if the ordered sources' futures are created with a plain org.apache.pekko.stream.scaladsl.FlowOpsMat.watchTermination, it will complete because org.apache.pekko.stream.scaladsl.FlowOpsMat.watchTermination completes immediately when it sees a cancellation. Therefore, it is better to avoid downstream cancellations altogether.

    Rationale for the merging logic:

    This graph stage is meant to merge the streams of sequenced events from several sequencers on a client node. The operator configures N sequencer connections and specifies a threshold T. Suppose the operator assumes that at most F nodes out of N are faulty. So we need F < T for safety. For liveness, the operator wants to tolerate as many crashes of correct sequencer nodes as feasible. Let C be the number of tolerated crashes. Then T <= N - C - F because faulty sequencers may not deliver any messages. For a fixed F, T = F + 1 is optimal as we can then tolerate C = N - 2F - 1 crashed sequencer nodes.

    In other words, if the operator wants to tolerate up to F faults and up to C crashes, then it should set T = F + 1 and configure N = 2F + C + 1 different sequencer connections.
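
    For instance, plugging concrete numbers into this rule:

    val F = 1             // tolerated faulty sequencers
    val C = 2             // tolerated crashed (but correct) sequencers
    val T = F + 1         // threshold: 2
    val N = 2 * F + C + 1 // configure 5 sequencer connections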

    If more than C sequencers have crashed, then the faulty sequencers can make the client deadlock. The client cannot detect this under the asynchrony assumption.

    Moreover, the client cannot distinguish either between whether a sequencer node is actively malicious or just accidentally faulty. In particular, if several sequencer nodes deliver inequivalent events, we currently silently drop them. TODO(#14365) Design and implement an alert mechanism

  31. trait OrderedBucketMergeHubOps[Name, A, Config, Offset, +M] extends AnyRef
  32. class RateLimiter extends AnyRef

    Utility class that allows clients to keep track of a rate limit.

    The decay rate limiter keeps track of the current rate and allows temporary bursts, at the risk of overloading the system too quickly.

    Clients need to tell an instance whenever they intend to start a new task. The instance will inform the client whether the task can be executed while still meeting the rate limit.

    Guarantees:

    • Maximum burst size: if checkAndUpdateRate is called n times in parallel, at most max(1, maxTasksPerSecond * maxBurstFactor) calls may return true.
    • Average rate: if checkAndUpdateRate is called at a rate of at least maxTasksPerSecond during n seconds, then the number of calls that return true divided by n is roughly maxTasksPerSecond.
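
    A hedged usage sketch (the constructor parameters are assumptions; checkAndUpdateRate is the method named in the guarantees above):

    val limiter = new RateLimiter(maxTasksPerSecond = 100, maxBurstFactor = 1.5)

    def trySubmit(task: () => Unit): Boolean =
      if (limiter.checkAndUpdateRate()) { task(); true }
      else false // rejected: running the task now would exceed the rate limit
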
  33. sealed trait ReassignmentTag[+T] extends Product with Serializable

    In reassignment transactions, we deal with two synchronizers: the source synchronizer and the target synchronizer. The Source and Target wrappers help differentiate between these two synchronizers, allowing us to manage their specific characteristics, such as protocol versions, static synchronizer parameters, and other synchronizer-specific details.

  34. trait SameReassignmentType[T[_]] extends AnyRef

    A type class that ensures the reassignment type remains consistent across multiple parameters of a method. This is useful when dealing with types that represent different reassignment contexts (e.g., Source and Target), and we want to enforce that all parameters share the same reassignment context.

    Example:

    def f[F[_] <: ReassignmentTag[_]: SameReassignmentType](i: F[Int], s: F[String]) = ???
    
    // f(Source(1), Target("One"))  // This will not compile, as `Source` and `Target` are different reassignment types.
    // f(Source(1), Source("One"))  // This will compile, as both parameters are of the same reassignment type `Source`.
  35. trait ShowUtil extends ShowSyntax
  36. class SimpleExecutionQueue extends PrettyPrinting with NamedLogging with FlagCloseableAsync

    Functions executed with this class will only run when all previous calls have completed executing. This can be used when async code should not be run concurrently.

    The default semantics is that a task is only executed if the previous tasks have completed successfully, i.e., they did not fail nor was the task aborted due to shutdown.

    If the queue is shutdown, the tasks' execution is aborted due to shutdown too.
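
    A minimal, self-contained sketch of the queueing idea (not this class's actual API; the real queue additionally reacts to failures according to its FailureMode and to shutdown):

    import scala.concurrent.{ExecutionContext, Future}

    final class SerialQueueSketch(implicit ec: ExecutionContext) {
      private var last: Future[Any] = Future.unit

      def execute[A](task: => Future[A]): Future[A] = synchronized {
        val next = last.transformWith(_ => task) // starts only after the previous task settles
        last = next
        next
      }
    }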

  37. class SingleUseCell[A] extends AnyRef

    This class provides a mutable container for a single value of type A. The value may be put at most once. A SingleUseCell therefore provides the following immutability guarantee: The value of a cell cannot change; once it has been put there, it will remain in the cell.
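
    A minimal sketch of this contract (the real class may differ in method names):

    import java.util.concurrent.atomic.AtomicReference

    final class SingleUseCellSketch[A] {
      private val cell = new AtomicReference[Option[A]](None)

      /** Returns None if the value was stored, or Some(existing) if the cell was already occupied. */
      def putIfAbsent(value: A): Option[A] =
        if (cell.compareAndSet(None, Some(value))) None else cell.get()

      def get: Option[A] = cell.get()
    }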

  38. trait SingletonTraverse[F[_]] extends Traverse[F]

    cats.Traverse for containers with at most one element.

  39. class SnapshottableList[A] extends AnyRef

    A mutable list to which elements can be prepended and where snapshots can be taken atomically. Both operations are constant-time. Thread safe.

  40. class StampedLockWithHandle extends AnyRef

    A stamped lock that allows passing around a lock handle to better guard methods that should only be called with an active lock.

    For example:

    object Foo {
      val lock = new StampedLockWithHandle()

      def bar() = lock.withWriteLockHandle { implicit writeLock =>
        // do something
        baz()
        // do more stuff
      }

      def baz()(implicit writeLockHandle: lock.WriteLockHandle) = {
        // do some more stuff
      }
    }

    In the above example, baz cannot be called unless a write lock handle has been acquired, specifically from lock.

  41. trait Thereafter[F[_]] extends AnyRef

    Typeclass for computations with an operation that can run a side effect after the computation has finished.

    The typeclass abstracts the following patterns so that it can be used for types other than scala.concurrent.Future.

    future.transform { result => val () = body(result); result } // synchronous body
    future.transformWith { result => body(result).transform(_ => result) } // asynchronous body

    Usage:

    import com.digitalasset.canton.util.Thereafter.syntax.*
    
    myAsyncComputation.thereafter(result => ...) // synchronous body
    myAsyncComputation.thereafterF(result => ...) // asynchronous body

    It is preferred to similar functions such as scala.concurrent.Future.andThen because it properly chains exceptions from the side-effecting computation back into the original computation.

    F: The computation's type functor.

  42. trait ThereafterAsync[F[_]] extends Thereafter[F]

    Extension of Thereafter that adds the possibility to run an asynchronous piece of code afterwards with proper synchronization and exception propagation.

  43. type TracedLazyVal[T] = LazyValWithContext[T, TraceContext]
  44. class TwoPhasePriorityAccumulator[A, B] extends AnyRef

    A container with two phases for items with priorities:

    1. In the accumulation phase, items can be added with a priority via TwoPhasePriorityAccumulator.accumulate.
    2. In the draining phase, items can be removed in priority order via TwoPhasePriorityAccumulator.drain. The order of items with equal priority is unspecified.

    TwoPhasePriorityAccumulator.stopAccumulating switches from the accumulation phase to the draining phase. Items can be removed from the container via the handle returned by TwoPhasePriorityAccumulator.accumulate.

    A: The type of items to accumulate
    B: The type of labels for the draining phase
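
    A hedged usage sketch (constructor arguments and exact signatures are assumptions based on the description above):

    val acc = new TwoPhasePriorityAccumulator[String, String](None)
    acc.accumulate("drained second", 10)
    acc.accumulate("drained first", 1)
    acc.stopAccumulating("drain label") // switch to the draining phase
    acc.drain().foreach { case (item, priority) => println(s"$priority: $item") }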

  45. final case class UByte(signed: Byte) extends NoCopy with Product with Serializable
  46. trait WithGeneric[+A, B, C[+_]] extends AnyRef

    Generic implementation for creating a container of a single A paired with a value of type B, with appropriate map and traverse implementations.

  47. trait WithGenericCompanion extends AnyRef

Value Members

  1. val NamedLoggingLazyVal: LazyValWithContextCompanion[NamedLoggingContext]
  2. val TracedLazyVal: LazyValWithContextCompanion[TraceContext]
  3. object BatchAggregator
  4. object BatchAggregatorUS
  5. object BatchN

    Forms dynamically-sized batches based on downstream backpressure.

    • Under light load, this flow emits batches of size 1.
    • Under moderate load, this flow emits batches according to the batch mode:
      • MaximizeConcurrency: emits batches of even sizes
      • MaximizeBatchSize: emits fewer but full batches
    • Under heavy load (downstream saturated), this flow emits batches of maxBatchSize.

    moderate load: short intermittent backpressure from downstream that doesn't fill up the maximum batch capacity (maxBatchSize * maxBatchCount) of BatchN.

    heavy load: downstream backpressure causes the full batch capacity to fill up and BatchN to exert backpressure to upstream.

    Under heavy load or when maxBatchCount == 1, CatchUpMode.MaximizeBatchSize and CatchUpMode.MaximizeConcurrency behave the same way, i.e. full batches are emitted.
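
    A hedged usage sketch in a Pekko stream (the exact parameter list of BatchN.apply, including how the batch mode is passed, is an assumption):

    import org.apache.pekko.stream.scaladsl.Source
    import scala.concurrent.Future

    def processBatch(batch: Seq[Int]): Future[Unit] = Future.unit // hypothetical

    val batched = Source(1 to 1000)
      .via(BatchN(maxBatchSize = 16, maxBatchCount = 4))
      .mapAsync(parallelism = 4)(processBatch)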

  6. object BinaryFileUtil

    Write and read byte strings to and from files.

  7. object BooleanUtil
  8. object ByteString190 extends LengthLimitedByteStringCompanion[ByteString190] with Serializable
  9. object ByteString256 extends LengthLimitedByteStringCompanion[ByteString256] with Serializable
  10. object ByteString4096 extends LengthLimitedByteStringCompanion[ByteString4096] with Serializable
  11. object ByteString6144 extends LengthLimitedByteStringCompanion[ByteString6144] with Serializable
  12. object ByteStringUtil
  13. object BytesUnit extends Serializable
  14. object ChainUtil

    Provides utility functions for the cats implementation of a Chain. This is a data structure similar to a List, with constant-time prepend and append. Note that the Chain has a performance hit when pattern matching, as there is no constant-time uncons operation.

    Documentation on the cats Chain: https://typelevel.org/cats/datatypes/chain.html.
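
    For example, with the cats Chain:

    import cats.data.Chain

    val c = Chain(2, 3)
    val c2 = (1 +: c) :+ 4 // Chain(1, 2, 3, 4); prepend and append are constant-time
    c2.uncons match {      // uncons is not constant-time, hence the pattern-matching cost
      case Some((head, tail)) => println(s"head = $head, rest = $tail")
      case None => println("empty")
    }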

  15. object Checked extends Serializable
  16. object CheckedT extends CheckedTInstances with Serializable
  17. object ContinueAfterFailure extends FailureMode

    The queue will continue the execution of tasks even if previous tasks had failed.

  18. object CrashAfterFailure extends FailureMode

    Causes the queue to crash the entire process if a task is scheduled after a previously failed task.

  19. object Ctx extends Serializable
  20. object DamlPackageLoader

    Wrapper that retrieves parsed packages from a DAR file consumable by the Daml interpreter.

  21. object DelayUtil extends NamedLogging

    Utility to create futures that succeed after a given delay.

    Inspired by the odelay library, but with a restricted interface to avoid hazardous effects that could be caused by the use of a global executor service.

    TODO(i4245): Replace all usages by Clock.

  22. object EitherTUtil

    Utility functions for the cats.data.EitherT monad transformer. See https://typelevel.org/cats/datatypes/eithert.html

  23. object EitherUtil
  24. object ErrorUtil
  25. object FutureInstances
  26. object FutureUnlessShutdownUtil
  27. object FutureUtil
  28. object GrpcStreamingUtils
  29. object HasFlushFuture
  30. object HexString

    Conversion functions to and from hex strings.

  31. object IdUtil

    Contains instances for the Id functor.

  32. object IterableUtil
  33. object JarResourceUtils

    Utility methods for loading resource test files.

  34. object LfTransactionUtil

    Helper functions to work with com.digitalasset.daml.lf.transaction.GenTransaction. Using these helper functions is useful to provide a buffer from upstream changes.

  35. object LoggerUtil
  36. object MapsUtil
  37. object MessageRecorder
  38. object MonadUtil
  39. object OptionUtil
  40. object OptionUtils
  41. object OrderedBucketMergeHub
  42. object OrderedBucketMergeHubOps
  43. object PathUtils
  44. object PekkoUtil extends HasLoggerName
  45. object PriorityBlockingQueueUtil
  46. object RangeUtil
  47. object ReassignmentTag extends Serializable
  48. object ResourceUtil

    Utility code for doing proper resource management. A lot of it is based on https://medium.com/@dkomanov/scala-try-with-resources-735baad0fd7d
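
    A hedged usage sketch, assuming a try-with-resources style withResource helper:

    import java.io.FileInputStream

    val firstByte: Int = ResourceUtil.withResource(new FileInputStream("data.bin")) { in =>
      in.read() // the stream is closed afterwards, even if the body throws
    }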

  49. object SeqUtil
  50. object SetCover
  51. object SetsUtil
  52. object ShowUtil extends ShowUtil

    Utility class for clients who want to make use of pretty printing. Import this as follows:

    import com.digitalasset.canton.util.ShowUtil.*

    In some cases, an import at the top of the file will not make the show interpolator available. To work around this, you need to import this INSIDE the class that uses it.

    To enforce pretty printing, the show interpolator should be used for creating strings. That is, show"$myComplexObject" will result in a compile error if pretty printing is not implemented for myComplexObject. In contrast, s"$myComplexObject" will fall back to the default (non-pretty) toString implementation if pretty printing is not implemented for myComplexObject. Even if pretty printing is implemented for the type T of myComplexObject, s"$myComplexObject" will not use it if the compiler fails to infer T: Pretty.
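
    For example (assuming a Pretty[PartyId] instance is defined):

    import com.digitalasset.canton.util.ShowUtil.*

    final case class PartyId(uid: String) // Pretty[PartyId] assumed to be in scope

    def render(party: PartyId): String =
      show"allocated party $party" // compile error if no Pretty[PartyId] exists
    // whereas s"allocated party $party" always compiles, falling back to toString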

  53. object SimpleExecutionQueue
  54. object SingletonTraverse extends Serializable
  55. object SnapshottableList
  56. object StackTraceUtil
  57. object StopAfterFailure extends FailureMode

    Causes the queue to not process any further tasks after a previously failed task.

  58. object TextFileUtil
  59. object Thereafter
  60. object ThereafterAsync
  61. object TrieMapUtil
  62. object TryUtil
  63. object TwoPhasePriorityAccumulator
  64. object UByte extends Serializable
  65. object VersionUtil
