
object PekkoUtil extends HasLoggerName

Linear Supertypes
HasLoggerName, AnyRef, Any

Type Members

  1. class CombinedKillSwitch extends KillSwitch

    Combines two kill switches into one

  2. trait Commit extends AnyRef
  3. trait CompletingAndShutdownable extends AnyRef
  4. sealed trait Consumer[+T] extends AnyRef
  5. type ContextualizedFlow[+Context[+_], -A, +B, +Mat] = PekkoUtil.ContextualizedFlowOpsImpl.ContextualizedFlow[Context, A, B, Mat]
  6. type ContextualizedFlowOps[+Context[+_], +A, +Mat] = PekkoUtil.ContextualizedFlowOpsImpl.ContextualizedFlowOps[Context, A, Mat]
  7. sealed abstract class ContextualizedFlowOpsImpl extends AnyRef
  8. type ContextualizedSource[+Context[+_], +A, +Mat] = PekkoUtil.ContextualizedFlowOpsImpl.ContextualizedSource[Context, A, Mat]
  9. class DelayedKillSwitch extends KillSwitch

    Delegates to a future org.apache.pekko.stream.KillSwitch once the kill switch becomes available. If both com.digitalasset.canton.util.PekkoUtil.DelayedKillSwitch.shutdown and com.digitalasset.canton.util.PekkoUtil.DelayedKillSwitch.abort are called or com.digitalasset.canton.util.PekkoUtil.DelayedKillSwitch.abort is called multiple times before the delegate is available, then the winning call is non-deterministic.

  10. trait FutureQueue[T] extends CompletingAndShutdownable
  11. final case class FutureQueueConsumer[T](futureQueue: FutureQueue[(Long, T)], fromExclusive: Long) extends Product with Serializable
  12. class FutureQueuePullProxy[+T] extends CompletingAndShutdownable
    Annotations
    @SuppressWarnings()
  13. class IndexingFutureQueue[T] extends FutureQueue[T]
    Annotations
    @SuppressWarnings()
  14. class KillSwitchFlagCloseable extends KillSwitch

    A KillSwitch that calls FlagCloseable.close on shutdown or abort.

  15. class LoggingInHandler extends InHandler

    Pekko by default swallows exceptions thrown in org.apache.pekko.stream.stage.InHandlers. This wrapper makes sure that they are logged.

  16. class LoggingOutHandler extends OutHandler

    Pekko by default swallows exceptions thrown in org.apache.pekko.stream.stage.OutHandlers. This wrapper makes sure that they are logged.

  17. class PekkoSourceQueueToFutureQueue[T] extends FutureQueue[T]
  18. trait RecoveringFutureQueue[T] extends FutureQueue[T]

    RecoveringFutureQueue governs the life cycle of a FutureQueue, which is created asynchronously, can be initialized and started again after a failure, and operates on elements that define a monotonically increasing index. As part of the recovery process, the implementation keeps track of already offered elements and, based on the provided fromExclusive index, replays the missing elements. The FutureQueue needs to ensure, with the help of the Commit, that elements up to the committed index are fully processed. This should happen as soon as possible, because it allows the RecoveringFutureQueue implementation to forget the already offered elements.

  19. class RecoveringFutureQueueImpl[T] extends RecoveringFutureQueue[T]
    Annotations
    @SuppressWarnings()
  20. class RecoveringQueue[T] extends AnyRef
    Annotations
    @SuppressWarnings()
  21. trait RecoveringQueueMetrics extends AnyRef
  22. trait RetrySourcePolicy[S, -A] extends AnyRef

    Defines the policy for when restartSource should restart the source, and the state from which the source should be restarted.

  23. final case class WithKillSwitch[+A](value: A)(killSwitch: KillSwitch) extends WithGeneric[A, KillSwitch, WithKillSwitch] with Product with Serializable

    Container class for adding a org.apache.pekko.stream.KillSwitch to a single value. Two containers are equal if their contained values are equal.

    (Equality ignores the org.apache.pekko.stream.KillSwitch because it is usually not very meaningful. The org.apache.pekko.stream.KillSwitch is therefore in the second argument list.)
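
    A minimal sketch of the equality described above; the use of org.apache.pekko.stream.KillSwitches.shared is only a convenient way to construct two distinct kill switches for the example.

      import com.digitalasset.canton.util.PekkoUtil.WithKillSwitch
      import org.apache.pekko.stream.KillSwitches

      // Equality only compares the contained values; the differing kill switches are ignored.
      val a = WithKillSwitch(42)(KillSwitches.shared("a"))
      val b = WithKillSwitch(42)(KillSwitches.shared("b"))
      assert(a == b)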

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  6. def createActorSystem(namePrefix: String)(implicit ec: ExecutionContext): ActorSystem

    Create an Actor system using the existing execution context ec
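
    A minimal usage sketch; the system name and the use of the global execution context are illustrative only.

      import com.digitalasset.canton.util.PekkoUtil
      import org.apache.pekko.actor.ActorSystem
      import scala.concurrent.ExecutionContext

      // Build an actor system that reuses the given execution context
      // (here the global one, chosen only for the sake of the example).
      implicit val ec: ExecutionContext = ExecutionContext.global
      val system: ActorSystem = PekkoUtil.createActorSystem("canton-example")
      // Remember to terminate the system when it is no longer needed:
      // system.terminate()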

  7. def createExecutionSequencerFactory(namePrefix: String, logger: Logger)(implicit actorSystem: ActorSystem): ExecutionSequencerFactory

    Create a new execution sequencer factory (mainly used to create a ledger client) with the existing actor system actorSystem

  8. def dropIf[A, Mat](graph: FlowOps[A, Mat], count: Int, condition: (A) => Boolean): Repr[A]

    Drops the first count many elements from the graph that satisfy the condition. Keeps all elements that do not satisfy the condition.
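
    A usage sketch, assuming dropIf can be applied directly to a Source as in the signature above (the syntax object below presumably provides equivalent extension methods).

      import com.digitalasset.canton.util.PekkoUtil
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.scaladsl.{Sink, Source}

      implicit val system: ActorSystem = ActorSystem("dropIf-example")

      // Drop the first two even numbers; keep all odd numbers and any later even numbers.
      // Expected result: Vector(1, 3, 5, 6, 7, 8)
      val numbers = Source(1 to 8)
      val dropped = PekkoUtil.dropIf(numbers, count = 2, condition = (i: Int) => i % 2 == 0)
      val result = dropped.runWith(Sink.seq)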

  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  11. implicit def errorLoggingContextFromNamedLoggingContext(implicit loggingContext: NamedLoggingContext): ErrorLoggingContext

    Convert a com.digitalasset.canton.logging.NamedLoggingContext into an com.digitalasset.canton.logging.ErrorLoggingContext to fix the logger name to the current class name.

    Attributes
    protected
    Definition Classes
    HasLoggerName
  12. def exponentialRetryWithCap(minWait: Long, multiplier: Int, cap: Long): (Int) => Long
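
    A hypothetical usage sketch; the backoff formula itself is not documented here, so the comments rely only on the signature and the parameter names.

      import com.digitalasset.canton.util.PekkoUtil

      // Backoff schedule: roughly 100 ms to start, growing by a factor of 2 per attempt,
      // and never exceeding the cap of 5000. Only the shape (retry attempt => delay) is
      // given by the signature; the concrete values are assumptions for illustration.
      val backoff: Int => Long = PekkoUtil.exponentialRetryWithCap(minWait = 100L, multiplier = 2, cap = 5000L)
      val delayForAttempt3: Long = backoff(3) // bounded above by the cap
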
  13. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  14. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  15. def injectKillSwitch[A, Mat](graph: FlowOpsMat[A, Mat])(killSwitch: (Mat) => KillSwitch): ReprMat[WithKillSwitch[A], Mat]
  16. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  17. implicit def loggerNameFromThisClass: LoggerNameFromClass
    Attributes
    protected
    Definition Classes
    HasLoggerName
  18. def loggingAsyncCallback[A](logger: Logger, name: String)(asyncCallback: (A) => Unit): (A) => Unit

    Pekko by default swallows exceptions thrown in async callbacks. This wrapper makes sure that they are logged.

  19. def mapAsyncAndDrainUS[A, Mat, B](graph: FlowOps[A, Mat], parallelism: Int)(f: (A) => FutureUnlessShutdown[B])(implicit loggingContext: NamedLoggingContext): Repr[B]

    Version of mapAsyncUS that discards the com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdowns.

    Completes when upstream completes and all futures have been completed and all elements have been emitted.

  20. def mapAsyncUS[A, Mat, B](graph: FlowOps[A, Mat], parallelism: Int)(f: (A) => FutureUnlessShutdown[B])(implicit loggingContext: NamedLoggingContext): Repr[UnlessShutdown[B]]

    Version of org.apache.pekko.stream.scaladsl.FlowOps.mapAsync for a com.digitalasset.canton.lifecycle.FutureUnlessShutdown. If f returns com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown on one element of source, then the returned source returns com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown for all subsequent elements as well.

    If parallelism is one, ensures that f is called sequentially for each element of source and that f is not invoked on later stream elements if f returns com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown for an earlier element. If parallelism is greater than one, f may be invoked on later stream elements even though an earlier invocation results in f returning com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown.

    Emits when the Future returned by the provided function finishes for the next element in sequence

    Backpressures when the number of futures reaches the configured parallelism and the downstream backpressures or the first future is not completed

    Completes when upstream completes and all futures have been completed and all elements have been emitted, including those for which the future did not run due to earlier com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdowns.

    Cancels when downstream cancels

    parallelism

    The parallelism level. Must be at least 1.

    Exceptions thrown

    java.lang.IllegalArgumentException if parallelism is not positive.
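
    A minimal sketch, assuming an implicit com.digitalasset.canton.logging.NamedLoggingContext is in scope and that FutureUnlessShutdown.pure is available for lifting pure values (an assumption about the Canton API).

      import com.digitalasset.canton.lifecycle.{FutureUnlessShutdown, UnlessShutdown}
      import com.digitalasset.canton.logging.NamedLoggingContext
      import com.digitalasset.canton.util.PekkoUtil
      import org.apache.pekko.NotUsed
      import org.apache.pekko.stream.scaladsl.Source

      // Doubles every element. If some invocation returned AbortedDueToShutdown,
      // all subsequent elements would be emitted as AbortedDueToShutdown as well.
      def doubled(implicit loggingContext: NamedLoggingContext): Source[UnlessShutdown[Int], NotUsed] = {
        val numbers = Source(1 to 100)
        PekkoUtil.mapAsyncUS(numbers, parallelism = 4)(i => FutureUnlessShutdown.pure(i * 2))
      }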

  21. def mapAsyncUnorderedAndDrainUS[A, Mat, B](graph: FlowOps[A, Mat], parallelism: Int)(f: (A) => FutureUnlessShutdown[B])(implicit loggingContext: NamedLoggingContext): Repr[B]

    Same as mapAsyncAndDrainUS except it uses mapAsyncUnordered as the underlying async method. Therefore the elements emitted may come out of order. See org.apache.pekko.stream.scaladsl.Flow.mapAsyncUnordered for more details.

  22. def mapAsyncUnorderedUS[A, Mat, B](graph: FlowOps[A, Mat], parallelism: Int)(f: (A) => FutureUnlessShutdown[B])(implicit loggingContext: NamedLoggingContext): Repr[UnlessShutdown[B]]

    Same as mapAsyncUS except it uses mapAsyncUnordered as the underlying async method. Therefore the elements emitted may come out of order. See org.apache.pekko.stream.scaladsl.Flow.mapAsyncUnordered for more details.

  23. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  24. val noOpKillSwitch: KillSwitch
  25. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  26. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  27. def remember[A, Mat](graph: FlowOps[A, Mat], memory: NonNegativeInt): Repr[NonEmpty[Seq[A]]]

    Remembers the last memory many elements that have already been emitted previously. Passes those remembered elements downstream with each new element. The current element is the com.daml.nonempty.NonEmptyCollInstances.NEPreservingOps.last1 of the sequence.

    remember differs from org.apache.pekko.stream.scaladsl.FlowOps.sliding in that remember emits elements immediately when the given source emits, whereas org.apache.pekko.stream.scaladsl.FlowOps.sliding only after the source has emitted enough elements to fill the window.
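
    A small sketch of the described behaviour; NonNegativeInt.tryCreate is assumed as the way to construct the memory size.

      import com.digitalasset.canton.config.RequireTypes.NonNegativeInt
      import com.digitalasset.canton.util.PekkoUtil
      import org.apache.pekko.stream.scaladsl.Source

      // With a memory of 2, the stream 1, 2, 3, 4 is emitted as
      // Seq(1), Seq(1, 2), Seq(1, 2, 3), Seq(2, 3, 4); the current element is always the last one.
      val numbers = Source(1 to 4)
      val remembered = PekkoUtil.remember(numbers, memory = NonNegativeInt.tryCreate(2))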

  28. def restartSource[S, A](name: String, initial: S, mkSource: (S) => Source[A, (KillSwitch, Future[Done])], policy: RetrySourcePolicy[S, A])(implicit arg0: Pretty[S], loggingContext: NamedLoggingContext, materializer: Materializer): Source[WithKillSwitch[A], (KillSwitch, Future[Done])]

    Creates a source from mkSource from the initial state. Whenever this source terminates, policy determines whether another source shall be constructed (after a given delay) from a possibly new state. The returned source concatenates the output of all the constructed sources in order. At most one constructed source is active at any given point in time.

    Failures in the constructed sources are passed to the policy, but do not make it downstream. The policy is responsible for properly logging these errors if necessary.

    returns

    The concatenation of all constructed sources. This source is NOT a blueprint and MUST therefore be materialized at most once. Its materialized value provides a kill switch to stop retrying. Only the org.apache.pekko.stream.KillSwitch.shutdown method should be used; synchronization may not work correctly with org.apache.pekko.stream.KillSwitch.abort. The kill switch does not short-circuit the already constructed sources though. Downstream should not cancel; use the kill switch instead. The materialized scala.concurrent.Future can be used to synchronize on the computations for restarts: if the source is stopped with the kill switch, the future completes after the computations have finished.
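
    A sketch of the call shape only, assuming the caller supplies the retry policy, a Pretty[Int] instance, the logging context and a materializer; the import path of Pretty and the shape of the inner source are assumptions.

      import com.digitalasset.canton.logging.NamedLoggingContext
      import com.digitalasset.canton.logging.pretty.Pretty
      import com.digitalasset.canton.util.PekkoUtil
      import org.apache.pekko.Done
      import org.apache.pekko.stream.{KillSwitch, KillSwitches, Materializer}
      import org.apache.pekko.stream.scaladsl.{Keep, Source}
      import scala.concurrent.Future

      // Each attempt produces ten numbers starting from the current state; the inner source
      // is equipped with a kill switch and a termination future, as mkSource requires.
      def restartingNumbers(policy: PekkoUtil.RetrySourcePolicy[Int, Int])(implicit
          prettyInt: Pretty[Int],
          loggingContext: NamedLoggingContext,
          materializer: Materializer,
      ): Source[PekkoUtil.WithKillSwitch[Int], (KillSwitch, Future[Done])] =
        PekkoUtil.restartSource(
          name = "numbers",
          initial = 0,
          mkSource = (from: Int) =>
            Source(from until from + 10)
              .viaMat(KillSwitches.single)(Keep.right)
              .watchTermination()(Keep.both),
          policy = policy,
        )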

  29. def runSupervised[MaterializedValueT](graph: RunnableGraph[MaterializedValueT], errorLogMessagePrefix: String, isDone: (MaterializedValueT) => Boolean = (_: MaterializedValueT) => false, debugLogging: Boolean = false)(implicit mat: Materializer, loggingContext: ErrorLoggingContext): MaterializedValueT

    Utility function to run the graph supervised and stop on an unhandled exception.

    By default, a Pekko flow will discard exceptions. Use this method to avoid discarding exceptions.
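
    A minimal sketch, assuming an implicit Materializer and ErrorLoggingContext are in scope.

      import com.digitalasset.canton.logging.ErrorLoggingContext
      import com.digitalasset.canton.util.PekkoUtil
      import org.apache.pekko.Done
      import org.apache.pekko.stream.Materializer
      import org.apache.pekko.stream.scaladsl.{Keep, Sink, Source}
      import scala.concurrent.Future

      // Runs a simple counting stream; unhandled exceptions are logged with the given prefix
      // and stop the stream instead of being silently discarded.
      def runCounting(implicit mat: Materializer, loggingContext: ErrorLoggingContext): Future[Done] =
        PekkoUtil.runSupervised(
          Source(1 to 10).toMat(Sink.foreach(println))(Keep.right),
          errorLogMessagePrefix = "counting stream failed",
        )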

  30. def sinkIgnoreFUS[T](implicit ec: ExecutionContext): Sink[T, FutureUnlessShutdown[Done]]

    Custom Sink.ignore that materializes into FutureUnlessShutdown

  31. def statefulMapAsync[Out, Mat, S, T](graph: FlowOps[Out, Mat], initial: S)(f: (S, Out) => Future[(S, T)])(implicit loggingContext: NamedLoggingContext): Repr[T]

    A version of org.apache.pekko.stream.scaladsl.FlowOps.mapAsync that additionally allows passing state of type S between subsequent elements. Unlike org.apache.pekko.stream.scaladsl.FlowOps.statefulMapConcat, the state is passed explicitly. Must not be run with the supervision strategies org.apache.pekko.stream.Supervision.Restart or org.apache.pekko.stream.Supervision.Resume.
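
    A small sketch, assuming an implicit NamedLoggingContext is in scope: running sums where the accumulator is threaded through as the explicit state.

      import com.digitalasset.canton.logging.NamedLoggingContext
      import com.digitalasset.canton.util.PekkoUtil
      import org.apache.pekko.NotUsed
      import org.apache.pekko.stream.scaladsl.Source
      import scala.concurrent.Future

      // Emits the running sums 1, 3, 6, 10, threading the accumulator through as explicit state.
      def runningSums(implicit loggingContext: NamedLoggingContext): Source[Int, NotUsed] = {
        val numbers = Source(1 to 4)
        PekkoUtil.statefulMapAsync(numbers, initial = 0) { (acc, next) =>
          val sum = acc + next
          Future.successful((sum, sum))
        }
      }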

  32. def statefulMapAsyncContextualizedUS[Out, Mat, S, T, Context[_], C](graph: FlowOps[Context[Out], Mat], initial: S)(f: (S, C, Out) => FutureUnlessShutdown[(S, T)])(implicit loggingContext: NamedLoggingContext, Context: Aux[Context, C]): Repr[Context[UnlessShutdown[T]]]

    Lifts statefulMapAsyncUS over a context.

  33. def statefulMapAsyncUS[Out, Mat, S, T](graph: FlowOps[Out, Mat], initial: S)(f: (S, Out) => FutureUnlessShutdown[(S, T)])(implicit loggingContext: NamedLoggingContext): Repr[UnlessShutdown[T]]

    Combines mapAsyncUS with statefulMapAsync.

  34. def statefulMapAsyncUSAndDrain[Out, Mat, S, T](graph: FlowOps[Out, Mat], initial: S)(f: (S, Out) => FutureUnlessShutdown[(S, T)])(implicit loggingContext: NamedLoggingContext): Repr[T]
  35. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  36. def takeUntilThenDrain[A, Mat](graph: FlowOps[WithKillSwitch[A], Mat], condition: (A) => Boolean): Repr[WithKillSwitch[A]]

    Passes through all elements of the source until and including the first element that satisfies the condition. Thereafter pulls the kill switch of the first such element and drops all remaining elements of the source.

    Emits when upstream emits and all previously emitted elements do not meet the condition.

    Backpressures when downstream backpressures

    Completes when upstream completes

    Cancels when downstream cancels
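
    A sketch combining this with withUniqueKillSwitch (below), assuming both combinators can be applied directly to a Source.

      import com.digitalasset.canton.util.PekkoUtil
      import org.apache.pekko.stream.scaladsl.Source

      // Inject a kill switch into the stream, then let elements pass until (and including)
      // the first element >= 3; that element's kill switch is pulled and the rest is drained.
      // Emitted values: 1, 2, 3.
      val numbers = Source(1 to 100)
      val withSwitch = PekkoUtil.withUniqueKillSwitch(numbers)((_, killSwitch) => killSwitch)
      val truncated = PekkoUtil.takeUntilThenDrain(withSwitch, (i: Int) => i >= 3).map(_.value)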

  37. def toString(): String
    Definition Classes
    AnyRef → Any
  38. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  39. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  40. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  41. def withUniqueKillSwitch[A, Mat, Mat2](graph: FlowOpsMat[A, Mat])(mat: (Mat, UniqueKillSwitch) => Mat2): ReprMat[WithKillSwitch[A], Mat2]

    Adds an org.apache.pekko.stream.KillSwitches.single stage after the given source and injects the created kill switch into the stream

  42. object ContextualizedFlowOps
  43. object ContextualizedFlowOpsImpl
  44. object LoggingInHandler
  45. object LoggingOutHandler
  46. object RecoveringQueueMetrics
  47. object RetrySourcePolicy
  48. object WithKillSwitch extends WithGenericCompanion with Serializable
  49. object syntax

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
