Akka Streams: Designing a full project

In the previous post I talked about how we use Akka Streams as a replacement for the Spring Integration behavior. Now I'll show you how we solved some problems and removed boilerplate code by extracting common design patterns. To get started, I created a demonstration project that you can find on GitHub.

The first thing we did was to create a utils object that extracts the code to create all the common Partial Flow Graphs: the PartialFlowGraphUtils object.


We said before that every time we have a Filter we need a Broadcast with two outputs (in our case because we are replacing the behavior of the Spring Integration Filter, which has a Discard Channel), so we defined a method that creates the FanOutShape as a Partial Flow Graph:

def filterPartialFlowGraph(filterFunction: FlowMessage => Boolean) = FlowGraph.partial() { implicit b =>
  val bcast = b.add(Broadcast[FlowMessage](2))
  val filter = b.add(Flow[FlowMessage] filter (filterFunction(_)))
  val notFilter = b.add(Flow[FlowMessage] filter (!filterFunction(_)))

  bcast ~> filter
  bcast ~> notFilter

  UniformFanOutShape(bcast.in, filter.outlet, notFilter.outlet)
}
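As a plain-collections analogy (this is a hypothetical sketch, not the streams API itself), the filter/notFilter pair behaves like `partition`: every element goes to exactly one of the two outputs, mirroring the main output and the discard channel.

```scala
object FilterSplitDemo {
  // partition sends each element to exactly one of the two outputs,
  // just like the Broadcast feeding a filter and its negation.
  def split[T](input: List[T])(p: T => Boolean): (List[T], List[T]) =
    input.partition(p)

  def main(args: Array[String]): Unit = {
    val (kept, discarded) = split(List(1, 2, 3, 4, 5))(_ % 2 == 0)
    println(kept)      // List(2, 4)
    println(discarded) // List(1, 3, 5)
  }
}
```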

We also defined a function to create a FlowShape:

def partialFlow(function: FlowMessage => FlowMessage) = Flow[FlowMessage] map (function(_))

And finally, a FlowShape with a function that adds a header to the FlowMessage:

def partialFlowWithHeader(header: MessageHeader) = partialFlow(fm => addHeader(fm, header))

def addHeader(message: FlowMessage, header: MessageHeader): FlowMessage = {
  val headers = message.headers + (header.key -> header.value)
  message.copy(headers, message.event)
}
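Because FlowMessage is an immutable case class, adding a header returns a new message and leaves the original untouched. Here is a self-contained sketch of that behavior, using hypothetical, simplified stand-ins for the project's model classes:

```scala
// Hypothetical minimal stand-ins for the project's model
case class MessageHeader(key: String, value: Any)
case class FlowMessage(headers: Map[String, Any], event: String)

object AddHeaderDemo {
  // Returns a copy with the header added; the original message is unchanged.
  def addHeader(message: FlowMessage, header: MessageHeader): FlowMessage =
    message.copy(headers = message.headers + (header.key -> header.value))

  def main(args: Array[String]): Unit = {
    val original = FlowMessage(Map.empty, "TENNIS")
    val updated  = addHeader(original, MessageHeader("starting", 42L))
    println(updated.headers("starting")) // 42
    println(original.headers.isEmpty)    // true
  }
}
```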


Once we had the utils object, we created a trait for every partial flow. This is how the EventInputFlow looks:

trait EventInputFlow {

  this: EventTypeFilteredFlow =>

  lazy val eventInputFlow = FlowGraph.partial() { implicit b =>
    val headersProcess = b.add(partialFlowWithHeader(MessageHeader("starting", System.currentTimeMillis())))

    val eventTypeFilterFlow = b.add(filterPartialFlowGraph(_.event.`type` == "TENNIS"))
    val headersFilterFlow = b.add(filterPartialFlowGraph(_.headers.contains("MatchSession")))
    val eventTypeFiltered = b.add(eventTypeFilteredFlow)

    headersProcess ~> eventTypeFilterFlow
                      eventTypeFilterFlow.out(0) ~> headersFilterFlow
                      eventTypeFilterFlow.out(1) ~> eventTypeFiltered

    UniformFanOutShape(headersProcess.inlet, headersFilterFlow.out(0), headersFilterFlow.out(1), eventTypeFiltered.outlet)
  }
}


Can you see the differences from the previous definition of this flow? The code is clearly much better now.

The other flows have a similar design to the EventInputFlow (just take a look at the project). And as you can see here, the EventInputFlow depends on the EventTypeFilteredFlow (indicated by the self-type annotation this: Type =>), so we need to provide it. It's a kind of Thin Cake Pattern. The component responsible for the dependency injection that puts all the pieces together is the EventPipelineFlow:

trait EventPipelineFlow extends EventInputFlow
                        with HeadersValidationFlow
                        with EventTypeFilteredFlow
                        with EventProcessorFlow {

  lazy val eventPipelineFlow = FlowGraph.partial() { implicit b =>
    val pipeline = b.add(partialEventPipeline)
    pipeline.out(1) ~> Sink.ignore
    pipeline.out(2) ~> Sink.ignore

    FlowShape(pipeline.in, pipeline.out(0))
  }

  lazy val partialEventPipeline = FlowGraph.partial() { implicit b =>
    val eventInput = b.add(eventInputFlow)
    val headersValidation = b.add(headersValidationFlow)
    val processorMerge = b.add(Merge[FlowMessage](2))
    val eventProcessor = b.add(eventProcessorFlow)

    eventInput.out(0) ~> processorMerge
    eventInput.out(1) ~> headersValidation ~> processorMerge
    processorMerge ~> eventProcessor

    UniformFanOutShape(eventInput.in, eventProcessor.out(0), eventInput.out(2), eventProcessor.out(1))
  }
}
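The self-type dependency mentioned above can be sketched in plain Scala. This is a hypothetical minimal example (the trait and member names are made up for illustration): the dependent trait declares what it needs via a self-type, and the wiring object satisfies the dependency by mixing in both traits.

```scala
// Hypothetical illustration of the Thin Cake Pattern via self-types
trait GreetingProvider { def greeting: String }

trait Greeter {
  // Self-type: Greeter can only be mixed into something that is also a GreetingProvider
  this: GreetingProvider =>
  def greet(name: String): String = s"$greeting, $name"
}

// The pipeline-style object injects the dependency by mixing in both traits.
object WiredGreeter extends Greeter with GreetingProvider {
  val greeting = "Hello"
}

object SelfTypeDemo {
  def main(args: Array[String]): Unit =
    println(WiredGreeter.greet("Akka")) // Hello, Akka
}
```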



Now that we have defined all the flows, we need to connect a Source and a Sink to the pipeline. This is done in the main class StreamsApp, where the blueprint of the stream is materialized. In our case, we are using an ActorRef as a Source that only accepts messages of type FlowMessage, and a Sink that materializes a future that completes when the stream finishes.

val actorSource: Source[FlowMessage, ActorRef] = Source.actorRef[FlowMessage](1000, OverflowStrategy.fail)
val pipelineActor: ActorRef = actorSource.via(eventPipelineFlow).to(Sink.ignore).run()

pipelineActor ! message
pipelineActor ! PoisonPill


The stream can be completed successfully by sending a Status.Success or PoisonPill message to the pipelineActor, and it can be completed with a failure by sending a Status.Failure message. But what happens if there's an exception during execution? Well, the actor will be stopped and the stream will finish with a failure. As you can read in the documentation, the actor is stopped when the stream is completed, fails, or is canceled from downstream.

One solution could be to create a Guardian Actor as a watcher of the Pipeline Actor and get notified when an exception occurs. But even better, Akka Streams provides a Supervision Strategy that you can apply globally when the ActorMaterializer is created (by passing an ActorMaterializerSettings), or you can define a Supervision Strategy individually for each flow, source, or sink. The error-handling strategies are inspired by actor supervision strategies, so you'll recognize the pattern if you've worked with the Actor Model. We chose the restart strategy, which drops the message that caused the error and creates a new stream.

val decider = ActorAttributes.supervisionStrategy(Supervision.restartingDecider)
val pipelineActor = actorSource.via(eventPipelineFlow.withAttributes(decider)).to(Sink.ignore).run()
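As a plain-Scala analogy (hypothetical, not the streams API): the restarting decider effectively drops the element that caused the error and keeps processing the rest of the stream, which we can mimic over a collection like this.

```scala
object RestartDeciderDemo {
  // Applies f to each element; on failure the offending element is dropped
  // and processing continues, mimicking the restart strategy's behavior
  // of discarding the message that caused the error.
  def resilientMap[A, B](input: List[A])(f: A => B): List[B] =
    input.flatMap { a =>
      try List(f(a))
      catch { case _: Exception => Nil } // drop the failing element
    }

  def main(args: Array[String]): Unit = {
    val out = resilientMap(List("1", "x", "3"))(_.toInt)
    println(out) // List(1, 3)
  }
}
```

Note that in Akka Streams the restart strategy additionally resets any accumulated state of the failing stage, which this collection analogy does not capture.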

You can see in the project how we tested the resilience to failures of the stream using supervision strategies.


Now the project is more readable and maintainable, but we are missing a very important part: testing. Don't worry, though, we designed our code from the start with testability in mind, and now I'm going to explain how we achieved this.

Our approach is to test every partial flow as a "black box" and finally test the whole stream. Akka Streams provides a nice test kit and it's quite simple to use. At the moment the documentation doesn't have many examples, but you can always take a deeper look into the code.

As you can see in the first post's graphics and code, we were defining a Sink.ignore within the EventInputFlow and the EventProcessorFlow. Well, this was a BIG MISTAKE. Even though it works, it's impossible to test as a black box with that design. What we need is to expose the final output, before connecting it to the Sink, in order to be able to test it. That's what we did, and you may have already noticed the changes in the code.

We also figured out that it's a code smell when you need two or more inputs and two or more outputs at the same time. In that case, what you need to do is split the partial flow into two or more smaller partial flows. That was the case with the EventProcessorFlow. Now it has one input and two outputs, because we removed the input merge. The merge now happens within the EventPipelineFlow.

Now let's start by testing the input and the three outputs of the EventInputFlow (that's what I mean by "black-box testing"). First of all, we created a base class that every Spec will extend. It looks like this:

class StreamFlowSpec extends TestKit(ActorSystem("StreamFlowSpec"))
                     with WordSpecLike
                     with Matchers
                     with BeforeAndAfterAll {

  implicit val materializer = ActorMaterializer()

  def collector = genericCollector[FlowMessage]
  private def genericCollector[T]: Sink[T, Future[T]] = Flow[T].toMat(Sink.head)(Keep.right)

  //... COMMON TEST CODE ...
}


Then we have an EventInputFlowSpec that extends the base class and defines the proper unit tests:

class EventInputFlowSpec extends StreamFlowSpec {

  object EventInputMock extends EventInputFlow with EventTypeFilteredFlow

  private def flowGraph(message: FlowMessage) =
    FlowGraph.closed(collector, collector, collector)((_, _, _)) { implicit b => (out0, out1, out2) =>
      val eif = b.add(EventInputMock.eventInputFlow)
      Source.single(message) ~> eif
                                eif.out(0) ~> out0
                                eif.out(1) ~> out1
                                eif.out(2) ~> out2
    }.run()

  "Event Input Flow" should {

    val sessionHeaders = Map("MatchSession" -> 5426)

    "Have messages in the filter output" in withMessage(sessionHeaders) { message =>

      val (filterOut, notFilterOut, suppressedOut) = flowGraph(message)

      val result = Await.result(filterOut, 1000.millis)

      result.headers should contain key ("starting")

      // Both should be empty streams
      intercept[NoSuchElementException] {
        Await.result(notFilterOut, 1000.millis)
      }

      intercept[NoSuchElementException] {
        Await.result(suppressedOut, 1000.millis)
      }
    }
  }
}

In the first line we define an object that mixes in every trait we need to test. Then we create a Runnable Graph (using FlowGraph.closed) that returns one future for every output; that's why we pass the collector function defined in the base class as a parameter, which is actually a Sink.head[T]. And finally we define the assertions. When an output doesn't receive any message, waiting for a value throws a NoSuchElementException("Empty stream"). In this case we intercept the exception to prove it.
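The Sink.head behavior is analogous to asking for the first element of an empty collection, which also fails with NoSuchElementException. Here is a self-contained sketch of that failure mode (the helper name is hypothetical):

```scala
import scala.util.{Failure, Success, Try}

object EmptyStreamDemo {
  // Sink.head-style behavior: asking for the first element of an
  // empty input fails with NoSuchElementException.
  def headOf[T](xs: List[T]): Try[T] = Try(xs.head)

  def main(args: Array[String]): Unit = {
    headOf(List(1, 2)) match {
      case Success(v) => println(s"got $v")                // got 1
      case Failure(e) => println(e.getClass.getSimpleName)
    }
    println(headOf(List.empty[Int]).isFailure) // true
  }
}
```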

The tests for the other flows are similar to this one, so I'm sure you'll understand them after this explanation. Just take a look at the project!

Finally, after testing every flow, the last thing to do is to test the complete stream. In this case we have the EventPipelineFlowSpec:

class EventPipelineFlowSpec extends StreamFlowSpec {

  object EventPipelineMock extends EventPipelineFlow

  private def flowGraph(message: FlowMessage) =
    FlowGraph.closed(collector, collector, collector)((_, _, _)) { implicit b => (out0, out1, out2) =>
      val epf = b.add(EventPipelineMock.partialEventPipeline)
      Source.single(message) ~> epf
                                epf.out(0) ~> out0
                                epf.out(1) ~> out1
                                epf.out(2) ~> out2
    }.run()

  "Event Pipeline Flow" should {

    val sessionHeaders = Map("MatchSession" -> 5426)

    "Have messages in the successful output" in withMessage(sessionHeaders) { message =>

      val (successfulOut, eventTypeSuppressed, eventDeletedLogger) = flowGraph(message)

      val result = Await.result(successfulOut, 1000.millis)

      result.headers should contain key ("starting")

      // Both should be empty streams
      intercept[NoSuchElementException] {
        Await.result(eventTypeSuppressed, 1000.millis)
      }

      intercept[NoSuchElementException] {
        Await.result(eventDeletedLogger, 1000.millis)
      }
    }
  }
}


This is a demonstration project with the current design that we are using. However, we're still working hard, learning from the community and from our mistakes, and improving our code every day to make it more reliable. And most important of all, we're having a lot of fun here! I hope you find this post useful.

Until next one!

