Cassandra Scala drivers

Hi there! After a long stretch of laziness, I'm back to write about what I've been doing during my time away (apart from my holidays). And by the way, happy new year!

Over the last year I've been working quite a lot with Cassandra, always using the official Java driver supported by DataStax, the company behind the database. I remember searching before for a Scala driver, ideally a reactive one, and finding only one, which wasn't very good at that moment. The thing is, this week I was challenged to create a program that basically queries some of our Cassandra tables and exposes a REST API, so I started looking for a good Scala driver.


Simply typing “scala cassandra driver” into Google, these are the first four results:

First a reactive type-safe driver, second a Scala wrapper of the official Java driver, third the official web site driver list and last a Stack Overflow topic about it.


So I decided to give Phantom a chance: it has high activity on GitHub, it is an open source driver and I think the DSL is pretty good. However, I ran into a few problems trying to get started with it; something as basic as getting a connection to the database using username/password authentication is not supported by the DSL. And after researching for a while, I found that a lot of the good features are only present in the commercial version.

That’s why, after hitting the problem mentioned above, I kept searching for more Scala drivers. Summing up, this is the list:

  • scala-cassandra: just a wrapper around the official Java driver. Last activity on GitHub 2 years ago.
  • cascal: one of the drivers mentioned in the official DataStax driver list apart from Phantom. Last activity on GitHub 3 years ago.
  • cassie: developed by Twitter. Last activity on GitHub 3 years ago.
  • scqla: lack of design, no DSL. Last activity on GitHub 2 years ago.

As you can see, all of these projects are no longer maintained, so once again I came back to Phantom and tried to make it work.


After a few hours trying to get a connection using Phantom, including opening an issue (hopefully I’ll get a response soon), I created a project to share my workaround solution which I called phantom-ssl-extension, mainly intended to work with Java 8.

It consists of a CustomSessionProvider that mixes Phantom with the official Java driver, plus a few util functions. I invite you to check out the example to see how it works, and I hope you enjoyed this short research post.
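For context, the missing piece in the free DSL was plain username/password authentication, which the underlying Java driver supports out of the box. A minimal sketch of that part (contact point, credentials and keyspace are placeholders, not the project's actual values):

```scala
import com.datastax.driver.core.Cluster

// Placeholder contact point and credentials; withCredentials configures the
// Java driver's plain-text auth provider under the hood.
val cluster = Cluster.builder()
  .addContactPoint("")
  .withCredentials("user", "password")
  .build()

val session = cluster.connect("my_keyspace")
```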

Until next time!

Future Computations: Benchmarks

This is the second part of the original post, where I introduce real benchmarks using the Java Microbenchmark Harness (JMH).

As I said before, the benchmarks in the first version were very basic, using just System.currentTimeMillis and println, because the idea was to give you a basic point of view on the alternatives to the standard Scala Futures through a simple scenario (and because I didn't have much time to create the benchmarks properly). And I have to say that the basic benchmarks are not so far from the real ones…

This simple scenario consists of summing three different square roots of a double using Futures, and measuring the average time taken by three different ways to do it: Future for-comprehension, Future.sequence and async-await.
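As a rough, runnable sketch of the scenario using only the standard library (the async-await variant needs the scala-async dependency, so it is left out here; sqrtF and the input values are illustrative):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

def sqrtF(x: Double): Future[Double] = Future(math.sqrt(x))

// for-comprehension: start the futures first so they run in parallel
val (fa, fb, fc) = (sqrtF(4.0), sqrtF(9.0), sqrtF(16.0))
val viaFor: Future[Double] = for { a <- fa; b <- fb; c <- fc } yield a + b + c

// Future.sequence: turn a List[Future[Double]] into one Future[List[Double]]
val viaSequence: Future[Double] =
  Future.sequence(List(sqrtF(4.0), sqrtF(9.0), sqrtF(16.0))).map(_.sum)

println(Await.result(viaFor, 2.seconds))      // 9.0
println(Await.result(viaSequence, 2.seconds)) // 9.0
```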


So let’s move to the interesting part. I created a branch called microbenchmark-harness where you can see the benchmarks created using sbt-jmh. Here are the results:

Running 3 iterations, 10 warmup iterations, 3 forks and 1 thread

The async-await is the fastest approach, performing every operation in just 2026.783 nanoseconds, followed by the Future for-comprehension approach, taking 7116.502 nanoseconds per operation.

Benchmark             Mode  Cnt      Score       Error  Units
asyncAwaitResult      avgt    9   2026.783  ± 1278.719  ns/op
futureResult          avgt    9   7116.502  ± 1457.952  ns/op
futureSequenceResult  avgt    9  11455.322  ±  394.988  ns/op

Running 20 iterations, 20 warmup iterations, 10 forks and 1 thread

The async-await is again the fastest approach performing every operation in just 1999.398 nanoseconds.

Benchmark             Mode  Cnt      Score      Error  Units
asyncAwaitResult      avgt  200   1999.398  ± 132.615  ns/op
futureResult          avgt  200   6762.131  ± 210.806  ns/op
futureSequenceResult  avgt  200  10805.773  ± 226.244  ns/op

To see the full results take a look at the README file of the project in the microbenchmark-harness branch.


I had this pending task after a few people expressed interest in having the results of a reliable benchmark like JMH. So here I am, back with some results. I hope you like it!

Until next post!

Akka Stream 2.0-M1: Quick Update

Yesterday the first milestone of the second version of Akka Streams and HTTP was announced. I wanted to give it a try, so I updated the previous project to work with this new version. You’ll find a new branch called “akka-stream-2.0-M1”.


The partial and closed functions of FlowGraph no longer exist. They were replaced by the create function. As you can see, for instance, in EventInputFlow, the change is very straightforward for the partial cases.

One of the most significant changes I had to make was in the FlowTestKit, where I was using FlowGraph.closed. As I pointed out above, it was replaced by the create function, but in this case it’s mandatory to return a ClosedShape. With this new object it’s not possible to invoke the run() function, because it’s not runnable as before. To make it runnable you need to use RunnableGraph.fromGraph(closedShape), and then you can invoke the run() function.
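The shape of that change can be sketched like this (a hedged sketch against the 2.0-M1 API; the stages are illustrative, not the project's actual graph):

```scala
import akka.stream.ClosedShape
import akka.stream.scaladsl.{FlowGraph, RunnableGraph, Sink, Source}

// 1.0 style was FlowGraph.closed { ... }.run(); in 2.0-M1 create returns a
// ClosedShape, which must be wrapped by RunnableGraph.fromGraph to run it.
val graph = RunnableGraph.fromGraph(FlowGraph.create() { implicit b =>
  import FlowGraph.Implicits._
  Source.single(1) ~> Sink.ignore
  ClosedShape
})

// graph.run() // requires an implicit ActorMaterializer in scope
```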

All the changes introduced in the project can be seen in this comparison with the master branch.


Another big change that I’m not using in the demo project is the wrap() function. It was replaced by descriptive functions depending on the case. If you want to create a Flow from a FlowShape you should use Flow.fromGraph but if you need to create a Flow from a Sink and Source you should use now the function Flow.fromSinkAndSource  or Flow.fromSinkAndSourceMat.

If you were using FlexiMerge and FlexiRoute for custom stream processing you’ll notice that they have been replaced by GraphStage. Take a look at the migration guide from 1.0 to 2.x to see all the changes introduced in this first milestone and if you can give it a try!


What’s coming in this new Akka Streams 2 world is promising. The API is becoming stronger, more powerful and very handy. And talking about performance, it’s worth mentioning that stream processing is faster now and will be much faster in upcoming versions: with the introduction of GraphStage it will be possible to execute multiple stream processing steps inside one actor, reducing the number of thread-hops. Other libraries such as Akka HTTP are getting improvements too, thanks to the joint efforts around Akka Streams.

Until next post!

Akka Cluster 2.4

Over the last month I’ve been working with Akka Cluster 2.3, and we have now migrated to version 2.4. We chose Cassandra for persistence.

In this post I’ll try to explain what I learned and the main features of version 2.4 that we use: shard rebalancing (already present in 2.3) and remember entities.


A cluster is just a group of nodes, a node being just a logical member with its own Actor System. So, for instance, the following image shows a cluster of two nodes:



I created a simple demo project that demonstrates the use of the remember entities feature, for me one of the most attractive in the version 2.4.

Note: Please use Apache Cassandra 2.0.16, 2.1.6 or higher to avoid this bug. Otherwise it is not going to work properly.

In this project we have an EntityActor, a persistent actor sharded across the cluster, and a MessageGenerator (a simple Actor) that generates two messages every minute (so we have time to shut down nodes and check the behavior). I named the two nodes Dc1 and Dc2, and they are configured as seed nodes. The seed nodes are part of the cluster from the very beginning (the initial state of the cluster), and the oldest one becomes the Leader. Later on, more nodes can join the cluster; they only need to know at least one of the seed nodes. Read more about joining nodes here.

There are two good features to test in this simple demo: Shard rebalancing and Remember entities. Both cases happen when nodes become Up and / or Down. So let’s start with the first case!


First we have to start the Dc1Cluster app and wait for the first two messages. I added some printlns in the preStart and postStop functions of the EntityActor to see when they are called. We will see that preStart is called after Dc1 starts successfully, and then it consumes the two messages. Now is the moment to start Dc2. Once it’s started, we will see that postStop is called on Dc1, but preStart is not called on Dc2 until a new message is received. If we then wait for two more messages, we will see that one message is received by Dc1 and the other one by Dc2. This means that the rebalancing is working well, and that without the remember entities feature activated we get the same behavior as in version 2.3.

Now we can shut down Dc2 or Dc1. In the first case, postStop will be called on Dc2 and preStart on Dc1, and later on Dc1 will receive the two messages. But maybe it is more interesting to shut down Dc1 and see a similar behavior, with the addition that Dc2 will become the leader.

Take a look at the commented logs for DC1 and DC2 for this case of remember entities off.


This feature is similar to the shard rebalancing case, the main difference being that the EntityActor is restarted automatically in a rebalancing scenario. Without this feature, preStart on the node is only called when a new message arrives. The property is configured in application.conf as akka.cluster.sharding.remember-entities, with possible values on/off. To test it, repeat the previous case with this property turned on and compare the behavior. You will see that preStart is called immediately after you see postStop in the other node. Here are the docs.
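In application.conf that is just the one flag (how you nest it is up to you):

```hocon
akka.cluster.sharding {
  # restart remembered entities right after a rebalance instead of
  # waiting for the next incoming message
  remember-entities = on
}
```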


When I say to shut down a node, I mean to kill the process. In this case the downing property plays its role. In the project it is configured to 10 seconds. This means that after that time the leader will remove the down node from the cluster, and the shard rebalancing will happen.

But in this new version of Akka we can perform a Graceful Shutdown. In the project I created an MBean that exposes a JMX resource that we can call later to shut down the node gracefully. The easiest way to do it is to use JConsole and click the “leaveClusterAndShutdown” button. In this case the shard rebalancing will happen first, and then the leaving node will be removed from the cluster.
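The core of such an operation is just asking the Cluster extension to leave (a hedged sketch; the MBean wiring and termination hooks of the project are omitted):

```scala
import akka.actor.ActorSystem
import akka.cluster.Cluster

// Ask this node to leave the cluster gracefully: shards are handed off
// first, and only then is the member removed.
def leaveClusterAndShutdown(system: ActorSystem): Unit = {
  val cluster = Cluster(system)
  cluster.leave(cluster.selfAddress)
}
```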


See the logs for the case of a Graceful Shutdown combined with Remember Entities ON.


All the projects I found on the web are simple examples starting all the nodes on the same machine, but I’m pretty sure you want to try using different machines. For that you need to configure akka.cluster.seed-nodes and akka.remote.netty.tcp properly. After that, you can get the same basic example working across a real network. For instance, if you want to start Node 1 on Machine 1 and Node 2 on Machine 2 (see the image below), with both machines on the same network, this would be the configuration for Node 1:


akka {
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "<machine-1-ip>"       # this machine's address on the network
      port = 2551
      bind-hostname = "<machine-1-ip>"
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://<actor-system-name>@<machine-1-ip>:2551",
      "akka.tcp://<actor-system-name>@<machine-2-ip>:2551"
    ]
  }
}
And for Node 2 you only need to change the IP address in akka.remote.netty.tcp.


It’s very interesting what you can achieve by using Akka actors sharded across the network. You can scale up and out easily. In the next few weeks we are going live with this new feature and I’m really excited!

Until next post!

HTTP API on top of Scalaz Streams

Over the last few days I’ve been playing with http4s, which is defined as a minimal, idiomatic Scala interface for HTTP.

It’s a powerful library, type safe, composable and asynchronous. And it supports different servers like Blaze, Jetty and Tomcat.

Although a lot of work remains on the project, it’s always good to give it a try.


I started creating a basic service that returns a simple string when accessing the root level (localhost:8080/). This is how it looks:

object HomeService {

  def apply(): HttpService = service

  private val service = HttpService {
    case GET -> Root =>
      Ok("Http4s API")
  }
}


And here is the main application that runs the Blaze server:

object Api extends App {

  BlazeBuilder.bindHttp(8080)
    .mountService(HomeService(), "/")
    .run
    .awaitShutdown()
}


Fair enough to get a server running and serving a GET resource. Until here we have the “hello world” example of an HTTP service. After this we can create more services and add them to the server by invoking the mountService function of the server builder.

What I did was create two similar services, for Products and Users, serving just mock data. The main difference is that the ProductService serves JSON data using Play JSON, while the UserService exposes JSON data using Circe.


This is the code for one of the GET resources for the ProductService:

case GET -> Root =>
  val products = List(Product(1, "Book"), Product(2, "Calc"), Product(3, "Guitar"))
  Ok(Json.toJson(products))

It’s very handy. However, to get this code working you need some implicit values in scope:

  • The Writes for the Play JSON library.
  • The EntityEncoder[T] for http4s.

To accomplish these requirements I created the following object, which is imported in the ProductService:

object PlayJsonImplicits {

  implicit val playJsonEncoder: EntityEncoder[JsValue] =
    EntityEncoder.stringEncoder
      .contramap { json: JsValue => json.toString() }
      .withContentType(`Content-Type`(MediaType.`application/json`, Charset.`UTF-8`))

  implicit val productJsonFormat = Json.format[Product]
}


This is the code for one of the GET resources of the UserService:

case GET -> Root / id =>
  Ok(User(id.toLong, s"User$id", s"user$id").asJson)

And here it happens something similar to the service above. In this case we need to import the Circe implicit values and to provide an EntityEncoder[T]. This is how it looks:

import io.circe.syntax._
object CirceImplicits {

  implicit val circeJsonEncoder: EntityEncoder[CirceJson] =
    EntityEncoder.stringEncoder
      .contramap { json: CirceJson => json.noSpaces }
      .withContentType(`Content-Type`(MediaType.`application/json`, Charset.`UTF-8`))
}


So far we have a few services serving JSON data, but the most attractive feature of this library is streaming. So let’s move on to some examples.


Http4s is built on top of Scalaz Streams, and every Request is transformed into an asynchronous Scalaz Task[Response]. This means that you can use any function that returns an async Task as an HTTP Response, using the helpers provided by http4s.

Here we have an example extracted from the StreamingService:

private val service = HttpService {
  case GET -> Root =>
    val streamingData = Process.emit(s"Starting stream intervals\n\n") ++ dataStream(10)
    Ok(streamingData).chunked
}

private def dataStream(n: Int): Process[Task, String] = {
  implicit def defaultScheduler = DefaultTimeoutScheduler
  val interval = 100.millis
  time.awakeEvery(interval)
    .map(_ => s"Current system time: ${System.currentTimeMillis()} ms\n")
    .take(n)
}

To send the chunked response we need to add the Transfer-Encoding header. What I did was create an implicit class with a “chunked” function to make things easier.
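One possible shape of that implicit class, sketched against the http4s API of the time (the names here are assumptions, not the project's exact code):

```scala
import org.http4s.{Response, TransferCoding}
import org.http4s.headers.`Transfer-Encoding`
import scalaz.concurrent.Task

// Enrich Task[Response] with a `chunked` helper that sets
// Transfer-Encoding: chunked on the response.
implicit class ChunkedResponseOps(response: Task[Response]) {
  def chunked: Task[Response] =
    response.map(_.putHeaders(`Transfer-Encoding`(TransferCoding.chunked)))
}
```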


And that’s it! Find out more and what is possible to do with Scalaz Streams by taking a look at the examples.

Another cool feature of this library is the Web Sockets support, but I’m not covering this topic now. However you’ll find a very basic example of WS connection in the sample project linked below. And maybe you want to take a look at this demo too.


As always, this development stage is the most important for me. That’s why all the services are fully tested, with test coverage close to 100% (actually Coveralls has some bugs and is showing only 92%, but take a look at the coverage report! If you run “sbt clean coverage test”, the coverage report shows 97.78%).

This is how one of the unit tests for the ProductService looks:

"Get the list of products" in {
  val request = new Request()
  val response = ProductService().run(request).run

  response.status should be (Status.Ok)
  val expected = """ [{"id":1,"name":"Book"},{"id":2,"name":"Guitar"}] """.trim
  response.body.asString should be (expected)
}

We are creating a Request and running the ProductService to get the response. Then we have assertions for the Status and the Body.


Check out the complete project on Github!

Important Note: Only runs under Java 8.

Find out more examples in the official http4s ExampleService.


At this moment the documentation is a bit sparse, but I hope to find better docs and many other improvements in the future. Nevertheless, I know the guys are working really hard on this powerful library.

Well, this was just a quick research, you are always invited to go deeper and deeper!

Until next post!

Future Computations

Today I want to discuss an interesting topic regarding Scala Futures. As the official documentation says, “Futures provide a nice way to reason about performing many operations in parallel - in an efficient and non-blocking way”.

But if you are familiar with Futures, you may already know that when using for-comprehensions the chain of futures is executed sequentially, not in parallel. This is useful when the order of execution matters, or when a partial result depends on another partial result, as in this case:

val future = for {
  a <- webServiceCallFuture
  b <- databaseCallFuture(a)
  c <- Future(0.35 * b)
} yield c
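When the results are independent, the usual trick is to start the futures before the for-comprehension; the generators then only sequence the reading of already-running computations (a runnable sketch with illustrative values):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Both futures start here, so they run in parallel...
val f1 = Future { Thread.sleep(100); 1 }
val f2 = Future { Thread.sleep(100); 2 }

// ...and the for-comprehension only combines their results.
val sum = for {
  a <- f1
  b <- f2
} yield a + b

println(Await.result(sum, 2.seconds)) // 3
```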

However if the order of execution doesn’t matter you probably want to seize the power of parallelism. To achieve this goal the standard Scala library provides functions such as Future.traverse or Future.sequence as follows:

val future = Future.sequence(Seq(f1, f2, f3))
future onComplete {
  case Success(result) => // Do something with $result
  case Failure(e) => println(s"ERROR: ${e.getMessage}")
}
The signature of the sequence function is this one:

/** Simple version of `Future.traverse`. Transforms a `TraversableOnce[Future[A]]` into a `Future[TraversableOnce[A]]`.
 *  Useful for reducing many `Future`s into a single `Future`.
 */
def sequence[A, M[X] <: TraversableOnce[X]](in: M[Future[A]])(implicit cbf: CanBuildFrom[M[Future[A]], A, M[A]], executor: ExecutionContext): Future[M[A]]

In simple words, it is just the conversion of a sequence of Futures into a single Future, for instance List[Future[String]] to Future[List[String]]. But in simple scenarios the performance of this function is really bad, as you can see in the benchmark results below, because of the conversions it performs.
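For reference, a minimal runnable example of that conversion:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val futures: List[Future[Int]] = List(Future(1), Future(2), Future(3))

// List[Future[Int]] => Future[List[Int]]
val combined: Future[List[Int]] = Future.sequence(futures)

println(Await.result(combined, 2.seconds)) // List(1, 2, 3)
```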

Introducing Scala Async

Scala Async is a library whose main goal is to deal with the parallel computation of Scala Futures. It’s based on Scala Macros, so the code is analyzed and regenerated at compilation time.

This is what an example using it looks like:

import scala.async.Async.{async, await}

def combined: Future[Int] = async {
  val future1 = slowCalcFuture
  val future2 = slowCalcFuture
  await(future1) + await(future2)
}
The async approach has two advantages over the use of map and flatMap, which is how the for-comprehension works.

  1. The code more directly reflects the programmer’s intent, and does not require us to name the results in the yield part. This advantage is even more pronounced when we mix control structures in async blocks.
  2. async blocks are compiled to a single anonymous class, as opposed to a separate anonymous class for each closure required at each generator (<-) in the for-comprehension. This reduces the size of generated code, and can avoid boxing of intermediate results.

Similar Projects

While I’ve been doing my research I found a few similar projects:

  • Computation Expressions: a promising project based on the Expressions concept introduced by the F# language, but it’s not released yet and doesn’t seem to have much activity.
  • Effectful: The idea is similar to Scala Async library, but generalized to arbitrary Monads (not just Future).
  • Scala Workflow: helps to nicely organize applicative and monadic computations, but its last activity was one year ago and it’s based on Untyped Macros, which are now deprecated.

Demonstration Project

For comparison purposes I created a demonstration project on GitHub, including the following benchmark results, which show the average time to perform the sum of three simple Math.sqrt operations:

Future Sequence

  • 17.3 ms (16 15 20 15 13 14 12 26 22 20)

Future for-comprehension

  • 13.4 ms (17 9 8 11 14 17 8 19 12 19)


Scala Async

  • 6 ms (6 9 5 5 6 8 5 6 5 5)

The machine used to run this benchmark is an Intel® Xeon(R) CPU X5687 @ 3.60GHz × 4 with 23.5 GiB of memory running on Ubuntu 14.04 LTS 64 bits and Java 8.


The demonstration project is just the simplest case you can create using simple math operations, but imagine what you can do by taking this approach to a bigger scale, for instance the execution of slow operations like web service and database calls. The difference might be huge…

Until next post!

Akka Streams: Designing a full project

In the previous post I talked about how we use Akka Streams as a replacement for the Spring Integration behavior. Now I’ll show you how we solved some problems and removed boilerplate code by extracting common design patterns. To start, I created a demonstration project that you can find on Github.

The first thing we did was create a utils object, extracting the code that creates all the common Partial Flow Graphs into the object PartialFlowGraphUtils.


We said before that every time we have a Filter we need a Broadcast of two outputs (in our case, because we are replacing the behavior of the Spring Integration Filter that has a Discard Channel), so we defined a method to create the FanOutShape as a Partial Flow Graph:

def filterPartialFlow(filterFunction: FlowMessage => Boolean) = FlowGraph.partial() { implicit b =>
  val bcast = b.add(Broadcast[FlowMessage](2))
  val filter = b.add(Flow[FlowMessage] filter (filterFunction(_)))
  val notFilter = b.add(Flow[FlowMessage] filter (!filterFunction(_)))

  bcast ~> filter
  bcast ~> notFilter

  UniformFanOutShape(bcast.in, filter.outlet, notFilter.outlet)
}

Also we defined a function to create a FlowShape:

def partialFlow(function: FlowMessage => FlowMessage) = Flow[FlowMessage] map (function(_))

And finally a FlowShape with a function that adds Headers to the FlowMessage:

def partialFlowWithHeader(header: MessageHeader) = partialFlow(fm => addHeader(fm, header))

def addHeader(message: FlowMessage, header: MessageHeader): FlowMessage = {
  val headers = message.headers + (header.key -> header.value)
  message.copy(headers, message.event)
}

Once we had the utils object, we created a trait for every partial flow. This is how the EventInputFlow looks:

trait EventInputFlow {

  this: EventTypeFilteredFlow =>

  lazy val eventInputFlow = FlowGraph.partial() { implicit b =>
    val headersProcess = b.add(partialFlowWithHeader(MessageHeader("starting", System.currentTimeMillis())))

    val eventTypeFilterFlow = b.add(filterPartialFlowGraph(_.event.`type` == "TENNIS"))
    val headersFilterFlow = b.add(filterPartialFlowGraph(_.headers.contains("MatchSession")))
    val eventTypeFiltered = b.add(eventTypeFilteredFlow)

    headersProcess ~> eventTypeFilterFlow
                      eventTypeFilterFlow.out(0) ~> headersFilterFlow
                      eventTypeFilterFlow.out(1) ~> eventTypeFiltered

    UniformFanOutShape(headersProcess.inlet, headersFilterFlow.out(0), headersFilterFlow.out(1), eventTypeFiltered.outlet)
  }
}

Can you see the difference from the last definition of this flow? Clearly the code is much better now.

The other flows have a similar design to the EventInputFlow (just take a look at the project). And as you can see here, the EventInputFlow depends on EventTypeFilteredFlow (indicated by the self-type reference this: Type =>), so we need to provide it. It’s kind of a Thin Cake Pattern. The one responsible for the dependency injection that puts all the pieces together is the EventPipelineFlow:

trait EventPipelineFlow extends EventInputFlow
                        with HeadersValidationFlow
                        with EventTypeFilteredFlow
                        with EventProcessorFlow {

  lazy val eventPipelineFlow = FlowGraph.partial() { implicit b =>
    val pipeline = b.add(partialEventPipeline)
    pipeline.out(1) ~> Sink.ignore
    pipeline.out(2) ~> Sink.ignore

    FlowShape(pipeline.in, pipeline.out(0))
  }

  lazy val partialEventPipeline = FlowGraph.partial() { implicit b =>
    val eventInput = b.add(eventInputFlow)
    val headersValidation = b.add(headersValidationFlow)
    val processorMerge = b.add(Merge[FlowMessage](2))
    val eventProcessor = b.add(eventProcessorFlow)

    eventInput.out(0) ~> processorMerge
    eventInput.out(1) ~> headersValidation ~> processorMerge
    processorMerge ~> eventProcessor

    UniformFanOutShape(eventInput.in, eventProcessor.out(0), eventInput.out(2), eventProcessor.out(1))
  }
}


Now that we have defined all the flows, we need to connect a Source and a Sink to the pipeline. This is done in the main class StreamsApp, where the blueprint of the stream is materialized. In our case, we are using an ActorRef as a Source that only accepts messages of type FlowMessage, and a Sink that returns a future when the streaming finishes.

val actorSource: Source[FlowMessage, ActorRef] = Source.actorRef[FlowMessage](1000, OverflowStrategy.dropHead)
val pipelineActor: ActorRef = actorSource.via(eventPipelineFlow).to(Sink.ignore).run()

pipelineActor ! message
pipelineActor ! PoisonPill


The stream can be completed successfully by sending a Status.Success or PoisonPill message to the pipelineActor, and it can be completed with failure by sending a Status.Failure message. But what happens if there’s an exception during execution? Well, the actor will be stopped and the stream will be finished with a failure. As you can read in the documentation, the actor will be stopped when the stream is completed, failed or canceled from downstream.

One solution could be to create a guardian actor as a watcher of the pipeline actor to get notified when an exception occurs. But even better, Akka Streams provides a supervision strategy that you can apply globally when the ActorMaterializer is created (by passing ActorMaterializerSettings), or individually for each flow, source or sink. The error handling strategies are inspired by actor supervision strategies, so you’ll find this a familiar pattern if you’ve been working with the Actor Model. We chose the restart strategy, which drops the message that caused the error and creates a new stream.

val decider = ActorAttributes.supervisionStrategy(Supervision.restartingDecider)
val pipelineActor = source.via(eventPipelineFlow.withAttributes(decider)).to(Sink.ignore).run()

You can see how we tested the resilience to failures of the stream using supervision strategies.


Now the project is more readable and maintainable, but we are missing a very important part: the testing phase. Don’t worry though, we were designing our code always thinking of an easy way to test it, and now I’m going to explain how we achieved this.

Our focus is to test every partial flow as a “black box” and finally test the whole stream. Akka Streams provides a nice test kit and it’s quite simple. At this moment the documentation doesn’t have many examples, but you can always take a deeper look into the code.

As you can see in the first post’s graphics and code, we were defining a Sink.ignore within the EventInputFlow and the EventProcessorFlow. Well, this was a BIG MISTAKE. Even though it works, it’s impossible to test it as a black box with that design. What we need is to expose the final output before connecting it to the Sink, in order to be able to test it. That’s what we did, and you maybe already noticed the changes in the code.

We also figured out that it’s a code smell when you need two or more inputs and two or more outputs at the same time. In that case, what you need to do is split the partial flow into two or more smaller partial flows. That was the case for the EventProcessorFlow. Now it has one input and two outputs, because we removed the input merge; the merge now happens within the EventPipelineFlow.

Now let’s start testing the input and the three outputs of the EventInputFlow (that’s what I mean by “black box testing”). First of all, we created a base class that every Spec extends. It looks like this:

class StreamFlowSpec extends TestKit(ActorSystem("StreamFlowSpec"))
                     with WordSpecLike
                     with Matchers
                     with BeforeAndAfterAll {

  implicit val materializer = ActorMaterializer()

  def collector = genericCollector[FlowMessage]
  private def genericCollector[T]: Sink[T, Future[T]] = Flow[T].toMat(Sink.head)(Keep.right)

  //... COMMON TEST CODE ...
}

Then we have an EventInputFlowSpec that extends the base class and defines the proper unit tests:

class EventInputFlowSpec extends StreamFlowSpec {

  object EventInputMock extends EventInputFlow with EventTypeFilteredFlow

  private def flowGraph(message: FlowMessage) = FlowGraph.closed(collector, collector, collector)((_, _, _)) { implicit b => (out0, out1, out2) =>
    val eif = b.add(EventInputMock.eventInputFlow)
    Source.single(message) ~> eif
                              eif.out(0) ~> out0
                              eif.out(1) ~> out1
                              eif.out(2) ~> out2
  }.run()

  "Event Input Flow" should {

    val sessionHeaders = Map("MatchSession" -> 5426)

    "Have messages in the filter output" in withMessage(sessionHeaders) { message =>

      val (filterOut, notFilterOut, suppressedOut) = flowGraph(message)

      val result = Await.result(filterOut, 1000.millis)

      result.headers should contain key ("starting")

      // Should be an Empty stream
      intercept[NoSuchElementException] {
        Await.result(notFilterOut, 1000.millis)
      }

      intercept[NoSuchElementException] {
        Await.result(suppressedOut, 1000.millis)
      }
    }
  }
}
In the first line we are defining an object that mixes in every trait we need to test. Then we are creating a Runnable Graph (using FlowGraph.closed) that will return one future for every output. That’s why we are passing the collector function defined in the base class as a parameter, which is actually a Sink.head[T]. And finally we defined the assertions. When an output doesn’t receive any message, it throws a NoSuchElementException(“Empty stream”) if we are waiting for a value; in this case we are intercepting the exception to prove it.

The tests for the other flows are similar to this one, so I’m sure you’ll understand them after this explanation. Just take a look at the project!

Finally, after testing every flow, the last thing to do is to test the complete stream. In that case we have the EventPipelineFlowSpec:

class EventPipelineFlowSpec extends StreamFlowSpec {

  object EventPipelineMock extends EventPipelineFlow

  private def flowGraph(message: FlowMessage) = FlowGraph.closed(collector, collector, collector)((_, _, _)) { implicit b => (out0, out1, out2) =>
    val epf = b.add(EventPipelineMock.partialEventPipeline)
    Source.single(message) ~> epf
                              epf.out(0) ~> out0
                              epf.out(1) ~> out1
                              epf.out(2) ~> out2
  }.run()

  "Event Pipeline Flow" should {

    val sessionHeaders = Map("MatchSession" -> 5426)

    "Have messages in the successful output" in withMessage(sessionHeaders) { message =>

      val (successfulOut, eventTypeSuppressed, eventDeletedLogger) = flowGraph(message)

      val result = Await.result(successfulOut, 1000.millis)

      result.headers should contain key ("starting")

      // Should be an Empty stream
      intercept[NoSuchElementException] {
        Await.result(eventTypeSuppressed, 1000.millis)
      }

      intercept[NoSuchElementException] {
        Await.result(eventDeletedLogger, 1000.millis)
      }
    }
  }
}


This is a demonstration project with the current design we are using. We’re still working hard, learning from the community and from our mistakes, and improving our code every day to make it more reliable. And most importantly, we’re having a lot of fun! I hope you find this post useful.

Until next one!