Akka Cluster 2.4

Over the last month I’ve been working with Akka Cluster 2.3, and we have now migrated to version 2.4. We chose Cassandra for persistence.

In this post I’ll try to explain what I learned and the main features of version 2.4 that we use: shard rebalancing (already present in 2.3) and remember entities.

DEFINITION

A cluster is just a group of nodes, where a node is simply a logical member with its own Actor System. For instance, the following image shows a cluster of two nodes:

[Image: a cluster of two nodes]

DEMO PROJECT

I created a simple demo project that demonstrates the use of the remember entities feature, which for me is one of the most attractive features of version 2.4.

Note: Please use Apache Cassandra 2.0.16, 2.1.6 or higher to avoid this bug. Otherwise it is not going to work properly.

In this project we have an Entity Actor, a persistent actor sharded across the cluster, and a Message Generator (a simple actor) that generates two messages every minute (so we have time to shut down nodes and check the behavior). I named the two nodes Dc1 and Dc2, and they are configured as seed nodes. The seed nodes are part of the cluster from the very beginning (the initial state of the cluster), and the oldest one becomes the leader. Later on, more nodes can join the cluster; they only need to know at least one of the seed nodes. Read more about joining nodes here.

There are two good features to test in this simple demo: shard rebalancing and remember entities. Both come into play when nodes go Up and/or Down. So let’s start with the first case!

SHARD REBALANCING

First we have to start the Dc1Cluster app and wait for the first two messages. I added some println calls in the preStart and postStop methods of the EntityActor to see when they are called. We will see that preStart is called after Dc1 starts successfully, and then it consumes the two messages. Now is the moment to start Dc2. Once it has started, we will see that postStop is called on Dc1, but preStart is not called on Dc2 until a new message arrives. If we now wait for two more messages, we will see that one is received by Dc1 and the other by Dc2. This means that rebalancing is working well and that, with the remember entities feature off, the behavior is the same as in version 2.3.
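If you want to tune when rebalancing kicks in, the default least-shard allocation strategy can be configured in application.conf. A minimal sketch, assuming the setting names and example values from the Akka 2.4 reference configuration (the demo project may simply use the defaults):

```
akka.cluster.sharding.least-shard-allocation-strategy {
  # Rebalance when the most loaded region has this many more
  # shards than the least loaded one
  rebalance-threshold = 10
  # Limit on how many shards are rebalanced in one pass
  max-simultaneous-rebalance = 3
}
```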

Now we can shut down Dc2 or Dc1. In the first case postStop will be called on Dc2 and preStart on Dc1, and later on Dc1 will receive both messages. But maybe it is more interesting to shut down Dc1 and see a similar behavior, with the addition that Dc2 will become the leader.

Take a look at the commented logs for Dc1 and Dc2 for this case with remember entities off.

REMEMBER ENTITIES

This feature is similar to the shard rebalancing case; the main difference is that the EntityActor is restarted automatically in a rebalancing scenario. Without it, preStart on the new node is called only when a new message arrives. The property is configured in application.conf as akka.cluster.sharding.remember-entities, with possible values on/off. To test it, repeat the previous case with this property turned on and compare the behavior. You will see that preStart is called immediately after postStop appears on the other node. Here are the docs.
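Turning the feature on in application.conf looks like this:

```
akka.cluster.sharding {
  remember-entities = on
}
```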

SHUTTING DOWN A NODE

When I say to shut down a node I mean to kill the process. In this case the property akka.cluster.auto-down-unreachable-after plays its role. In the project it is configured to 10 seconds, which means that after that time the leader will remove the downed node from the cluster and shard rebalancing will happen.
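In the project’s application.conf this looks like:

```
akka.cluster {
  auto-down-unreachable-after = 10s
}
```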

But in this new version of Akka we can perform a graceful shutdown. In the project I created an MBean that exposes a JMX operation that we can call to shut down the node gracefully. The easiest way to do it is to open JConsole and click the “leaveClusterAndShutdown” button. In this case shard rebalancing happens first, and then the downed node is removed from the cluster.
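The MBean itself only needs a few lines. Here is a minimal sketch of how such a bean could look; the names NodeControl and the JMX ObjectName are my own assumptions, not necessarily those used in the demo project:

```scala
import java.lang.management.ManagementFactory
import javax.management.ObjectName
import akka.actor.ActorSystem
import akka.cluster.Cluster

// JMX standard MBeans require an interface whose name ends in "MBean"
trait NodeControlMBean {
  def leaveClusterAndShutdown(): Unit
}

class NodeControl(system: ActorSystem) extends NodeControlMBean {
  private val cluster = Cluster(system)

  override def leaveClusterAndShutdown(): Unit = {
    // Terminate the actor system once this member has been removed,
    // i.e. after its shards have been handed over
    cluster.registerOnMemberRemoved(system.terminate())
    // Ask the leader to move this node through Leaving/Exiting
    cluster.leave(cluster.selfAddress)
  }
}

// Registered once at startup, e.g.:
// ManagementFactory.getPlatformMBeanServer.registerMBean(
//   new NodeControl(system), new ObjectName("akka:type=NodeControl"))
```

Because the node leaves before it is stopped, the leader sees an orderly departure instead of an unreachable member, so no auto-down timeout is involved.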

[Image: JConsole (Java Monitoring & Management Console) showing the leaveClusterAndShutdown operation]

See the logs for the case of a graceful shutdown combined with remember entities on.

WORKING ACROSS TWO MACHINES

All the projects that I found on the web are simple examples that start all the nodes on the same machine. But I’m pretty sure you will want to try different machines. For this you need to configure akka.cluster.seed-nodes and akka.remote.netty.tcp properly; after that you can get the same basic example working across a real network. For instance, if you want to start Node 1 on Machine 1 with IP 192.168.7.1 and Node 2 on Machine 2 with IP 192.168.7.2 (see the image below), with both machines on the same network, this would be the configuration for Node 1:

[Image: two nodes running on two machines in the same network]

akka {
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "192.168.7.1"
      port = 2551
      bind-hostname = "192.168.7.1"
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://KlasterSystem@192.168.7.1:2551",
      "akka.tcp://KlasterSystem@192.168.7.2:2551"]
  }
}

And for Node 2 you only need to change the IP addresses in akka.remote.netty.tcp to 192.168.7.2.

CONCLUSION

It’s very interesting what you can achieve with Akka actors sharded across the network. You can scale up and out easily. In the coming weeks we are going live with this new feature, and I’m really excited!

Until next post!
Gabriel.
