Event module


The event module is an integral part of Unify. Most actions (e.g. a page being created, CSS being deployed) cause an event to be sent to the internal handling routines. The module does not have many configuration/tuning settings, but if it is not functioning, none of the core features can work.

The event module was vastly refactored for version 4.4. This page only describes the behaviour of version 4.4 and later.

Brief architecture notes

[Picture: Event flows between servers in an admin-delivery-delivery setup]

Each event can be local (sent within the same cluster node), remote (sent to a specified remote node) or global (sent to both the local node and all remote nodes). Remote communication is transported over a raw TCP socket. The port to listen on is specified when a server instance is created. When a new server appears in the list, all existing servers open new "pipes" to the newly created server.
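The three scopes can be illustrated with a small routing sketch. This is not Unify code; the names (`Scope`, `route`, the node labels) are hypothetical and only show which nodes receive an event of each scope:

```python
from enum import Enum

class Scope(Enum):
    LOCAL = "local"    # handled on this node only
    REMOTE = "remote"  # sent to one specified remote node
    GLOBAL = "global"  # handled locally and sent to every remote node

def route(scope, local_node, remote_nodes, target=None):
    """Return the list of nodes that should receive the event."""
    if scope is Scope.LOCAL:
        return [local_node]
    if scope is Scope.REMOTE:
        return [target]
    return [local_node] + list(remote_nodes)  # Scope.GLOBAL
```

For example, a global event raised on admin server A in an A/D1/D2 cluster is routed to all three nodes, while a local one stays on A.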

The event module keeps a separate event queue per destination; in other words, each server communicates with each other server in the cluster. For example, if the cluster comprises admin server A and delivery servers D1 and D2, A has two separate event queues (to D1 and to D2), D1 has queues to A and D2, and so forth. This distribution requires more running threads but guarantees better event delivery/response times when one of the cluster nodes is under high load, responding erratically due to network errors, etc. Each-to-each node communication is necessary because every node can take in new data, which then needs to be replicated across the cluster.
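The queue-per-destination idea can be sketched as follows. This is an illustrative model, not the actual implementation; `deliver` stands in for whatever performs the TCP send:

```python
import queue
import threading

class EventSender:
    """One queue and one worker thread per destination node, so a slow
    or unreachable node blocks only its own queue."""

    def __init__(self, destinations, deliver):
        self.deliver = deliver  # callable(dest, event), e.g. a TCP send
        self.queues = {dest: queue.Queue() for dest in destinations}
        for dest, q in self.queues.items():
            threading.Thread(target=self._drain, args=(dest, q),
                             daemon=True).start()

    def _drain(self, dest, q):
        while True:
            event = q.get()
            try:
                self.deliver(dest, event)
            finally:
                q.task_done()

    def send(self, dest, event):
        self.queues[dest].put(event)
```

With this layout, admin server A would hold one `EventSender` queue for D1 and one for D2, matching the description above.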

Each event is either serializable or not. Serializable means that if a destination is unavailable, the event is stored in the database and later retrieved by the node's database trawler; a request to deploy a page is a good example of a serializable event. On transport failure, a non-serializable event is simply dropped and discarded, while a serializable event is picked up as soon as the destination node becomes available again. There are relatively few types of non-serializable events, but they occur much more frequently (e.g. DB cache flush requests, heartbeats, etc.).
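The failure-handling rule can be summarized in a short sketch. The `send` and `persist` callables and the event shape are hypothetical stand-ins for the transport and the database used by the trawler:

```python
def dispatch(event, send, persist):
    """Try the transport first; on failure, persist serializable events
    for the database trawler and drop the rest."""
    try:
        send(event)
        return "sent"
    except ConnectionError:
        if event["serializable"]:
            persist(event)   # the trawler re-sends once the node is back
            return "queued"
        return "dropped"     # e.g. a DB cache flush or heartbeat
```

So a page-deploy request survives a node outage via the database, while a heartbeat sent to the same dead node is silently discarded.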

Each server needs to know the status of every other server, so each server sends heartbeat events to all of them: if server A thinks B is offline, a heartbeat message from B tells A that B is alive again, and A resumes sending events to B. Heartbeat events are very lightweight and occur every two seconds (the period is configurable), so the overhead is negligible. To achieve better scalability, heartbeat events might be sent using multicast; however, this is the subject of future implementations.
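The liveness bookkeeping a node performs for its peers can be sketched like this. The class and the three-missed-heartbeats threshold are illustrative assumptions, not documented Unify behaviour:

```python
import time

class PeerTracker:
    """Considers a peer offline if no heartbeat has arrived within a few
    heartbeat periods (the threshold here is a hypothetical choice)."""

    def __init__(self, period_ms=2000, missed_allowed=3):
        self.timeout = period_ms * missed_allowed / 1000.0  # seconds
        self.last_seen = {}

    def heartbeat(self, node, now=None):
        """Record a heartbeat from `node` (so a dead peer comes back)."""
        self.last_seen[node] = time.monotonic() if now is None else now

    def is_alive(self, node, now=None):
        now = time.monotonic() if now is None else now
        seen = self.last_seen.get(node)
        return seen is not None and now - seen <= self.timeout
```

In this model, when B's heartbeats resume after an outage, the very next one flips B back to alive and A can start sending events to it again.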


Changing inter-server heartbeat frequency

Should you need to, you can change the default heartbeat frequency (the default is 2000 ms). Simply go to Configuration->System Properties and set the new value; no server restart is necessary.
