
REST(ful) in peace …

We obviously all love REST, and it’s become the standard, least-surprising choice to make when building any type of service these days – be it monolithic, microservice, web, or whatever.

However, a fundamental change in the way we build new systems is approaching. It has already emerged in the hyperscale enterprise space, and it is starting to make its way into run-of-the-mill software projects. This fundamental change is called “streaming” or “event-driven” (and in earlier incarnations perhaps “CQRS” or “event sourcing”). I’m just going to stick with “streaming” throughout this post.

Streaming architecture really marks the beginning of a new age in software architecture, definitively putting a dagger into the back of the microservice age, which itself was the pinnacle of the service-oriented architecture period in the history of system design.

Streaming is seriously disruptive. It changes everything. Everything. Be prepared to re-think basic concepts like statelessness, databases and even request/response itself, because these are fundamentally at odds with streaming.

And guess what? Life is better. And simpler. And scalable. But also different, so you are allowed to feel uncomfortable! I think we, as software developers, are generally OK with technology changing and evolving rapidly, but it is less common for fundamental design patterns to be disrupted, forcing us to start again and re-programme our neural pathways to think and solve problems in a completely new way.

If you’re like me, you’ve kind of always known that service-oriented architecture was broken, but you made do: it was better than nothing. You happily built XSD/WSDL-based SOAP services. You were happy when REST came along, because it dropped a bunch of evil XML remote procedure calls and introduced the separation of noun from verb (resource and method), with the trade-off that you could no longer programmatically manage service schemas (but nobody cared – XML sucked!).

You jumped on microservices when they appeared, because they promised to make our services scalable and simpler! But they lied … life as a microservice developer quickly became complicated. To be a useful developer you now needed a team of highly skilled DevOps, Kubernetes, Netflix OSS, Spring Boot, Nginx, Vault, Helm, Spinnaker, Elasticsearch, Prometheus, Cassandra and Mongo unicorns as support before you could write and test your first, most basic service. The foundations required for the old “Hello, World” service in a hyperscale microservice architecture are pretty daunting.

So why does a microservice architecture need handfuls of these various technologies and stacks to make it work? My gut says that there must be something fundamentally broken in what we are building upon; something at the core of what we use that doesn’t scale. All these additional technologies and stacks we throw into the mix are really just masks. Maybe they shift the bottlenecks around, or spread them over a wider surface, but they never really get rid of them. And that seems OK to most people – it makes for busy development teams, rockstar architecture teams, and brilliant, hard-to-find DevOps teams – but it all seems like wasted energy, and we haven’t even written a single line of valuable code yet. My theory is that we can trace the blame right back to one very simple little tool: the concept of request/response.

(Figure 1 – The poison)

I think that streaming takes us back to the beginning. Back to basics, but built upon a fundamentally distributed architecture, so all those hyper-scale foundations come mostly for free. Just build simple stuff using the distributed primitives provided. However, not all the primitives you have previously enjoyed and come to love are available. Stuff like our old faithful request/response. And stuff like our trusty databases. And therefore, stuff like RESTful, or HTTP/1.

The streaming primitive is something called The Event.

(Figure 2 – The remedy)

The interesting thing about The Event is that it has no particular direction. It doesn’t move from one place to another, it’s not requesting anything, nor is it in response to anything. It just pops into existence. Anywhere. And whoever might be interested in it can consume it. However, consuming The Event doesn’t destroy it. The Event hangs around persistently, because anyone else might want to consume it too. Even more interestingly, it might be consumed by someone in the future who doesn’t yet exist, and who may not even yet be conceived of.

Even more exciting is that The Event is composable. This means that you are able to create higher-level events from the composition of lower-level events, sort of like Lego. This was infeasible in the request/response world. At the microservices level it is really impossible to leverage composition: a request/response pair simply cannot be glued to another request/response pair. They are un-glueable. Often the best we could do was “orchestration” (which is really just an abstraction over the order in which particular requests and responses should be made by one particular service), but you were never really able to take a number of microservices and compose them into a new, higher-level service. Not without a lot of pain and suffering along the way, and the juice is just not worth the squeeze.

(Figure 3 – Composition of events to form a higher-level event)
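To make the idea concrete, here is a minimal Python sketch of event composition. The event names (`OrderPlaced`, `PaymentReceived`, `OrderConfirmed`) are purely illustrative and not from any particular system: a higher-level event is derived by joining two lower-level event streams on a shared key.

```python
from dataclasses import dataclass

# Illustrative low-level events (frozen=True keeps them immutable).
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str

@dataclass(frozen=True)
class PaymentReceived:
    order_id: str

# The higher-level event, composed from the two above.
@dataclass(frozen=True)
class OrderConfirmed:
    order_id: str

def confirm_orders(events):
    """Join two low-level event streams on order_id: emit OrderConfirmed
    once both an OrderPlaced and a PaymentReceived have been seen."""
    placed, paid, confirmed = set(), set(), set()
    for ev in events:
        if isinstance(ev, OrderPlaced):
            placed.add(ev.order_id)
        elif isinstance(ev, PaymentReceived):
            paid.add(ev.order_id)
        oid = ev.order_id
        if oid in placed and oid in paid and oid not in confirmed:
            confirmed.add(oid)          # emit each confirmation only once
            yield OrderConfirmed(oid)
```

Note that `confirm_orders` is itself just another event producer, so its output can be composed again, Lego-style.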

Composition is a key property of functional programming, and if you are already a functional programmer (or at least think about stuff in a functional way) then streaming is a dream come true. It’s really the architecture that functional programmers have been looking for. It eradicates all the frustration and conflict felt when dealing with external services or external data stores (such as RDBMSs), which are fundamentally non-composable and mutable in nature. Streaming relies on a “durable, immutable commit log” at its core, and if that makes sense to you, you’re probably already at least halfway there.

(Figure 4 – Commit log materialising multiple table structures)

The commit log structure is far superior to a regular database table structure; it is really a higher-level structure from which a table-like structure can easily be derived. It’s a super-structure. In fact, a table-like structure is just one of the types of derived structures you can produce from the commit log (by simply reducing and merging the latest commits per key). There are also a bunch of other interesting structures you can derive, such as a group-by (as in the example), time-based windows, or some sort of aggregation (sum, mean, count, etc.). Of course, the real power of this structure is the practical realisation of time travel: the ability to calculate (or materialise) the state of the world at any point along the event timeline. No state change is discarded; each one is simply appended to the log.
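In sketch form (Python, with invented keys and values), deriving a table from a commit log really is just a reduce over the log, and other derived structures fall out of the same replay:

```python
# A commit log is just an append-only list of (key, value) records.
log = [
    ("alice", 10),
    ("bob", 5),
    ("alice", 12),   # a later commit for the same key supersedes the earlier one
    ("bob", 7),
]

def materialise_table(log):
    """Derive a table-like view: keep only the latest value per key."""
    table = {}
    for key, value in log:
        table[key] = value
    return table

def materialise_count(log):
    """Another derived structure: a group-by count of commits per key."""
    counts = {}
    for key, _ in log:
        counts[key] = counts.get(key, 0) + 1
    return counts

print(materialise_table(log))  # {'alice': 12, 'bob': 7}
print(materialise_count(log))  # {'alice': 2, 'bob': 2}
```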

This power over time is really a central theme of streaming, giving us a powerful tool for dealing with distributed data. The database world struggles to create distributed data systems because it fundamentally embraces neither time nor immutability. We therefore find ourselves in the very difficult situation of trying to reconcile a distributed set of datastores being mutated independently and in an unsynchronised way. This makes for an impossibly hard problem which, I would propose, really isn’t solved today. Stuff like log shipping and eventual consistency might get us some of the way there. And something like Google Spanner, with its use of atomic clocks to keep time synchronised, gives us the ability to tie-break two conflicting changes. But that seems like overkill, and probably not a practical general pattern (unless we commit to shipping an atomic clock with each of our products!)

Commit logs are not a new concept; they are already used internally by popular database technologies to enable concepts such as “transactions” (the ability to roll back or commit) and “replication” (through techniques such as log shipping). However, the database itself does not expose the commit log to the outside world; it’s simply used to retain commits transiently before records are processed into the mutable table structure. And therein lies the fault in database design: treating the commit log as a transient structure, and the materialised tables as the primary persistent structure.

(Figure 5 – Database transient commit log)

Streaming takes the alternative view, promoting the commit log to primary data structure, offering persistence and durability on top of the fact that it’s already a distributed structure. As we have seen, the flattened table structure can be recalculated (re-materialised) at any point along the timeline of the commit log, and can therefore be considered a second-class citizen within streaming. Of course, it would be impractical to constantly re-materialise these views, so there are techniques to make this less cumbersome, but fundamentally the shift is away from treating them as the master structures.


(Figure 6 – Streaming persistent commit log)
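In sketch form (Python, with an invented log), re-materialising the state of the world at an earlier point is nothing more than replaying a prefix of the commit log:

```python
# An append-only commit log of (key, value) records; values invented.
log = [("price", 100), ("price", 105), ("price", 98)]

def materialise_at(log, offset):
    """'Time travel': rebuild the table-like view as it looked after the
    first `offset` commits, by replaying only that prefix of the log."""
    table = {}
    for key, value in log[:offset]:
        table[key] = value
    return table

print(materialise_at(log, 1))         # {'price': 100}
print(materialise_at(log, len(log)))  # {'price': 98}
```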

So what does a stream look like? Conceptually it’s a simple list structure holding records of key=value. Producers append records to the end of the stream and Consumers consume, starting from the beginning of the stream, each consumer having a persistent offset into the stream. Multiple consumers can consume from the same stream, each having a different offset into the same stream. And each consumer can be scaled up independently depending on the workload.

(Figure 7 – Conceptual view of a Stream)
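That conceptual picture fits in a few lines of Python. This is a toy in-memory sketch with made-up consumer names, not how any real broker is implemented:

```python
class Stream:
    """Conceptual stream: an append-only list of (key, value) records,
    plus a persistent offset per named consumer."""
    def __init__(self):
        self.records = []   # the append-only log
        self.offsets = {}   # consumer name -> next offset to read

    def append(self, key, value):
        """Producers only ever append to the end of the stream."""
        self.records.append((key, value))

    def poll(self, consumer):
        """Return unread records for this consumer and advance its offset.
        Consuming never destroys records: other consumers still see them."""
        start = self.offsets.get(consumer, 0)
        batch = self.records[start:]
        self.offsets[consumer] = len(self.records)
        return batch

s = Stream()
s.append("k1", "a")
s.append("k2", "b")
print(s.poll("billing"))   # [('k1', 'a'), ('k2', 'b')]
s.append("k1", "c")
print(s.poll("billing"))   # [('k1', 'c')]
print(s.poll("audit"))     # [('k1', 'a'), ('k2', 'b'), ('k1', 'c')]
```

Note that the “audit” consumer, arriving late, still reads the stream from the beginning, which is exactly the point made about future consumers above.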

Physically, however, streams are a bit more complicated. Due to their distributed nature, they need to split and replicate records across multiple partitions. Each partition can be physically relocated to a separate node in the infrastructure, and each can have different replication properties. As with any distributed data structure, there are a number of constraints and guarantees that a developer must be aware of when designing a streaming application. These generally relate to the ordering and locality of data within a particular partition (as opposed to within the stream as a whole), and typically the developer must use the record key to control which partition a particular record lands in. Although this may seem quite foreign initially, it is just part of understanding the paradigm and a necessary tool for designing streaming applications.
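The usual keying scheme, sketched in Python, hashes the record key modulo the partition count. Kafka’s default partitioner does something similar, though with a different hash function; the partition count here is invented:

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key to a partition. A stable hash is used (unlike
    Python's built-in hash(), which is randomised per process) so the
    mapping survives restarts."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Records with the same key always land in the same partition, which is
# what preserves per-key ordering:
assert partition_for("customer-42") == partition_for("customer-42")
```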

So events within a stream can be composed, but composition can also occur at the higher stream processing level, where a collection of individual consumers and producers can be clustered together and encapsulate their inner workings, exposing only the streams consumed and produced by the composed cluster.

(Figure 8 – A composition of stream processors)

Composing stream processors does not make the cluster any less distributed, and still allows for independent scaling of the individual consumers/producers within the cluster, meaning that you can eradicate bottlenecks at a very fine-grained level. ACID-like guarantees between stream processors are of course also possible, typically through streaming design patterns such as write-once and idempotency, although some of the more mature streaming technologies, such as Apache Kafka, now allow for things like exactly-once guarantees and even transactional rollbacks.
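Idempotency, for example, can be as simple as remembering which record ids have already been applied, so that a redelivered batch has no additional effect. A Python sketch, with `seen_ids` standing in for a durable store of processed ids:

```python
def process_idempotently(records, seen_ids, apply):
    """Idempotent consumption: skip records whose id has already been
    processed, so redelivery after a crash cannot double-apply effects."""
    for record_id, payload in records:
        if record_id in seen_ids:
            continue          # duplicate delivery: safe to ignore
        apply(payload)
        seen_ids.add(record_id)

applied = []
seen = set()
batch = [(1, "debit 10"), (2, "credit 5")]
process_idempotently(batch, seen, applied.append)
process_idempotently(batch, seen, applied.append)  # redelivered batch
print(applied)  # ['debit 10', 'credit 5'] – each applied exactly once
```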

With any new, disruptive technology there is often a plethora of wildly different views, often based on individual context and personal perspective. So I went around and asked a bunch of different people how they would describe streaming, and what they thought a streaming architecture looked like. Keep in mind that all of these people have had experience building applications on top of streaming platforms (typically Apache Kafka, AWS Kinesis or Apache Spark Streaming).

(Figure 9 – Saša’s Washing Machine)

Saša’s view of streaming is that it allows you to model the world (or enterprise, or platform) as a washing machine. Events swirl around inside the drum and multiple chains of events trigger and fire endlessly, incorporating external events into the mix (fabric softener) and generating events that may be consumed externally (soapy water).

(Figure 10 – Saša’s Laundromat)

Once again you could easily reason about the washing machine in a way that would allow composition of multiple washing machines (a laundromat) to form. In reality this could represent things like divisional units, or products within a larger enterprise, and of course you could compose the entire laundromat, linking multiple laundromats together into a higher-level thing. Turtles, all the way down.

(Figure 11 – The data scientists’ neural stream)

Talking to our data-science team about streaming drew a few blank stares, but most of them weren’t uncomfortable with the concept. One of the guys on my team described a streaming architecture as a very simple neural network: input layers represented by ingress streams, multiple hidden layers abstracting the business logic of the application, and an output layer represented by a number of streams that can be consumed by external systems. Even things like recurrence can be represented within the streaming architecture, in a way analogous to back-propagation – so really not too unfamiliar to people who work with neural networks.

(Figure 12 – Marais’ Queue/Store Duality)

Marais sees streaming as a computational graph database in which streams take on a duality of information flow (analogous to a queue) and data store (analogous to a database). Observation of the stream by a particular consumer collapses the stream (for that consumer) into one form or the other. That sounds pretty reasonable to me, and I doubt any other architecture could be compared to the particle/wave duality of quantum mechanics!

(Figure 13 – Paul’s Streaming Smart Contract)

Paul – who is obsessed with blockchain – sees streaming as a platform for building centralised, simple forms of smart contracts. Although blockchain is really disruptive, it often doesn’t pass a number of acid tests for incorporation into an enterprise setting, and the cons can often outweigh the pros for enterprise adoption. Paul sees streaming architecture as the ability to create a number of immutable ledgers, where smart contracts (represented by stream processors) process entries from one ledger and update other downstream immutable ledgers. A perfect fit.

(Figure 14 – Tjaard’s Symmetrical Flat Event Architecture)

Tjaard takes a more practical, “digital channels” view of streaming, concerning himself with how events may impact the end-users of an application. He sees the world as a series of interconnected event stores: one large one centralised in the enterprise (the traditional back-end) and many smaller ones resident on each end-user’s device (browser, mobile, etc.). This allows the front-end application to be built exclusively upon the event store resident on the customer’s device, unlocking features such as offline use and the ability to cache and progressively load content, while allowing events to flow in either direction (up from the device and down from the back-end) when connectivity exists between the two. This is really a beautiful thing in practice: a full web or mobile application driven exclusively by events, without a single request/response interaction required.

In conclusion, I find it really exciting to get five very different views of what streaming architecture looks like. I don’t think you could get such wildly different views from senior people describing a traditional microservices architecture. Of course, we are very early on the streaming hype curve – probably still at the beginning of the early-adopter stage, or possibly even the late innovator stage. Some of the technology in this space is usable (the Confluent gang’s vision for Apache Kafka and the Streams DSL is really compelling), but as with any new technology your mileage may vary, so whether you will see a net benefit from adoption within your enterprise will always be a trade-off. The one thing I can tell you is that Synthesis, as a technology company (and as a skills incubator), will continue to challenge the official future and is all-in on this architecture. We’re ready to support our customers, our partners and the industry as this architecture moves up into the early majority.

Thanks for reading, and stream big!