Data Movement: Most Used Messaging Patterns and Why They are so Important

Systems engineers always have to take data movement into account when designing systems of any scale. In practice, a discussion of data movement doubles as a primer on the most used messaging patterns: publish-subscribe (pub/sub), event streaming, and message queuing.

Across markets, business decisions are increasingly data-driven. One problem remains, however: which system to use to connect the data sources and query the data, especially at scale. Today, we will take a closer look at data movement and why messaging patterns are so important. 

Data Movement Enables Digital Businesses

In the past, only a handful of large enterprises used cutting-edge technology to extend internal capabilities, create new value in their business model, and improve customer experiences. The situation today is quite different: it is hard to imagine a business that doesn't use some kind of tech stack to do all of the above. To some extent, every SMB and enterprise can be considered a digital business. 

There is one more thing that changed: businesses and their customers/users started to generate copious amounts of data. Moving that data physically was no longer feasible, and this became the major challenge of modern data movement initiatives. Engineers looked into the problem and came up with two possible solutions: build more robust database systems, or move from batch to real-time workflows.

Moving from batch to real-time workflows reduces or, in some instances, completely removes the need to move data. This brings us to distributed data movement and the three most used messaging patterns. Let’s quickly review how each of the patterns was developed, the best use cases, and their pros and cons.


Publish-Subscribe (Pub/Sub)

The publish-subscribe messaging pattern is easy to understand. Senders are called publishers, and receivers are called subscribers. Messages are not sent directly to specific receivers; instead, the system categorizes published messages into classes, without knowing whether any subscribers exist or how many there are. 

To receive messages, a subscriber expresses interest in one or more of the available classes. Subscribers then receive the messages that interest them without learning anything about the publishers, including who they are or whether they exist at all.

Pub/Sub systems also feature message filtering, enabling subscribers to receive specific messages. The system supports content-based and topic-based filtering. In content-based filtering, a subscriber defines filters and receives only content with matching attributes. In topic-based filtering, a publisher defines the topic to which subscribers can have access. 

The system architecture is also simple. The messages are posted to the event bus or message broker. Once a subscriber registers with the broker, the broker handles filtering. A broker can also prioritize messages in the queue. The first pub/sub system was the news subsystem for the Isis Toolkit, launched in 1987. 
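
The architecture above can be sketched in a few lines of Python. This is a toy, in-memory illustration of topic-based pub/sub (the class and names are made up for the example, not taken from any particular library):

```python
from collections import defaultdict

class Broker:
    """Toy in-memory broker illustrating topic-based pub/sub."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A subscriber registers interest in a topic; it never
        # learns anything about the publishers.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher does not know if (or how many) subscribers exist;
        # the broker handles routing by topic.
        for callback in self._subscribers.get(topic, []):
            callback(message)

received = []
broker = Broker()
broker.subscribe("orders", received.append)
broker.publish("orders", {"id": 1, "total": 9.99})
broker.publish("payments", {"id": 2})  # no subscriber: message is dropped
print(received)  # → [{'id': 1, 'total': 9.99}]
```

Note how the example also demonstrates the "no guarantee a subscriber is listening" behavior discussed below: the message on the "payments" topic simply disappears.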


The pub/sub pattern offers valuable benefits for data movement initiatives. Here are the two most noteworthy ones:

  • Scalability – traditional client-server patterns had significant limitations, especially when it comes to scalability. Pub/sub addressed this by introducing network- or tree-based routing, message caching, and parallel operation. Over time, however, the pattern proved less scalable than expected: in enterprise environments with data centers consisting of thousands of servers, pub/sub caused high latency and offered no delivery guarantees;
  • Loose coupling – publishers and subscribers are loosely coupled: they operate independently, with the focus on topics rather than on each other. This was a huge step forward, given that in client-server patterns a client could not post messages unless the relevant server process was running, and the server could not receive messages unless the client was there. 


With more and more enterprises depending on the pub/sub messaging pattern, a couple of downsides emerged:

  • No delivery guarantee – the broker in a pub/sub system is in charge of delivering messages, and it can be configured to attempt delivery only for a limited time. When that time expires, the broker stops delivering the message, whether or not it has received confirmation that the subscriber got it. This makes the pub/sub pattern unusable in applications where a delivery guarantee is a requirement;
  • No guarantee that a subscriber is listening – in applications where a subscriber must receive messages, such as when failures and errors need to be logged, the publisher will keep publishing even if no subscriber is listening. This downside is a direct result of publisher/subscriber decoupling;
  • Instability and slowdowns with load surges – while the pub/sub pattern works well with small work batches, it can become unstable as the workload increases, especially under load surges, which occur when subscribers make too many requests. Slowdowns refer to message volume flowing slowly to individual subscribers, which often happens when more applications share the same pub/sub system.

Message Queuing

Message queuing is the next widely used messaging pattern. A message queue is a queue of messages that two apps send to one another. A message in a queue is a work object waiting to be processed. It can be a plain message, information about another task, or an instruction that tells an app to execute a command. 

In pub/sub architecture, we had publishers and subscribers. Here we have:

  • Producers – apps that create messages and deliver them to the message queue;
  • Consumers – apps that connect to the message queue and get the messages.

Message queuing supports both synchronous and asynchronous communication. The fact that it can be used for asynchronous communication is what makes it viable for current data movement initiatives. 

A producer can deliver a message to the queue without any requirement on when processing continues: it keeps executing its own tasks without waiting for a response. This means the pattern also features a decoupled topology. 
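
This decoupling can be sketched with Python's standard library. The example below is a toy, not a production message broker; the task strings are made up for illustration:

```python
import queue
import threading

work_queue = queue.Queue()
results = []

def consumer():
    # Each message is taken by exactly one consumer and removed
    # from the queue once received.
    while True:
        msg = work_queue.get()
        if msg is None:  # sentinel value: stop consuming
            break
        results.append(msg.upper())  # "process" the work object
        work_queue.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# The producer enqueues work and moves on immediately; it does not
# wait for the consumer to finish processing each message.
for task in ["resize image", "send email"]:
    work_queue.put(task)
work_queue.put(None)
worker.join()
print(results)  # → ['RESIZE IMAGE', 'SEND EMAIL']
```

Because the producer and consumer only share the queue, either side can be scaled or replaced independently, which is exactly the property that makes message queuing attractive for data movement.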


The main advantages of message queuing include:

  • Eliminates redundancy – with a message queue you can ensure that every message is consumed by only one consumer. It makes MQ perfect for work queues and task lists. Thanks to MQ, you can completely eliminate redundancy;
  • Support for high-volume consumption rates – MQ enables you to add several consumers to the same queue while ensuring each message is processed by only one of them. This lets you consume messages quickly and efficiently;
  • Perfect when messages need to be processed only once and in no predefined order – MQ allows you to process messages however you see fit, and the pattern really shines when your goal is simply to process every message in the queue, regardless of order;
  • Scalability – since producers and consumers are decoupled, you can easily maintain the system and scale it to reflect your needs.


When it comes to the MQ downsides, you should know the following:

  • One-to-one delivery – once a consumer receives a message from the queue, the message is no longer available. If that consumer fails, the message is lost and you need to roll back;
  • Increased operational complexity – message queues don't work out of the box; you need to create, configure, and monitor them, and the same goes for producers and consumers. This can cause operational complexity when you start scaling up. 

Event Streaming

The event streaming messaging pattern enables you to connect multiple, disparate live data sources. Thanks to event streaming, you can leverage an incredible amount of business data and create data pipelines to gain real-time insights. Event-driven architecture is becoming increasingly popular.

Event streaming enables event stream processing: you can take action on numerous data points across systems that generate data in real time. In this messaging pattern, an event is a data point, and a stream is the continuous delivery of events. 

A data stream, or streaming data, is a series of events. You can take all sorts of actions on these events, including ingestion, transformation, enrichment, aggregation, and analytics. 
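
As an illustrative sketch (the sensor readings and function names here are invented for the example), a tiny event stream pipeline with a transformation step and an aggregation step could look like this in Python:

```python
def event_stream():
    # Stand-in for a live source: each event is a single data point.
    yield {"sensor": "a", "temp_c": 20.0}
    yield {"sensor": "b", "temp_c": 15.0}
    yield {"sensor": "a", "temp_c": 25.0}

def transform(events):
    # Transformation/enrichment step: convert units as events flow by.
    for e in events:
        yield {**e, "temp_f": e["temp_c"] * 9 / 5 + 32}

def aggregate(events):
    # Aggregation step: running maximum temperature per sensor.
    peaks = {}
    for e in events:
        peaks[e["sensor"]] = max(peaks.get(e["sensor"], float("-inf")), e["temp_f"])
    return peaks

peaks = aggregate(transform(event_stream()))
print(peaks)  # → {'a': 77.0, 'b': 59.0}
```

Because each stage consumes events one at a time, the same pipeline shape works whether the source yields three events or three billion, which is what makes this style a natural fit for data in motion.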

Event stream environments are especially beneficial when you need to take action on a constant flow of data. With data movement shifting from data at rest to data in motion, event streaming becomes a necessity. 


Event streaming offers the following unique benefits:

  • Ability to connect multiple data sources – businesses generate data at multiple points, and event streaming enables them to connect and use previously untapped data;
  • Real-time insights – with event streaming, companies are able to process the data the moment it is generated. It provides real-time insights and creates a lot of opportunities for businesses to react;
  • Great fault tolerance – all services in event streaming are decoupled, meaning that faults are isolated to single sources and that communication can continue;
  • Enables modern data movement – data is generated everywhere, around the clock. Event streaming enables organizations to query this data and use it to make informed business decisions.


Event streaming also comes with a couple of downsides:

  • Complexity – setting up and managing an event streaming architecture can be challenging for some teams;
  • Error handling – with hundreds of services and message brokers, it can be challenging to identify the cause of errors and prevent them from occurring in the future.

All three popular messaging patterns come with their own specific upsides and downsides. What if we told you that there is a messaging platform that combines all three into a best-of-breed solution? Say hello to Apache Pulsar. Let's quickly see why Pulsar brings together the best of all three messaging topologies.

The Benefits of Using Apache Pulsar for Your Messaging Needs

Pulsar started as an internal project at Yahoo!. The company used it to connect its app stack seamlessly. Once they turned it into open-source software, Pulsar became a top-level Apache Software Foundation project. Today it is a cutting-edge cloud-native, distributed messaging and streaming platform. 

Reduced Operational Complexity

Handling big data, machine learning, and artificial intelligence doesn’t necessarily call for complex solutions. Pulsar offers a way to improve your data streaming and processing capabilities without having to make your operation too complex. 

Pulsar's main advantage comes from keeping its storage and compute architectures separate: you can extend the storage layer and the serving layer independently, which allows effortless scaling. Being cloud-native also means Pulsar can automatically scale up or down so your system adapts to load spikes.

The complexity of cluster upgrade and expansion is minimal, thanks to its hierarchical architecture. 

Lower Overall Costs

With Pulsar your costs will depend on how many nodes you use. Using less computing power and resources means that you will be able to bring your costs down. Pulsar was built with performance in mind, allowing it to deliver exceptional performance with minimal nodes.

Compared to alternative solutions on the market, Pulsar delivers lower overall consumption costs, and the operational costs tied to keeping systems up go down as well. Pulsar stores data in pieces and distributes them evenly across the BookKeeper nodes in the storage layer. This way of handling data delivers high availability and increased performance.

Enterprise Prepared for Demands of Machine Learning

Pulsar's unique architecture and features make it an optimal solution for machine learning initiatives. ML requires abundant storage capacity and processing power, but these requirements are not set in stone and can change in real time. Pulsar can help you future-proof your enterprise against the coming demands of machine learning.

With its flexible scaling, Pulsar can support even the most demanding ML solutions. Zero data loss, separate storage and broker layers, tiered storage (SSD- and HDD-based), a quorum-based replication algorithm for minimal latency, and multi-tenancy are the features that make Pulsar the best option on the market. 
While Pulsar is considered a mission-critical solution for ML projects and agile data movement, it is not something that you want to jump into if you don’t have an in-house team of developers experienced with distributed messaging and streaming platforms. This is where Pandio’s managed Pulsar services can help you get a competitive advantage. Feel free to talk to one of our Pulsar experts here at Pandio to learn more.
