An Introduction to Compute at the Edge
Edge Computing refers to computation-intensive tasks executed at or near data sources and end users, with two primary motivations. First, the massive data generated by IoT devices and smartphones should be processed near the source when feasible, to avoid wasting bandwidth shipping it to the Cloud. Second, many applications, emergency medical services for example, demand the fastest possible intelligence result and can achieve it on the Edge. The Edge is metaphorical, of course, since a multidimensional network has no physical edge; the computing is simply located wherever the application's speed and efficiency requirements dictate. We will explore a number of use cases for Edge Computing and see how event streaming supports machine learning and other applications that run near the data source.
Rising Tide of IoT Data vs. the Costly Cloud
Cisco estimates that 50 billion IoT devices are now connected, streaming Big Data to diverse applications in the Cloud, even though many of those applications would work better in an Edge Computing context. The trend is undeniable: by 2021, an estimated 850 Zettabytes will be generated outside the Cloud, while the Cloud will handle only 20 ZB. That gap suggests engineers increasingly prefer Edge Computing architectures for smarter applications. Indeed, Cloud infrastructure capacity may simply not be sufficient to handle the unprecedented tide of IoT data.
In addition to Cloud limitations, many applications demand the shortest possible time to decision. Autonomous vehicles are the epitome here, ideally combining 5G mobile access to Google real-time traffic data with onboard GPU arrays that run computer vision algorithms quickly against emerging traffic and pedestrian imagery. In this use case, as little as possible of the massive data from vehicle sensors should make the round trip to the Cloud. A precise balance of Edge and Cloud is needed for safety and reliability before the robot is released onto the complex streets of San Francisco. Let's look at three more of the most important categories of use cases for Edge Computing.
Manufacturing & Predictive Maintenance
Edge Computing is the natural engineering reply to the exponential growth of Big Data flowing from IoT devices, growth the Cloud cannot process with ideal speed and efficiency. Predictive maintenance in manufacturing is a best-practice use case for Edge Computing: the goal is to predict machine failure and to adjust controls, dispatch engineers, and replace parts before the failure occurs. IoT sensors in factory equipment generate voluminous raw data, and when ML and AI methods run on local Edge servers, the immediately urgent insights can be extracted on the spot. Eliminating the latency of a Cloud round trip can thus prevent expensive downtime and production delays. In the manufacturing context, Edge Computing also filters raw data intelligently, separating the results needed immediately from the data that is relevant to Cloud applications for long-range business intelligence analytics. Additional manufacturing use cases that benefit from Edge Computing include:
- Adaptive manufacturing methods
- Condition-based monitoring
- Event streaming augmented reality
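The filtering idea described above can be sketched in a few lines. The sensor values, window size, and threshold below are hypothetical; the point is only that the edge node keeps the bulk of the raw readings local and forwards just the anomalies upstream:

```python
from statistics import mean, stdev

def filter_anomalies(readings, window=20, threshold=3.0):
    """Forward only readings that deviate sharply from the recent baseline.

    Everything else stays on the edge node; only flagged anomalies (and
    periodic summaries, omitted here) would be sent on to the Cloud.
    """
    forwarded = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            forwarded.append((i, readings[i]))
    return forwarded

# Simulated vibration sensor: steady signal with one fault spike at index 50.
data = [1.0 + 0.01 * (i % 5) for i in range(100)]
data[50] = 5.0  # simulated bearing fault
print(filter_anomalies(data))  # only the spike is forwarded
```

A production system would use a trained model rather than a rolling z-score, but the data-reduction pattern is the same.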
Finance and Banking Use Cases
Edge Computing at a bank branch makes increasing levels of personalized customer service possible. Edge-based facial recognition algorithms, fed by IoT cameras in the waiting area and at teller windows, can prompt tellers about a client's needs or update signage with bank product info relevant to that customer's predicted needs, to name just two of the myriad possibilities. On the back end of financial services, the reduced latency enables:
- Improved fraud detection
- Faster credit risk assessment
- Instant account and portfolio analysis
- Operation scaling
- Customer behavior analysis
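To make the fraud detection bullet concrete, here is a minimal sketch of rule-based scoring running on a branch edge server. The transaction and profile fields, weights, and thresholds are all hypothetical; a real system would use a trained model, but the point is that scoring happens locally, so the answer arrives in milliseconds:

```python
def fraud_score(txn, profile):
    """Toy rule-based fraud score computed at the branch edge server.

    Field names and weights are illustrative only, not a real scoring model.
    """
    score = 0.0
    if txn["amount"] > 10 * profile["avg_amount"]:
        score += 0.5  # order of magnitude larger than usual
    if txn["country"] != profile["home_country"]:
        score += 0.3  # out-of-pattern location
    if txn["hour"] < 6 or txn["hour"] > 22:
        score += 0.2  # unusual hour
    return score

profile = {"avg_amount": 120.0, "home_country": "US"}
ok = {"amount": 150.0, "country": "US", "hour": 14}
odd = {"amount": 5000.0, "country": "RO", "hour": 3}
print(fraud_score(ok, profile), fraud_score(odd, profile))
```

Only transactions whose score crosses a chosen threshold would need to be escalated to Cloud-side review, keeping the sensitive raw data in-house.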
Financial institutions are deeply concerned with customer data security and regulatory compliance. Edge Computing helps on both fronts by retaining more sensitive data in-house instead of piping it out to Cloud applications.
Edge Computing Transforms Telecom
As 5G prepares to facilitate the swing of Big Data from the Cloud to local Edge Computing servers, the Telecom industry is adopting ingenious Edge Computing use cases of its own.
Telecom enterprises are now installing micro data centers at cell tower sites to manage routing and derive customer activity insights in real time; cell towers are a natural point of intelligent data aggregation. The strategy also helps solve the last mile problem, in which the final leg of a data packet's journey through the network takes the longest. Since telecom companies must physically handle the data dumped by IoT devices, it turns out, paradoxically, that Edge Computing solves problems caused by the Edge itself!
Event Streaming and Edge Computing
An important component of Edge Computing is the data pipeline that connects IoT devices to Edge Computing server clusters. Event streaming of live transaction data from an ATM to a local Edge server is often achieved with Apache Pulsar, a best-in-class Pub/Sub messaging solution. Pulsar is Cloud-native and Edge-native, engineered for very low latency and zero data loss. Given the complexity of multiplexing Cloud and Edge, many engineers now prefer a hosted Apache Pulsar solution with its own built-in neural network, such as Pandio, whose engineers have unparalleled experience adapting event streams to machine learning applications.
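The topic-based Pub/Sub flow that Pulsar provides can be illustrated with an in-memory stand-in. This sketch omits everything that makes Pulsar valuable in production (persistence, partitioning, acknowledgements, the network broker itself); it only shows the producer/topic/consumer shape an edge pipeline is built on, with hypothetical topic and message names:

```python
from collections import defaultdict

class MiniBroker:
    """In-memory stand-in for a Pub/Sub broker such as Apache Pulsar.

    Real brokers add durability, partitioning, and delivery guarantees;
    this sketch only demonstrates the topic/subscribe/publish flow.
    """
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

broker = MiniBroker()
received = []
# An edge ML service subscribes to the hypothetical ATM transaction topic.
broker.subscribe("atm-transactions", received.append)
broker.publish("atm-transactions", {"atm_id": "A7", "amount": 60})
print(received)
```

In a real deployment, the handler would be a consumer process on the Edge server feeding the ML model, and the producer would be the ATM-side client.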
Decentralizing: From Cloud to Foggy Edge
Edge Computing is sometimes called Fog Computing, extending the Cloud analogy. Edge methods are rapidly gaining popularity as a refinement of Cloud methods. The three primary advantages of strategically combining Cloud and Edge are:
- Reduced bandwidth cost: local, distributed Edge nodes handle locally urgent computation without transmitting raw data to Cloud clusters and incurring pay-per-use bandwidth charges.
- Reduced latency: locally relevant business intelligence, actionable insights, and service responses arrive faster when raw data is processed at or near the source.
- Cloud-Edge balance: applications are mutually reinforcing when an accurate balance is achieved between Cloud and Edge workloads.
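The bandwidth advantage in the list above can be made concrete with a back-of-the-envelope calculation. The sensor counts and message sizes below are hypothetical, chosen only to show the shape of the savings:

```python
# Hypothetical fleet: 1,000 sensors, each emitting a 200-byte reading
# every second, versus shipping one 512-byte per-minute summary per
# sensor after edge-side aggregation.
raw_bytes_per_min = 1_000 * 200 * 60   # send every raw reading to the Cloud
edge_bytes_per_min = 1_000 * 512       # send only edge-computed summaries
reduction = raw_bytes_per_min / edge_bytes_per_min
print(round(reduction, 1))             # ~23x less data crossing the WAN
```

With pay-per-use Cloud bandwidth, that ratio translates directly into cost savings, and it grows as sampling rates rise.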
The implication of Edge Computing is that many applications can efficiently run ML at the data source and send only the needed results to the Cloud. In other words, massive data should be processed at the source when possible instead of being shipped to the Cloud wholesale. For example, IoT medical monitoring equipment can alert emergency personnel directly, without prior Cloud processing, sidestepping both the delay and the cost of sending vast data upstream. By leveraging Edge Computing, enterprises will be able to solve problems faster, drive down computing costs, and be even more effective with the massive amounts of data they are collecting every day.