Edge Computing vs. Fog: What’s the Difference?

So far, we have familiarized ourselves with cloud computing and its taxonomy, and we have seen many providers and the services they offer. Today, in this article, we're going to discuss two more important terms: edge computing and fog computing. We will try to understand what they are and why they are needed, and along the way we will also discuss their basic differences. So, let's start.

[Figure: Trends in IoT. Credit: Google.com]

The use of IoT devices has increased significantly over the past few years. A report on Statista reveals that we can expect over 75 billion IoT devices to be active by 2025. The fundamental objective of the Internet of Things (IoT) is to obtain and analyze data from assets that were previously disconnected from most data processing tools. This matters for improving the customer experience and for collecting data in new and inventive ways. With the explosion of data, devices, and interactions, cloud architecture on its own can't handle the influx of information.

While the cloud gives us easy, cost-effective access to compute, storage, and even connectivity, these centralized resources can create delays and performance issues for devices, and for data, that are far from a centralized public cloud or data center. Edge computing and fog computing are two potential solutions to that issue. They share similar objectives:

  1. Reduce the amount of data sent to the cloud
  2. Decrease network and internet latency
  3. Improve system response time in remote mission-critical applications

Edge Computing and Fog Computing

Edge computing, often termed 'the edge', processes data away from centralized storage, keeping information on the local parts of the network, i.e. the edge devices. When data is sent to an edge device, it can be processed directly on it, without being sent to the centralized cloud. Edge computing pushes the intelligence, processing power, and communication capabilities of an edge gateway or appliance directly into devices such as PLCs (programmable logic controllers), PACs (programmable automation controllers), and especially EPICs (edge programmable industrial controllers). This simplifies the communication chain and reduces potential points of failure; as a result, it saves time and money, which is key to the success of IoT applications.
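
To make the idea concrete, here is a minimal sketch of on-device decision-making. The sensor and actuator calls are simulated; in practice read_sensor() and set_valve() would be real driver calls (e.g. over Modbus or GPIO), and the threshold is invented for illustration.

```python
import random
import time

TEMP_LIMIT_C = 80.0  # illustrative threshold, not from any real spec

def read_sensor() -> float:
    # Stand-in for a real driver call on the edge device; simulated here.
    return random.uniform(60.0, 90.0)

def set_valve(open_pct: int) -> None:
    # Stand-in for a real actuator call on the same controller.
    print(f"valve -> {open_pct}%")

def control_loop(cycles: int = 20) -> None:
    for _ in range(cycles):
        temp = read_sensor()
        # The decision is made on the device itself: no round trip to the cloud.
        set_valve(100 if temp > TEMP_LIMIT_C else 0)
        time.sleep(0.1)  # ~10 Hz loop; latency is local only

if __name__ == "__main__":
    control_loop()
```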

Fog computing, on the other hand, pushes intelligence down to the local area network (LAN) level of the architecture, processing data in a fog node or IoT gateway. It is basically a standard that defines how edge computing should work, and it facilitates the operation of compute, storage, and networking services between end devices and cloud computing data centers. Additionally, many use fog as a jumping-off point for edge computing. It transports data from the device to the cloud in three steps:

  1. First, the electrical signals from the devices are wired to the I/O points of a PLC or PAC. The automation controller executes a control system program to automate the 'things'.
  2. Next, the data from the control system program is sent to an OPC server or protocol gateway, which converts it into a protocol that Internet systems understand, such as MQTT or HTTP (a sketch of this step follows the list).
  3. Then the data is sent to a fog node or IoT gateway on the LAN, which collects it and performs higher-level processing and analysis. This system filters, analyzes, and processes the data, and may even store it for later transmission to the cloud or WAN.
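
Step 2 might look roughly like the following sketch, which reads controller values and republishes them as MQTT/JSON. The broker host, topic, and register contents are hypothetical, and the paho-mqtt package (pip install paho-mqtt) is assumed; a real gateway would read the values over OPC or Modbus instead of the stub shown here.

```python
import json
import time

import paho.mqtt.publish as publish  # assumes paho-mqtt is installed

BROKER = "fog-node.local"          # hypothetical fog node / IoT gateway on the LAN
TOPIC = "plant/line1/plc7/engine"  # hypothetical topic naming scheme

def read_plc_registers() -> dict:
    # Stand-in for an OPC or Modbus read of the controller's I/O points.
    return {"temp_c": 71.4, "rpm": 1480, "ts": time.time()}

def forward_once() -> None:
    # Convert the controller data into a protocol Internet systems understand.
    payload = json.dumps(read_plc_registers())
    publish.single(TOPIC, payload, hostname=BROKER)

if __name__ == "__main__":
    forward_once()
```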

Difference Between Fog and Edge Computing

Edge and fog computing differ in their design and purpose but often complement each other. Cisco coined the term fog computing; it describes a mix of a traditional centralized data storage system and the cloud.

Computing is performed on local networks, although the servers themselves are decentralized. The data can therefore be accessed offline, because portions of it are stored locally as well. Both technologies can help organizations reduce their reliance on cloud-based platforms for data analysis, which often leads to latency issues, and instead make data-driven decisions faster. The main difference between edge computing and fog computing comes down to where the processing of that data takes place.

How do fog and edge computing work?

Edge and fog computing architectures are all about the Internet of Things (IoT): they show up wherever we deal with remote sensors or devices. Let's focus on a nice case study reported in a Cisco article.

Consider Bombardier, an aerospace company, which in 2016 opted to use sensors in its aircraft. That move offered an opportunity to generate more revenue by giving Bombardier real-time performance data on its engines, so it could address problems proactively without grounding its aircraft to fix an issue.

The ability to place processing at the edge, next to a jet engine sensor, has a real impact: one can instantly determine the status of the engine. This eliminates the need to send engine sensor data back to a central server, whether on the plane or in the cloud, to resolve pressing tactical issues such as whether the engine is overheating or burning too lean.

There are innovative things to do with that jet engine data that should not typically take place at the edge. Consider predictive analytics to determine whether the engine is about to fail, based on sensor data gathered over the past month. Or the data analysis might involve root-cause work, such as determining why an engine overheated rather than just indicating that it is overheating. These strategic processes are better placed on centralized servers that can store and process petabytes of data, such as a public cloud.

Case study adapted from David Linthicum.
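
The tactical/strategic split can be sketched in a few lines. First, the edge-side check, which classifies the latest sample instantly; the thresholds here are invented for illustration:

```python
def engine_status(temp_c: float, fuel_air_ratio: float) -> str:
    # Runs next to the sensor: the answer is available immediately,
    # with no data sent to a server on the plane or in the cloud.
    if temp_c > 900.0:          # hypothetical overheat limit
        return "OVERHEATING"
    if fuel_air_ratio < 0.045:  # hypothetical lean-burn limit
        return "BURNING LEAN"
    return "OK"

print(engine_status(temp_c=915.0, fuel_air_ratio=0.05))  # -> OVERHEATING
```

And second, a toy version of the strategic, cloud-side analysis: fitting a least-squares trend over a month of daily peak temperatures (synthetic numbers here) to flag an engine that is drifting hot before it fails:

```python
def trend_slope(values: list[float]) -> float:
    """Ordinary least-squares slope of values against their index."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# e.g. 30 daily peak readings pulled from cloud storage (synthetic data)
daily_peaks = [850 + 0.8 * day for day in range(30)]

if trend_slope(daily_peaks) > 0.5:  # hypothetical drift threshold, deg C/day
    print("engine trending hot -- schedule an inspection")
```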

Benefits of Both Approaches

Edge computing maintains all data and processing on the device that initially created it. This keeps the data discrete and contained within its source of truth: the originating device.

  1. No delays in data processing. The data stays on the 'edges' of the IoT network and can be acted on immediately.
  2. Real-time data analysis.
  3. Low network traffic. The data is processed locally first, and only then sent to the main storage (see the sketch after this list).
  4. Reduced operating costs. Data management takes less time and computing power because each operation has a single destination, instead of circling from the center to local drives.
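
A quick, purely illustrative way to see benefit 3: instead of shipping every raw sample, the edge device sends one small summary per window. The sample count and payload format below are arbitrary.

```python
import json
import random

raw = [random.uniform(60, 90) for _ in range(1000)]  # e.g. 1000 raw samples

summary = {
    "count": len(raw),
    "min": min(raw),
    "max": max(raw),
    "avg": sum(raw) / len(raw),
}

raw_bytes = len(json.dumps(raw))
summary_bytes = len(json.dumps(summary))
print(f"{raw_bytes} B raw -> {summary_bytes} B summary "
      f"({raw_bytes / summary_bytes:.0f}x less traffic)")
```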

The Role in IoT

Edge computing is an evolution with the potential to move computation and storage far closer to the endpoint, so that an electricity smart meter or a CCTV system can run without a continual connection to the internet.
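
One common pattern for running without a continual connection is store-and-forward: buffer readings locally and flush the backlog whenever the uplink returns. In this sketch, is_online() and send() are hypothetical stand-ins for a real connectivity check and upload call.

```python
import random
import time
from collections import deque

buffer: deque = deque(maxlen=10_000)  # bounded local store on the meter

def is_online() -> bool:
    return random.random() > 0.5      # simulated flaky uplink

def send(reading: dict) -> None:
    print("uploaded:", reading)       # stand-in for a real HTTP/MQTT upload

for i in range(10):
    buffer.append({"seq": i, "kwh": round(random.uniform(0.1, 0.5), 3)})
    while buffer and is_online():
        send(buffer.popleft())        # drain the backlog while the link is up
    time.sleep(0.1)
```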

According to Gartner, 50 percent of big enterprises will deploy at least six edge computing use cases by 2023, compared with just one percent in 2019. That massive rise in exploration, followed by implementation, will lead to a surge in the amount of data collected. The three fastest-growing segments in 2020 could be utilities, physical security, and automotive.

Thankfully, edge computing also helps in that regard, as it can analyze and filter raw data sets and send only the valuable information back to a data center.

That should keep network costs lower than they would otherwise be, while maintaining data quality and scope. The value is compounded with the use of AI: machine learning models improve the quality of the filter.
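
As a simple stand-in for such a filter, the sketch below forwards only samples that look anomalous using a z-score test; a trained machine learning model could play the same role with better precision. All numbers are synthetic.

```python
import random
import statistics

window = [random.gauss(70, 2) for _ in range(200)]  # recent "normal" readings
mu, sigma = statistics.mean(window), statistics.stdev(window)

def is_valuable(sample: float, z_limit: float = 3.0) -> bool:
    # Forward only samples far outside the recent distribution.
    return abs(sample - mu) / sigma > z_limit

incoming = [70.5, 69.8, 88.0, 71.2]           # 88.0 should stand out
to_cloud = [s for s in incoming if is_valuable(s)]
print("forwarded to data center:", to_cloud)  # the rest stays at the edge
```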

Edge computing also reduces the security risk of IoT data being intercepted in transit, a risk that grows when all of the raw data is moved directly to the data center.

Moving everything to the center may also make many use cases financially unfeasible, as a business would have to pay the major cloud providers significant amounts to house raw data from millions of IoT devices.