Now that we’re taking computing to the edge, an emerging approach is challenging both the cloud-only model and edge computing as we know it.
Edge computing is getting a lot of attention nowadays, and for good reason. The architecture places some processing as close as possible to the point of data consumption. Think of the computer systems in your car, industrial robots, and now fully connected mini-clouds such as Microsoft’s Azure Stack and AWS Outposts: all examples of edge computing.
The typical architectural approach to edge computing, and to IoT (Internet of Things) for that matter, is to keep mirror copies of edge workloads in the public cloud. Think of this as a twin of what exists on a device or edge computing platform, letting you synchronize changes and manage the configurations of your “edges” centrally.
The problem with this model is that it’s static. Processing and data are tightly bound either to the public cloud or to an edge platform. The processes and data stores themselves don’t move, even though data is transmitted back and forth. This is classic distributed architecture.
The trouble with the classic approach is that processing requirements and I/O load can sometimes spike to 10 times the normal level. Edge devices are often not powerful enough to absorb that: their tasks are narrowly defined, and edge applications are built to match the limited resources of the device or platform they run on. As edge devices become more popular, the load on them will grow until they regularly hit, or exceed, limits they cannot handle.
The answer is to move processing and data storage dynamically from edge devices to the public cloud. Since a copy already exists with the public cloud provider, that is less problematic than it sounds. You’ll need to keep data, applications, and configurations synchronized so that at any moment one side can stand in for the other (active/active).
The idea here is to keep things as simple as possible. When an edge device lacks the processing power a particular use case requires, processing shifts from edge to cloud, where CPU and storage resources are effectively unlimited and the workload can scale. When the spike passes, processing returns to the edge device along with synchronized, updated data.
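A minimal sketch of that edge-to-cloud handoff in Python, assuming a simple threshold-based dispatcher. All names, limits, and functions here are illustrative assumptions, not any real edge platform’s API:

```python
from dataclasses import dataclass

# Hypothetical resource ceiling for the edge device (illustrative values).
EDGE_CPU_LIMIT = 4.0      # CPU cores available on the edge device
EDGE_SPIKE_FACTOR = 10.0  # spikes can reach ~10x the normal load

@dataclass
class Workload:
    name: str
    cpu_demand: float  # cores this workload needs right now

def run_on_edge(w: Workload) -> str:
    # Normal case: process close to the point of data origin.
    return f"{w.name}: processed at the edge"

def run_in_cloud(w: Workload) -> str:
    # Burst case: hand the work to the synchronized cloud copy.
    # A real system would also sync the updated data back to the edge
    # once the spike passes (the active/active arrangement above).
    return f"{w.name}: burst to cloud copy"

def dispatch(w: Workload) -> str:
    """Route to the edge when it can handle the load; otherwise
    burst to the public cloud, where capacity is effectively unlimited."""
    if w.cpu_demand <= EDGE_CPU_LIMIT:
        return run_on_edge(w)
    return run_in_cloud(w)

# A routine workload stays on the device; a 10x spike moves to the cloud.
print(dispatch(Workload("telemetry", cpu_demand=1.5)))
print(dispatch(Workload("retrain-model", cpu_demand=1.5 * EDGE_SPIKE_FACTOR)))
```

The design choice worth noting is that the dispatcher is stateless: because the cloud already holds a synchronized copy, the routing decision can be made per request without migrating anything at decision time.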
Some ask the reasonable question: why not just keep processing and data in the cloud and not bother with edge devices at all? Edge remains a necessary architectural pattern, with processing and data storage located as close as possible to the point of origin. Dynamic dispersion simply promotes workloads to central processing as needed, dynamically. It provides the architectural advantage of scalability without giving up the edge functionality you need.
A little something to add to your cloud architecture trick bag.