In any Industrial Internet of Things (IIoT) deployment, edge computing happens wherever it needs to, and the business itself determines when and where that computing occurs. That was the conclusion of a report by the Industrial Internet Consortium (IIC).
Where edge computing happens depends on the volume of data, the security requirements, the purpose of the processing, and the deadlines it must meet.
In addition to its versatility, edge computing also promises to reduce latency, i.e., the time between sending data and receiving a response.
You should not underestimate edge computing: many analysts expect it to become a major part of IT infrastructure in the near future.
Disagreements over the definition of edge computing arise precisely because it can occur almost anywhere; depending on the capabilities you require and the result you are trying to achieve, the placement of the edge differs for everyone.
The Difficulties in Choosing Locations for Edge Installations
Large volumes of data that need processing generally go to a data center, and most applications can tolerate the resulting latency. The real problem is security and privacy, especially when third parties monitor the data or the IoT devices themselves. Many companies simply archive the data because they do not expect analysis to yield a useful result.
In essence, you can do the processing either at the edge or in a data center, but the data-center route consumes a great deal of energy. The concern is that much of this is wasted, because most of the data ends up unused.
Edge Network Infrastructures in the Future
These infrastructures are still largely theoretical. However, some are already looking for the best way to distribute the effort of handling problems that involve large amounts of data.
At the moment, AI workloads are split across two architectural tiers to cope with all this data: training and analysis happen at the core, while inference runs at the edge.
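The core/edge split above can be sketched in a few lines of Python. This is purely an illustrative example, not anything from the IIC report: all function names are invented, and a real deployment would use a proper ML framework at the core and an optimized inference runtime on the device.

```python
# Illustrative sketch of the core/edge split: heavy training at the
# data center, lightweight inference on the edge device.
# All names here are hypothetical.
from statistics import mean, stdev

def train_at_core(history: list[float]) -> dict:
    """Heavy work at the core: fit a simple anomaly model
    (here, just the mean and standard deviation of sensor readings)."""
    return {"mean": mean(history), "stdev": stdev(history)}

def infer_at_edge(model: dict, reading: float, k: float = 3.0) -> bool:
    """Cheap work at the edge: flag a reading as anomalous
    without sending it back to the core."""
    return abs(reading - model["mean"]) > k * model["stdev"]

# The core trains on archived data and ships only the tiny model to the
# edge; raw readings never have to leave the device.
model = train_at_core([20.1, 19.8, 20.3, 20.0, 19.9, 20.2])
print(infer_at_edge(model, 20.1))  # in-range reading -> False
print(infer_at_edge(model, 35.0))  # outlier -> True
```

The design point is that only the small, already-trained model crosses the network, which is exactly why inference can live at the edge while training stays at the core.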
All in all, we are not there yet, but we are approaching the point where client hardware will carry more intelligence and will thus be more useful.
Most of the issues still lie in the future; for now, decisions rest on only three distinct factors: connectivity, latency, and cost.
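A placement decision along those three axes can be sketched as a simple rule. The thresholds and function names below are invented for illustration; they are not from the report, and real systems would weigh these factors far more carefully.

```python
def choose_placement(connected: bool, latency_budget_ms: float,
                     cost_per_gb_core: float, cost_per_gb_edge: float) -> str:
    """Toy decision rule over the three factors named above:
    connectivity, latency, and cost. Thresholds are illustrative."""
    if not connected:
        return "edge"   # no link to the core, so the edge is the only option
    if latency_budget_ms < 50:
        return "edge"   # a round trip to a data center would miss the deadline
    if cost_per_gb_core > cost_per_gb_edge:
        return "edge"   # shipping data to the core costs more than local work
    return "core"

print(choose_placement(True, 200, 0.02, 0.05))  # relaxed case -> "core"
print(choose_placement(True, 10, 0.02, 0.05))   # tight latency -> "edge"
print(choose_placement(False, 200, 0.02, 0.05)) # disconnected -> "edge"
```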
The IIC workgroup approached edge network infrastructure architecture as a genuine engineering problem. That helped them determine that the overall aim should be to meet several operational goals in timing and function, using a given amount of resources across different edge network locations.
In the end, despite the effort, no single definition of edge computing emerged. However, the IIC did manage to define the edge another way: by what it can accomplish for the problems in a particular use case, and what fundamental capabilities it might need. The report concludes that whatever works for each specific situation should be what is used.