Depending on your perspective, CES either feels like it took place eons ago or it’s fresh in your memory. Personally, it seems pretty recent. I’m still seeing reviews of the show in the media. I haven’t finished logging all of my CES meetings in Salesforce. And, we’ve got an extensive, survey-based report coming up that, while not informed by CES, bears witness to a lot of what we saw there.
So, what’s the report about? Drones? Autonomous transport? Robots? The future of self-cleaning cat toilets? Nope, none of the above. It’s about Edge computing. That’s right, call it what you will: Edge Cloud, Distributed Compute, Distributed Edge Cloud. There was plenty of Edge-related activity going on in Las Vegas earlier this month. It might not always have been positioned as such, but it was there if you were looking.
If you’re not familiar with the Distributed Edge Cloud concept, it’s fairly straightforward and very powerful. At its simplest, it’s about siting compute and applications (including network functions) closer to the user. Doing so positions it as a way to deliver low latency communications where a specific use case requires them. And where edge nodes are able to host workloads from various players, the concept opens up opportunities to expose applications in the way public cloud players do, all while lowering backhaul burdens. Of course, it’s also positioned as a space where operators and public cloud players will battle to deliver value to the enterprise.
So, what did we learn in Vegas?
The Edge comes in many shapes and sizes
If you compare U2’s lead guitarist (The Edge) across the releases of The Joshua Tree, Achtung Baby and Songs of Experience, you’ll see he’s changed over the years, but we always know who he is.
The same can’t be said for Distributed Edge Cloud. Talking to people (operators and vendors) for our report, it was clear that everyone had a different definition of where the edge of the network sits. Within an operator’s network. In the enterprise. In a user device. This definitional tension is about more than just semantics: it raises issues of ownership and monetisation (in other words, who will benefit). It was also front and centre at CES, with lots of different vendors positioning themselves as edge players, whether that means delivering home gateways, IoT gateways or high-end phones.
My favourite: an instantiation of Amazon’s Greengrass (extending AWS to edge devices) on a robot, putting the “mobile” into mobile edge computing.
It’s not just about nodes
Given its presence in smartphones, IoT and computing devices of all sorts, Arm was omnipresent at CES. But, rather than connect around something sexy like drones, wearables or artificial intelligence (AI), I took time to catch up with the company about Fog Computing. While sometimes used interchangeably with Edge Computing, the two are not the same.
As former OpenFog Consortium chairman Helder Antunes put it: “Fog computing is an end-to-end horizontal architecture that distributes computing, storage, control, and networking functions closer to users along the cloud-to-thing continuum.”
Key here is the concept of an end-to-end “architecture.” We can quibble over the differences between Edge and Fog, but there’s an important reminder here that placing compute closer to users involves more than just nodes. It requires sites for those nodes, applications to run on them and management systems to get those applications deployed. Again, this is more than just semantics: different participants in the edge ecosystem will deliver different components. Where our study, for example, saw operators deploying the majority of nodes, it suggested webscale companies would deploy the majority of workloads.
It’s not just about enterprise
Part of the massive buzz around edge computing is the potential it holds for helping operators (and others) enable the digital enterprise: nearly 45 per cent of the operators we surveyed saw the enterprise sector as generating the most value from distributed edge clouds. CES, however, highlighted a clear role for supporting consumers. To be fair, commonly cited use cases like AR/VR, connected car, and mobile gaming all imply a consumer component. The same, however, holds for home IoT gateways, which do more than facilitate sensor connectivity. None of this is a revelation, but with a tight focus on the enterprise (and new operator revenues), it’s important to recall the consumer value proposition.
It’s not just about latency
The top business driver for Edge computing per our survey of operators and vendors? Application latency.
On a scale of one to five (with five being “extremely important”), the overall rating was 4.2. This aligns well with a focus on edge computing support for use cases including critical communications and AR/VR, and it makes sense when considering the second most-cited business driver, user experience.
The problem? Latency often captures all of the attention, distracting from everything else we want to accomplish with edge computing. Operators, for example, will be looking for more than an improved user experience thanks to latency improvements. They’ll be looking for transport efficiencies and, potentially, regulatory compliance around how and where data is handled. And, on the user experience front, low latency is only one part of the story. Think about all of the low-power IoT devices launched at CES: if they are forced to do lots of processing locally, battery life and application performance will both be compromised. But if that processing can be pushed up to (offloaded to) an edge node, app performance and battery life should benefit.
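For readers who think in code, the offload trade-off described above can be sketched in a few lines. This is a purely hypothetical illustration, not any vendor’s API: the function name, thresholds and inputs are all assumptions made for the sake of the example.

```python
# Hypothetical sketch: a low-power IoT device deciding whether to run a
# workload on-device or offload it to a nearby edge node. All names and
# thresholds are illustrative assumptions, not a real device API.

def choose_execution_site(workload_ops, battery_pct, edge_rtt_ms,
                          local_ops_budget=1_000_000, max_rtt_ms=20):
    """Return 'edge' when offloading likely preserves battery and
    responsiveness, otherwise 'local'."""
    heavy = workload_ops > local_ops_budget      # too costly on-device
    low_battery = battery_pct < 30               # protect battery life
    edge_is_close = edge_rtt_ms <= max_rtt_ms    # edge node is nearby
    if (heavy or low_battery) and edge_is_close:
        return "edge"
    return "local"

# A heavy vision workload on a half-charged device near an edge node:
print(choose_execution_site(5_000_000, 50, 8))   # -> edge
# A light sensor reading far from any edge node stays on-device:
print(choose_execution_site(10_000, 80, 120))    # -> local
```

The point of the sketch is simply that latency (the round-trip time to the edge node) is only one input to the decision; workload size and battery state matter just as much.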
It’s not just about operators
For many people, the distributed edge cloud concept is inextricably linked to operators: mobile operators, in particular.
Long before 5G networks got trialled, we talked about Mobile Edge Compute, which was born from early efforts at integrating compute with RAN platforms. That evolved into something more holistic (multi-access edge compute), but the mobile and operator bias remains. 5G, for example, dominated as the chosen “most relevant” access technology in our survey, with Wi-Fi and fixed options bringing up the rear. Meanwhile, almost none of the operators we surveyed expect webscale players to generate the greatest economic value from edge compute.
Cue Baidu: in tandem with CES, the Chinese behemoth announced OpenEdge, an open-source edge compute platform, highlighting edge as a critical component of its AI, Big Data, and Cloud strategy.
Whether or not we need another edge platform (open source or otherwise) isn’t the point. That cloud players are actively targeting the edge and putting development (and outreach) efforts behind it is.
– Peter Jarich, Head of GSMA Intelligence
The editorial views expressed in this article are solely those of the author and do not necessarily reflect the views of the GSMA, its Members or Associate Members.