Cloud Native perspective in Edge Computing — Edge-to-cloud

Abhisht Joshi
3 min read · Jun 1, 2021



Edge computing is a frequently used term today in both the networking and cloud industries. At the core of its proposition, it offers faster computation with low latency (turnaround time) for any connected device.

To deep-dive into the edge-to-cloud narrative, we need to understand some basic terms.

  • IoT device — Any connected device capable of transmitting relevant, actionable information over the internet.
  • Edge/Access Networking Device — A networking switch or router used to establish the first line of connection between an IoT device and the internet.
  • Aggregator Networking Device — A networking switch or router which, as the name suggests, aggregates information from multiple edge routers in an n-to-1 (edge-to-aggregator) mapping.
  • Core Networking Device — A switch or router used to establish network connectivity within a data center; it is also referred to as a data center switch. These devices are the backbone of your data center connectivity and also establish onward connections to public/private clouds.

From a high-level view, connecting the above device types creates an edge-to-cloud framework through which data and traffic flow. Simply put, information from IoT devices follows the path below:

IoT Devices → Edge Routers → Aggregator Routers → Core Switches/Data Center → Cloud Providers

As a general rule of thumb, moving up this chain, computation capability increases, but latency also increases (slower response times).
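The rule of thumb above can be sketched as a toy placement policy: pick the closest (lowest-latency) tier that still has enough compute for the workload. The tier names follow the flow above, but the latency and capacity figures are purely illustrative assumptions, not measurements.

```python
# Illustrative sketch only: the latency/compute numbers are made up to
# show the edge-to-cloud trade-off, not taken from any real deployment.
TIERS = [
    # (tier name, round-trip latency in ms, relative compute capacity)
    ("edge router", 5, 1),
    ("aggregator", 20, 10),
    ("core/data center", 50, 100),
    ("public cloud", 120, 1000),
]

def place_workload(required_compute, latency_budget_ms):
    """Return the first (closest) tier that satisfies both the compute
    requirement and the latency budget, or None if nothing fits."""
    for name, latency_ms, capacity in TIERS:
        if capacity >= required_compute and latency_ms <= latency_budget_ms:
            return name
    return None

# A light, latency-sensitive task lands at the edge...
print(place_workload(required_compute=1, latency_budget_ms=10))
# ...while a heavy batch job gets pushed up to the cloud.
print(place_workload(required_compute=500, latency_budget_ms=200))
```

This is the essence of edge computing: workloads are placed as close to the device as their compute needs allow, rather than defaulting everything to a distant centralized cloud.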

So what is Edge Computing?

Edge computing is the process of performing a major portion of computation on edge devices in order to support faster response times. Traditional cloud computing, by contrast, performs massive processing and computation on a centralized, highly scalable cloud.

How will Edge computing impact the modern consumer?

Consider the example of driverless cars. A machine learning algorithm performs real-time analysis and computation on dynamic data gathered from connected autonomous cars in the periphery, and finally takes autonomous decisions to steer you from the office to home. These operations require a significant amount of real-time computation and millisecond-level latency/response times. To support them, your 5G- or Wi-Fi 6-enabled car will be connected to a network device, i.e., an edge router, which operates like a mini data center or mini cloud with the intent to provide quick, actionable insights and decisions.

Similar applications can be found on the factory shop floor with automated machinery, such as Amazon's fully automated distribution centers, where robotic gear acts as the backbone of the robust last-mile delivery that Amazon Prime offers as its unique selling proposition.

This new shift to computing at the edge has created massive decentralized, distributed IT compute instances, all at the edge. To talk about decentralization, it is imperative to touch upon microservice architectures.
In the next article in this two-part series, we'll talk about Cloud Native architecture in the much-touted edge-to-cloud rally.


