Due to the proliferation of edge computing, cloud providers have started offering compute nodes at the edge of the network in addition to the traditional compute nodes in their data centers. Various systems have been proposed for processing Internet of Things (IoT) data on both edge and cloud compute nodes in order to reduce communication latency. However, such systems typically do not consider that the network bandwidth between an edge node and a cloud node can be orders of magnitude higher than the bandwidth between an IoT device and a cloud node. As a result, IoT data are commonly sent directly to either edge or cloud nodes, disregarding alternative network paths through edge nodes that may offer higher network bandwidth and lower communication latency. To address this, in this paper we analyze the latency of sending data to the edge and cloud compute nodes of cloud providers. Based on this analysis, we propose edgeRouting, which routes data through the closest edge compute node. In this way, edgeRouting exploits both the low propagation delay of nodes at the edge and the high bandwidth between the edge and cloud compute nodes of cloud providers. To evaluate our approach, we perform experiments on a real-world setup with nearby and remote compute nodes of a cloud provider, and we show that edgeRouting reduces the communication latency by up to 55% compared to alternative methods.
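The intuition above, that relaying data through a nearby edge node can outperform a direct device-to-cloud connection, can be sketched with a toy latency model. All numbers below (payload size, bandwidths, propagation delays) are illustrative assumptions, not measurements or results from the paper; the key assumption is that the achievable throughput on the long device-to-cloud path is much lower than on the short device-to-edge hop and on the provider's edge-to-cloud backbone.

```python
# Toy one-way latency model: propagation delay plus serialization time.
# All constants are illustrative assumptions, not values from the paper.

def transfer_latency_ms(payload_mb: float, bandwidth_mbps: float,
                        propagation_ms: float) -> float:
    """One-way latency: propagation delay + time to push the payload."""
    return propagation_ms + (payload_mb * 8.0 / bandwidth_mbps) * 1000.0

payload_mb = 10.0  # assumed IoT payload size

# Direct path: IoT device -> remote cloud node. Long paths typically
# sustain low throughput (assumed 20 Mbps, 50 ms propagation delay).
direct = transfer_latency_ms(payload_mb, bandwidth_mbps=20.0,
                             propagation_ms=50.0)

# Edge-routed path: device -> nearby edge node (short hop, assumed
# 100 Mbps, 5 ms), then edge -> cloud over the provider's
# high-bandwidth backbone (assumed 1000 Mbps, 45 ms).
to_edge = transfer_latency_ms(payload_mb, bandwidth_mbps=100.0,
                              propagation_ms=5.0)
edge_to_cloud = transfer_latency_ms(payload_mb, bandwidth_mbps=1000.0,
                                    propagation_ms=45.0)
edge_routed = to_edge + edge_to_cloud

print(f"direct: {direct:.0f} ms, via edge: {edge_routed:.0f} ms")
# → direct: 4050 ms, via edge: 930 ms
```

Under these assumed numbers the serialization time on the slow direct path dominates, so the edge-routed path wins even though it traverses two hops; the 55% improvement reported in the paper comes from its own real-world measurements, not from this sketch.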
- Enabling Digital Technologies