Latency-aware placement of stream processing operators in modern-day stream processing frameworks

Raphael Ecker, Vasileios Karagiannis, Michael Sober, Stefan Schulte

Publication: Contribution to journal › Article › peer-reviewed

Abstract

The rise of the Internet of Things has substantially increased the number of interconnected devices at the edge of the network. As a result, a large number of computations are now distributed across the compute continuum, spanning from the edge to the cloud and generating vast amounts of data. Stream processing is typically employed to process this data in near real-time, since it handles continuous streams of information efficiently and scalably. However, many stream processing approaches do not consider the underlying network devices of the compute continuum as candidate resources for processing data. Moreover, many existing works do not account for the network latency incurred when computations are performed on multiple devices in a distributed way. To address this, we formulate an optimization problem for utilizing the complete resources of the compute continuum and design heuristics to solve this problem efficiently. Furthermore, we integrate our heuristics into Apache Storm and perform experiments that show latency- and throughput-related benefits compared to alternatives.
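
As a purely illustrative sketch (not the paper's actual formulation), latency-aware operator placement can be expressed as an assignment problem. The symbols below are assumptions introduced here for illustration: let x_{o,d} ∈ {0,1} indicate that operator o is placed on device d of the compute continuum, c_o the resource demand of o, C_d the capacity of d, l_{d,d'} the network latency between devices d and d', and E the set of data-flow edges between operators. Minimizing the accumulated network latency along the data flow while respecting device capacities then reads:

\min \sum_{(o,p) \in E} \sum_{d} \sum_{d'} x_{o,d} \, x_{p,d'} \, l_{d,d'}
\text{s.t.} \quad \sum_{d} x_{o,d} = 1 \ \forall o, \qquad \sum_{o} c_o \, x_{o,d} \le C_d \ \forall d, \qquad x_{o,d} \in \{0,1\}.

Such quadratic assignment formulations are NP-hard in general, which is why heuristic solvers are typically used for placement decisions at runtime.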
Original language: English
Article number: 105041
Pages (from - to): 1-16
Number of pages: 16
Journal: Journal of Parallel and Distributed Computing
Volume: 199
DOIs
Publication status: Published - May 2025

Research Field

  • Sustainable & Resilient Society

