Latency-aware Placement of Stream Processing Operators

Raphael Ecker, Vasileios Karagiannis, Michael Sober (presenter), Elmira Ebrahimi, Stefan Schulte

Publication: Contribution in book or conference proceedings › Talk with paper in proceedings › Peer-reviewed



The rise of the Internet of Things and fog computing has substantially increased the number of interconnected devices at the edge of the network. As a result, many computations are now performed in the fog, generating vast amounts of data. To process this data in near real time, stream processing is typically employed due to its efficiency in handling continuous streams of information in a scalable manner. However, most stream processing approaches do not consider the underlying network devices as candidate resources for processing data. Moreover, many existing works do not take into account the network latency incurred by performing computations on multiple devices in a distributed way. Consequently, existing stream processing approaches may not fully exploit the available fog computing resources. To avoid this, we formulate an optimization problem for utilizing the existing fog resources, and we design heuristics for solving this problem efficiently. Furthermore, we integrate our heuristics into Apache Storm, and we perform experiments that show latency-related benefits compared to alternatives.
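The abstract does not detail the placement heuristics themselves, so the following is only an illustrative sketch under assumptions of my own: a simple greedy heuristic that places each operator of a linear stream topology on the fog node with the lowest network latency to the node hosting the preceding operator. The function name, node names, and latency values are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: the abstract does not specify the authors'
# heuristic, so this greedy latency-aware placement is an assumption,
# not their algorithm. Operators of a linear stream topology are placed
# one by one on the fog node minimizing the network latency to the node
# that hosts the previous operator.

def greedy_placement(operators, nodes, latency):
    """latency[a][b]: network latency (e.g. in ms) between nodes a and b."""
    placement = {}
    prev_node = None
    for op in operators:
        if prev_node is None:
            # First operator: pick a deterministic start node.
            best = min(nodes)
        else:
            best = min(nodes, key=lambda n: latency[prev_node][n])
        placement[op] = best
        prev_node = best
    return placement

# Hypothetical example: with zero self-latency, co-locating operators
# on one node avoids all network hops.
lat = {"fog1": {"fog1": 0, "fog2": 12},
       "fog2": {"fog1": 12, "fog2": 0}}
print(greedy_placement(["source", "filter", "sink"], ["fog1", "fog2"], lat))
# → {'source': 'fog1', 'filter': 'fog1', 'sink': 'fog1'}
```

A real solver would also weigh node capacity and processing load, which is presumably part of the optimization problem the paper formulates; this sketch isolates only the latency term.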
Title: International Workshop on Scalable Compute Continuum (WSCC 2023)
Publisher: Springer
Publication status: Accepted/In press - 2023
Event: International Workshop on Scalable Compute Continuum (WSCC 2023) - Limassol, Cyprus
Duration: 29 Aug. 2023 - 29 Aug. 2023


Workshop: International Workshop on Scalable Compute Continuum (WSCC 2023)

Research Field

  • Enabling Digital Technologies

