Listing by keyword "latency"
1 - 2 of 2
- Conference paper
  DispLagBox: simple and replicable high-precision measurements of display latency
  (Mensch und Computer 2020 - Tagungsband, 2020)
  Stadler, Patrick; Schmid, Andreas; Wimmer, Raphael
  The latency of a computing system affects users' performance. One important component of end-to-end latency is display lag: the time required to turn framebuffer contents into photons emitted by a computer screen. However, there is no well-documented and widely available method for measuring display lag. Thus, the effect of display lag is rarely considered in scientific studies and system development. We developed DispLagBox, a simple open-source device for measuring display lag. It supports the International Display Measurements Standard but also offers additional metrics for characterizing display lag with a resolution of 0.1 ms. The device, based on a Raspberry Pi computer, measures the time between VSYNC and a change in brightness on the connected display. Repeated measurements can be conducted automatically, so that not only average latency but also latency distributions can be reported for each device. For most displays we tested, DispLagBox reports latencies that are close to those reported by a commercial black-box measurement device. Typically, the difference is 1-3 ms. (See the measurement sketch after this listing.)
- Conference paper
  Predicting Scaling Efficiency of Distributed Stream Processing Systems via Task Level Performance Simulation
  (Softwaretechnik-Trends Band 43, Heft 1, 2023)
  Rank, Johannes; Barnert, Maximilian; Hein, Andreas; Krcmar, Helmut
  Stream processing systems (SPS) are a special class of Big Data systems that firms employ in (near) real-time business scenarios. They ensure low-latency processing through a high degree of parallelization and elasticity. However, firms often do not know which scaling direction (horizontal, vertical, or mixed) is the best strategy in terms of CPU performance for scaling those systems. Especially in cloud deployments with a pay-per-use model and cluster sizes that can span dozens of cores and machines, firms would profit from more accurate measurement-based approaches. In this paper, we show how to predict the CPU consumption of Apache Flink for different scaling scenarios using the Palladio Component Model. Our approach models the individual streaming tasks that make up the application and parametrizes them with fine-grained CPU metrics obtained by combining BPF profiling and querying the CPU's performance measurement unit. Through this "task-level model approach", we can achieve highly accurate predictions despite using a simple model and requiring only a few measurements for parametrization. Our experiment also shows that we achieve more accurate results than an alternative approach based on regression analysis. (See the prediction sketch after this listing.)
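
The DispLagBox entry describes measuring the time between VSYNC and a brightness change on the connected display and reporting the resulting latency distribution. The following Python sketch is not the DispLagBox code; it only illustrates that measurement principle under stated assumptions: a hypothetical light-sensor comparator on GPIO pin 17 that reads HIGH while the probed screen region is bright, and a platform-specific VSYNC wait that is left as a stub because the listing does not say how DispLagBox obtains it.

```python
# Illustrative sketch only; not the DispLagBox implementation.
# Assumptions (hypothetical): a light-sensor comparator wired to GPIO pin 17
# reads HIGH while the probed screen region is bright, and wait_for_vsync()
# blocks until the next vertical sync of the Pi's video output.
import statistics
import time

import RPi.GPIO as GPIO  # standard Raspberry Pi GPIO library

SENSOR_PIN = 17  # hypothetical pin for the light-sensor signal


def wait_for_vsync():
    """Placeholder for a platform-specific VSYNC wait (e.g. DRM/KMS events)."""
    raise NotImplementedError("provide a VSYNC wait for your platform")


def measure_once(timeout_s=0.5):
    """Return the delay in ms between VSYNC and the sensor seeing light."""
    wait_for_vsync()                 # frame with the bright patch starts scanning out
    t_vsync = time.monotonic_ns()
    deadline = t_vsync + int(timeout_s * 1e9)
    while time.monotonic_ns() < deadline:
        if GPIO.input(SENSOR_PIN):   # brightness change arrived at the sensor
            return (time.monotonic_ns() - t_vsync) / 1e6
    return None                      # nothing detected within the timeout


def main(samples=100):
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SENSOR_PIN, GPIO.IN)
    delays = [d for d in (measure_once() for _ in range(samples)) if d is not None]
    if delays:  # report the distribution, not just the average
        print(f"median {statistics.median(delays):.1f} ms, "
              f"min {min(delays):.1f} ms, max {max(delays):.1f} ms")
```

`main()` is deliberately not auto-invoked; it would be called once a VSYNC wait for the target platform has been supplied in place of the stub.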
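
The second entry predicts CPU consumption from per-task CPU demands obtained by profiling. The sketch below is not the paper's Palladio Component Model; it is only a simplified analytical illustration of the task-level idea, with made-up task names and per-record CPU demands standing in for values that would come from BPF profiling and CPU counters.

```python
# Simplified illustration of the task-level idea; not the paper's Palladio model.
# The per-record CPU demands below are made-up stand-ins for values that would
# be measured per streaming task (e.g. via BPF profiling and CPU counters).
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    cpu_ms_per_record: float  # measured CPU demand per processed record


def predict_utilization(tasks, records_per_sec, nodes, cores_per_node):
    """Predict average CPU utilization for a deployment, ignoring overheads."""
    demanded = sum(t.cpu_ms_per_record for t in tasks) * records_per_sec  # ms CPU / s
    available = nodes * cores_per_node * 1000.0                           # ms CPU / s
    return demanded / available


# Hypothetical three-task pipeline in the style of a Flink job.
pipeline = [
    Task("source", 0.02),
    Task("window-aggregate", 0.15),
    Task("sink", 0.03),
]

# Compare two deployments at 50,000 records/s.
for label, nodes, cores in [("vertical: 1 x 16 cores", 1, 16),
                            ("horizontal: 4 x 4 cores", 4, 4)]:
    u = predict_utilization(pipeline, records_per_sec=50_000,
                            nodes=nodes, cores_per_node=cores)
    print(f"{label}: predicted CPU utilization {u:.0%}")
```

In this naive form both deployments yield the same prediction because the model ignores any overhead that differs between scaling directions; capturing such differences through fine-grained, measurement-based task models is exactly what the abstract above describes.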