
ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS)

Latest Articles

Disk Prefetching Mechanisms for Increasing HTTP Streaming Video Server Throughput

An Experimental Performance Evaluation of Autoscalers for Complex Workflows

RAPL in Action: Experiences in Using RAPL for Power Measurements

About TOMPECS

ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS) is a new ACM journal that publishes refereed articles on all aspects of the modeling, analysis, and performance evaluation of computing and communication systems.

The target areas for the application of these performance evaluation methodologies are broad, and include traditional areas such as computer networks, computer systems, storage systems, telecommunication networks, and Web-based systems, as well as new areas such as data centers, green computing/communications, energy grid networks, and on-line social networks.

Issues of the journal will be published on a quarterly basis, appearing both in print form and in the ACM Digital Library. The first issue will likely be released in late 2015 or early 2016.

Forthcoming Articles
CloudHeat: An Efficient Online Market Mechanism for Datacenter Heat Harvesting

Datacenters are major energy consumers and dissipate an enormous amount of waste heat. Simply discharging datacenter heat outdoors is energy-consuming and environmentally unfriendly. By harvesting datacenter waste heat and selling it to the district heating system (DHS), both energy cost compensation and environmental protection can be achieved. To realize such benefits in practice, an efficient market mechanism is required to incentivize the participation of datacenters. This work proposes CloudHeat, an online reverse auction mechanism through which the DHS solicits heat bids from datacenters. To minimize the long-term social operational cost of the DHS and the datacenters, we apply a randomized fixed horizon control (RFHC) approach that decomposes the long-term problem into a series of one-round auctions while guaranteeing a small loss in competitive ratio. The one-round optimization is still NP-hard, so we employ a randomized auction framework to simultaneously guarantee truthfulness, polynomial running time, and an approximation ratio of 2. The performance of CloudHeat is validated through theoretical analysis and trace-driven simulation studies.
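
To make the auction setting concrete, here is a minimal, hypothetical sketch of a one-round reverse auction in which the DHS greedily procures heat from the cheapest bids. The bid format, greedy selection rule, and example numbers are assumptions for illustration and are not the CloudHeat mechanism itself (which uses an RFHC decomposition and a randomized auction framework).

```python
# Illustrative sketch only: a simple one-round reverse auction for heat
# procurement. Bid format, greedy selection, and the example values are
# assumptions for exposition, not the CloudHeat mechanism.
from dataclasses import dataclass

@dataclass
class HeatBid:
    datacenter: str
    quantity_kwh: float   # heat the datacenter offers to supply
    ask_price: float      # price per kWh the datacenter asks for

def select_winners(bids, demand_kwh):
    """Greedily pick the cheapest bids until the DHS heat demand is met."""
    winners, remaining = [], demand_kwh
    for bid in sorted(bids, key=lambda b: b.ask_price):
        if remaining <= 0:
            break
        supplied = min(bid.quantity_kwh, remaining)
        winners.append((bid, supplied))
        remaining -= supplied
    return winners, demand_kwh - remaining  # winning bids and total heat procured

if __name__ == "__main__":
    bids = [HeatBid("dc-A", 500, 0.03), HeatBid("dc-B", 800, 0.05),
            HeatBid("dc-C", 300, 0.02)]
    winners, procured = select_winners(bids, demand_kwh=1000)
    for bid, supplied in winners:
        print(f"{bid.datacenter}: {supplied:.0f} kWh at {bid.ask_price}/kWh")
    print(f"total procured: {procured:.0f} kWh")
```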

GPSonflow: Geographic Positioning of Storage for Optimal Nice Flow

This paper evaluates the maximum data flow from a sender to a receiver over the internet when all transmissions are scheduled for early morning hours. The significance of early morning hours is that internet congestion is low while users sleep. When the sender and receiver lie in proximal time zones, a direct transmission from sender to receiver can be scheduled for early morning hours. When the sender and receiver are separated by several time zones, such that their sleep times are non-overlapping, data can still be transmitted during early morning hours using an indirect store-and-forward transfer. The data are transmitted from the sender to intermediate end networks (or data centers) that serve as storage hops en route to the receiver. The storage hops are placed in zones that are time-proximal to the sender or the receiver, so that all transmissions to and from storage hops occur during low-congestion early morning hours. This paper finds the optimal locations and bandwidth distributions of storage hops for maximum nice internet flow from a sender to a receiver.
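
As a rough illustration of the store-and-forward idea, the following sketch models each node's local early-morning window and checks when consecutive hops in an assumed chain of UTC offsets can both be inside their windows at the same time. The 02:00-06:00 window, the hour granularity, and the example chain are assumptions for exposition, not the paper's placement or flow optimization.

```python
# Illustrative sketch only: modelling "early morning" low-congestion windows
# across time zones and checking that every hop-to-hop transfer along a
# store-and-forward chain can be scheduled inside both endpoints' windows.
# The 02:00-06:00 local window and the example chain of offsets are assumed.

def early_morning_hours_utc(utc_offset, start=2, end=6):
    """UTC hours falling inside the local early-morning window [start, end)."""
    return {(h - utc_offset) % 24 for h in range(start, end)}

def link_windows(chain_offsets):
    """For each consecutive pair of nodes, the UTC hours when both nodes are
    in their early-morning window (i.e. when that transfer can be scheduled)."""
    return [
        sorted(early_morning_hours_utc(a) & early_morning_hours_utc(b))
        for a, b in zip(chain_offsets, chain_offsets[1:])
    ]

if __name__ == "__main__":
    # Sender at UTC+9, receiver at UTC-5, with assumed storage hops in between.
    chain = [+9, +6, +3, 0, -3, -5]
    for (a, b), hours in zip(zip(chain, chain[1:]), link_windows(chain)):
        print(f"UTC{a:+d} -> UTC{b:+d}: transferable UTC hours {hours}")
```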

Efficiency and Optimality of Largest Deficit First Prioritization: Dynamic User Prioritization for Soft Real-Time Applications

An increasing number of real-time applications with compute and/or communication deadlines are being supported on shared infrastructure. Such applications can often tolerate occasional deadline violations without substantially impacting their Quality of Service (QoS). A fundamental problem in such systems is deciding how to allocate shared resources so as to meet applications' QoS requirements. A simple framework to address this problem is to (1) dynamically prioritize users as a possibly complex function of their deficits (the difference between achieved and required QoS), and (2) allocate resources so as to expedite users with higher priority. This paper focuses on a general class of systems using such priority-based resource allocation. We first characterize the set of feasible QoS requirements and show the optimality of max-weight-like prioritization. We then consider simple weighted Largest Deficit First (w-LDF) prioritization policies, where users with higher weighted QoS deficits are given higher priority. The paper gives an inner bound for the feasible set under w-LDF policies and, under an additional monotonicity assumption, characterizes its geometry, leading to a sufficient condition for optimality. Additional insights on the efficiency ratio of w-LDF policies, the optimality of hierarchical-LDF, and the characterization of clustering of failures are also discussed.
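
To illustrate the w-LDF idea, here is a minimal sketch of a single-resource scheduler that serves, in each slot, the user with the largest weighted deficit. The particular deficit-counter update (deficits accrue at each user's target service share and shrink when the user is served) is an assumed toy model of QoS deficits, not the paper's system model.

```python
# Illustrative sketch only: a minimal weighted Largest Deficit First (w-LDF)
# scheduler for a single shared resource, under an assumed deficit-counter
# model of QoS.
import random

def w_ldf_schedule(targets, weights, slots=10_000, seed=0):
    """targets[i]: long-run fraction of slots user i should be served.
    weights[i]: per-user weight. Returns achieved service fractions."""
    rng = random.Random(seed)
    n = len(targets)
    deficit = [0.0] * n
    served = [0] * n
    for _ in range(slots):
        # Each slot, every user accrues its target share as new "debt".
        for i in range(n):
            deficit[i] += targets[i]
        # Serve the user with the largest weighted deficit (random tie-break).
        best = max(range(n), key=lambda i: (weights[i] * deficit[i], rng.random()))
        deficit[best] = max(0.0, deficit[best] - 1.0)
        served[best] += 1
    return [s / slots for s in served]

if __name__ == "__main__":
    achieved = w_ldf_schedule(targets=[0.5, 0.3, 0.2], weights=[1.0, 2.0, 1.0])
    print("achieved service fractions:", [round(a, 3) for a in achieved])
```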

Searching for a Single Community in a Graph

In standard graph clustering/community detection, one is interested in partitioning the graph into more densely connected subsets of nodes. In contrast, the search problem of this paper aims to find only the nodes in a single such community, the target, out of the many communities that may exist. To do so, we are given suitable side information about the target; for example, a very small number of nodes from the target are labeled as such. We consider a general yet simple notion of side information: all nodes are assumed to have random weights, with nodes in the target having higher weights on average. Given these weights and the graph, we develop a variant of the method of moments that identifies nodes in the target more reliably, and with lower computation, than generic community detection methods that do not use side information and partition the entire graph. Our empirical results show significant gains in runtime, as well as gains in accuracy, over other graph clustering algorithms.
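
As a toy illustration of combining node weights with graph structure, the sketch below scores each node by its own side-information weight plus the average weight of its neighbours, and returns the top-scoring nodes as the candidate target community. This simple neighbourhood statistic is an assumption for exposition, not the paper's method-of-moments variant.

```python
# Illustrative sketch only: score nodes by their own side-information weight
# plus the average weight of their neighbours, then take the top-k nodes as
# the candidate target community. A toy heuristic, not the paper's method.
def score_nodes(adj, node_weight, alpha=1.0):
    """adj: dict node -> iterable of neighbours; node_weight: dict node -> float.
    Score = own weight + alpha * average neighbour weight."""
    scores = {}
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        nbr_avg = sum(node_weight[u] for u in nbrs) / len(nbrs) if nbrs else 0.0
        scores[v] = node_weight[v] + alpha * nbr_avg
    return scores

def top_k(scores, k):
    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    # Tiny example graph: nodes 0-3 form the (assumed) target community.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [3, 5], 5: [4]}
    weight = {0: 0.9, 1: 0.8, 2: 0.7, 3: 0.9, 4: 0.2, 5: 0.1}  # noisy side info
    print("candidate target nodes:", top_k(score_nodes(adj, weight), k=4))
```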

System and Architecture Level Characterization of Big Data Applications on Big and Little Core Server Architectures

The rapid growth of data poses challenges for processing it efficiently on current high-performance server architectures such as big Xeon cores. Furthermore, physical design constraints, such as power and density, have become the dominant limiting factor for scaling out servers. Low-power embedded cores in servers, such as little Atom cores, have emerged as a promising solution to enhance energy efficiency and address these challenges. Therefore, the question of whether to process big data applications on big Xeon-based or little Atom-based servers becomes important. In this work, through methodical investigation of power and performance measurements, and comprehensive application-level, system-level, and microarchitecture-level analysis, we characterize dominant big data applications on big Xeon-based and little Atom-based server architectures. The characterization results across a wide range of real-world big data applications and various software stacks demonstrate how the choice of big-core vs. little-core servers for energy efficiency is significantly influenced by the size of the data, performance constraints, and the presence of accelerators. In addition, we analyze the processor resource utilization of this important class of applications, including memory footprint, CPU utilization, and disk bandwidth, to understand their run-time behavior. Furthermore, we perform microarchitecture-level analysis to highlight where improvement is needed in big and little core microarchitectures to address their performance bottlenecks.
