
ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS)

Latest Articles

The Economics of the Cloud

This article proposes a model to study the interaction of price competition and congestion in the cloud computing marketplace. Specifically, we propose a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either... (more)

Resource Auto-Scaling and Sparse Content Replication for Video Storage Systems

Many video-on-demand (VoD) providers are relying on public cloud providers for video storage, access, and streaming services. In this article, we... (more)

Behavioral Model of IEEE 802.15.4 Beacon-Enabled Mode Based on Colored Petri Net

The IEEE 802.15.4 standard is widely employed in power-constrained scenarios, such as Wireless Sensor Network deployments. Therefore, modeling this... (more)


About TOMPECS

ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS) is a new ACM journal that publishes refereed articles on all aspects of the modeling, analysis, and performance evaluation of computing and communication systems.

The target areas for the application of these performance evaluation methodologies are broad, and include traditional areas such as computer networks, computer systems, storage systems, telecommunication networks, and Web-based systems, as well as new areas such as data centers, green computing/communications, energy grid networks, and on-line social networks.

Issues of the journal will be published on a quarterly basis, appearing both in print form and in the ACM Digital Library. The first issue will likely be released in late 2015 or early 2016.

Forthcoming Articles
Mean-Field Analysis of Coding versus Replication in Large Data Storage Systems

We study cloud-storage systems with a very large number of files stored in a very large number of servers. In such systems, files are either replicated or coded to ensure reliability, i.e., to guarantee file recovery from server failures. This redundancy in storage can further be exploited to improve system performance (mean file access delay) through appropriate load-balancing (routing) schemes. However, it is unclear whether coding or replication is better from a system performance perspective, since the corresponding queueing analysis of such systems is, in general, quite difficult except for the trivial case when the system load asymptotically tends to zero. Here, we study the more difficult case where the system load is not asymptotically zero. Using the fact that the system size is large, we obtain a mean-field limit for the steady-state distribution of the number of file access requests waiting at each server. We then use the mean-field limit to show that, for a given storage capacity per file, coding strictly outperforms replication at all traffic loads while improving reliability. Further, the factor by which performance improves in the heavy-traffic case is at least as large as in the light-traffic case. Finally, we validate these results through extensive simulations.
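The coding-versus-replication comparison lends itself to a quick numerical experiment. The sketch below (Python) is only illustrative and is not the article's mean-field analysis: it approximates each server's backlog with a simple workload recursion, routes each request (or its k coded sub-requests) to the least-loaded servers holding the file, and reports the mean access delay. The server count, arrival rate, and the (n, k) storage choices are assumed values.

```python
# Illustrative Monte Carlo sketch (not the article's mean-field analysis): compare
# mean file-access delay under 2-way replication versus (n=4, k=2) MDS-style coding
# with the same storage budget, using least-loaded routing. Parameters are assumptions.
import random

def simulate(coding, n_servers=100, n_files=200, n_requests=50_000,
             arrival_rate=60.0, seed=1):
    random.seed(seed)
    if coding:
        n, k, chunk_mean = 4, 2, 0.5   # four half-size coded chunks; any two suffice
    else:
        n, k, chunk_mean = 2, 1, 1.0   # two full replicas; any one suffices
    placement = [random.sample(range(n_servers), n) for _ in range(n_files)]
    workload = [0.0] * n_servers       # unfinished work (seconds) queued at each server
    total_delay = 0.0
    for _ in range(n_requests):
        gap = random.expovariate(arrival_rate)          # Poisson arrivals
        for s in range(n_servers):                      # work drains at unit rate
            workload[s] = max(0.0, workload[s] - gap)
        servers = placement[random.randrange(n_files)]
        chosen = sorted(servers, key=lambda s: workload[s])[:k]  # least-loaded routing
        finish = 0.0
        for s in chosen:
            workload[s] += random.expovariate(1.0 / chunk_mean)  # add chunk service
            finish = max(finish, workload[s])           # fork-join: wait for the slowest
        total_delay += finish
    return total_delay / n_requests

print("replication mean delay:", round(simulate(coding=False), 3))
print("coding      mean delay:", round(simulate(coding=True), 3))
```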

List of Reviewers: Issue 2:4

Disk Prefetching Mechanisms for Increasing HTTP Streaming Video Server Throughput

Most video streaming traffic is delivered over HTTP using standard web servers. While traditional web server workloads consist of requests that are primarily for small files that can be serviced from the file system cache, HTTP video streaming workloads often service a long tail of large, infrequently requested videos. As a result, optimizing disk accesses is critical to obtaining good server throughput. In this paper, we explore serialized, aggressive disk prefetching, a technique that can be used to improve the throughput of HTTP streaming video web servers. We identify how serialization and aggressive prefetching affect performance and, based on our findings, we construct and evaluate Libception, an application-level shim library that implements both techniques. By dynamically linking against Libception at runtime, applications are able to transparently obtain benefits from serialization and aggressive prefetching without needing to change their source code. In contrast to other approaches that modify applications, make kernel changes, or attempt to optimize kernel tuning, Libception provides a portable and relatively simple system in which techniques for optimizing I/O in HTTP video streaming servers can be implemented and evaluated. We empirically evaluate the efficacy of serialization and aggressive prefetching both with and without Libception, using three web servers (Apache, nginx, and the userver) running on two operating systems (FreeBSD and Linux). We find that, by using Libception, we can improve streaming throughput for all three web servers by at least a factor of 2 on FreeBSD and a factor of 2.5 on Linux. Additionally, we find that with significant tuning of Linux kernel parameters, we can achieve similar performance to Libception by globally modifying Linux's disk prefetch behaviour. Finally, we demonstrate Libception's potential utility for improving the performance of other workloads by using it to reduce the completion time for a microbenchmark involving two applications competing for disk resources.
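As a companion to the abstract, here is a minimal sketch of the two techniques themselves, serialization and aggressive prefetching, expressed with standard POSIX file-advice calls in Python. It is not Libception's actual interface (Libception is a shim library linked into the server); the window size and example path are assumptions for illustration, and the fadvise hint is only available on platforms that expose posix_fadvise.

```python
# Sketch of serialized, aggressive prefetching using POSIX fadvise hints. This is an
# illustration of the two techniques, not Libception's API; the prefetch window size
# and example path are assumed values. Requires a platform with os.posix_fadvise.
import os
import threading

PREFETCH_BYTES = 8 * 1024 * 1024       # assumed aggressive read-ahead window (8 MiB)
_disk_lock = threading.Lock()          # serialize disk requests across all streams

def read_chunk(fd, offset, length):
    """Serve one streaming read: serialize the disk access and prefetch far ahead."""
    with _disk_lock:                   # serialization: one in-flight disk request
        # aggressive prefetch: hint the kernel to read a large window beyond offset
        os.posix_fadvise(fd, offset, PREFETCH_BYTES, os.POSIX_FADV_WILLNEED)
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, length)

# usage sketch (hypothetical file): each video stream requests its next segment
# fd = os.open("/var/videos/clip.mp4", os.O_RDONLY)
# data = read_chunk(fd, offset=0, length=256 * 1024)
```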

An Experimental Performance Evaluation of Autoscalers for Complex Workflows

Elasticity is one of the main features of cloud computing, allowing customers to scale their resources based on the workload. Many autoscalers have been proposed in the past decade to decide, on behalf of cloud customers, when and how to provision resources to a cloud application based on the workload, utilizing cloud elasticity features. However, in prior work, when a new policy is proposed, it is seldom compared to the state-of-the-art, and is often compared only to static provisioning using a predefined QoS target. This reduces the ability of cloud customers and of cloud operators to choose and deploy an autoscaling policy, as there is seldom enough analysis of the performance of the autoscalers in different operating conditions and with different applications. In our work, we conduct an experimental performance evaluation of autoscaling policies, using workflows as the application model; workflows are a commonly used formalism for automating resource management for applications with well-defined yet complex structures. We present a detailed comparative study of general state-of-the-art autoscaling policies, along with two new workflow-specific policies. To understand the performance differences between the seven policies, we conduct various forms of pairwise and group comparisons. We report both individual and aggregated metrics. As many workflows have deadline requirements on the tasks, we study the effect of autoscaling on workflow deadlines. Additionally, we look into the effect of autoscaling on the accounted and hourly-based charged costs, and we evaluate the performance variability caused by the autoscaler selection for each group of workflow sizes. Our results highlight the trade-offs between the suggested policies, how they can impact meeting the deadlines, and how they perform in different operating conditions, thus enabling a better understanding of the current state-of-the-art.
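For readers unfamiliar with the class of general policies such studies compare, the sketch below shows a plain reactive, threshold-based autoscaler in Python. It is not one of the seven policies evaluated in the article; the thresholds, step size, and bounds are assumed values.

```python
# Minimal reactive, threshold-based autoscaling rule of the general kind such
# evaluations include as a baseline; thresholds and step sizes are assumed values,
# and this is not one of the specific policies studied in the article.
def autoscale(current_vms, utilization, min_vms=1, max_vms=50,
              upper=0.8, lower=0.3, step=2):
    """Return the new VM count given the current average utilization in [0, 1]."""
    if utilization > upper:
        return min(max_vms, current_vms + step)   # scale out under pressure
    if utilization < lower:
        return max(min_vms, current_vms - 1)      # scale in conservatively
    return current_vms                            # otherwise hold steady

# a monitoring loop would call this once per evaluation interval
print(autoscale(current_vms=4, utilization=0.92))  # -> 6
```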

An Empirical Analysis of Amazon EC2 Spot Instance Features Affecting Cost-effective Resource Procurement

Many cost-conscious public cloud workloads (tenants) are turning to Amazon EC2's spot instances because, on average, these instances offer significantly lower prices (up to 10 times lower) than on-demand and reserved instances of comparable advertised resource capacities. To use spot instances effectively, a tenant must carefully weigh the lower costs of these instances against their poorer availability. Towards this, we empirically study four features of EC2 spot instance operation that a cost-conscious tenant may find useful to model. Using extensive evaluation based on both historical and current spot instance data, we show shortcomings in the state-of-the-art modeling of these features that we overcome. Our analysis reveals many novel properties of spot instance operation, some of which offer predictive value while others do not. Using these insights, we design predictors for our features that offer a balance between computational efficiency (allowing for online resource procurement) and cost-efficacy. We explore case studies wherein we implement prototypes of dynamic spot instance procurement advised by our predictors for two types of workloads. Compared to the state-of-the-art, our approach achieves (i) comparable cost but much better performance (fewer bid failures) for a latency-sensitive in-memory Memcached cache, and (ii) an additional 18% cost savings with comparable (if not better) performance for a delay-tolerant batch workload.
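To make the cost-versus-availability trade-off concrete, the snippet below sketches the simplest kind of history-based bid rule, an empirical price quantile. It is not one of the article's predictors; the price trace and the target availability level are assumed values.

```python
# Hedged sketch of a naive quantile-based bid rule built from historical spot prices;
# it only illustrates the cost/availability trade-off and is not the article's design.
def suggest_bid(price_history, target_availability=0.95):
    """Bid at the empirical quantile of past prices matching the availability target."""
    prices = sorted(price_history)
    idx = int(target_availability * (len(prices) - 1))
    return prices[idx]

history = [0.031, 0.029, 0.035, 0.030, 0.047, 0.032, 0.033, 0.090, 0.031, 0.030]
print(suggest_bid(history))   # -> 0.047, roughly the 95th percentile of the trace
```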

Access-Time-Aware Cache Algorithms

Most caching algorithms are oblivious to the requests' timescale, but caching systems are capacity constrained and, in practical cases, the hit rate may be limited by the cache's inability to serve requests fast enough. In particular, the hard-disk access time can be the key factor capping cache performance. In this paper, we present a new cache replacement policy that takes advantage of a hierarchical caching architecture, and in particular of the access-time difference between memory and disk. Our policy is optimal when requests follow the independent reference model, and it significantly reduces the hard-disk load, as shown also by our realistic, trace-driven evaluation. Moreover, we show that our policy can be applied in a more general context, since it can easily be adapted to minimize any retrieval cost, as long as costs are additive over cache misses.
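A small sketch of what "cost-aware" eviction means in this setting: the cache keeps the objects whose expected retrieval cost saved (estimated request rate times disk access cost) is largest. This is a generic greedy rule for illustration, not the article's provably optimal policy; the rates and costs below are assumed values.

```python
# Illustrative cost-aware cache: eviction removes the item whose expected retrieval
# cost saved (estimated request rate times disk access cost) is lowest. A generic
# greedy rule, not the article's optimal policy; all values below are assumptions.
class CostAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}                      # key -> (est_request_rate, disk_cost)

    def admit(self, key, est_request_rate, disk_cost):
        self.items[key] = (est_request_rate, disk_cost)
        if len(self.items) > self.capacity:
            # evict the item contributing least to expected cost savings
            victim = min(self.items, key=lambda k: self.items[k][0] * self.items[k][1])
            del self.items[victim]

cache = CostAwareCache(capacity=2)
cache.admit("a", est_request_rate=5.0, disk_cost=0.01)   # popular, cheap to refetch
cache.admit("b", est_request_rate=0.5, disk_cost=0.20)   # rare, expensive to refetch
cache.admit("c", est_request_rate=2.0, disk_cost=0.05)
print(sorted(cache.items))                               # 'b' and 'c' survive
```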

Bargaining Game Based Scheduling for Performance Guarantees in Cloud Computing

In this paper, we focus on request scheduling with performance guarantees for all users in cloud computing. Each cloud user submits requests with an average response time requirement, and the cloud provider tries to find a scheduling scheme, i.e., an allocation of user requests to limited servers, such that the average response times of all cloud users can be guaranteed. We formulate the considered scenario as a cooperative game among multiple users and try to find a Nash bargaining solution (NBS), which can simultaneously satisfy all users' performance demands. We first prove the existence of the NBS and then analyze its computation. Specifically, for the situation when all allocated substreams are strictly positive, we propose a computational algorithm (CA), which can find the NBS very efficiently. For the more general case, we propose an iterative algorithm (IA) based on duality theory. The convergence of the proposed IA algorithm is also analyzed. Finally, we conduct numerical calculations. The experimental results show that the IA algorithm can find an appropriate scheduling strategy and converges to a stable state very quickly.
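For readers unfamiliar with the Nash bargaining solution, a generic formulation of such a scheduling problem, in hypothetical notation that may differ from the article's, is:

\[
\max_{x}\ \prod_{i=1}^{n}\bigl(D_i - T_i(x)\bigr)
\quad\text{s.t.}\quad T_i(x)\le D_i\ \ \forall i,\qquad \sum_{i} x_{ij}\le C_j\ \ \forall j,
\]

where \(x_{ij}\) is the request rate of user \(i\) allocated to server \(j\), \(T_i(x)\) the resulting average response time, \(D_i\) user \(i\)'s response-time requirement (the disagreement point), and \(C_j\) the capacity of server \(j\). Taking logarithms gives the equivalent concave program \(\max_x \sum_i \log(D_i - T_i(x))\), which is the form that duality-based iterative schemes typically exploit.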

What You Lose When You Snooze: How Duty Cycling Impacts on the Contact Process in Opportunistic Networks

In opportunistic networks, putting devices in energy saving mode is crucial to preserving their battery, and hence to increasing the lifetime of the network and fostering user participation. A popular strategy for energy saving is duty cycling. However, when in energy saving mode, users cannot communicate with each other. The side effects of duty cycling are twofold. On the one hand, duty cycling may reduce the number of usable contacts for delivering messages, increasing intercontact times and delays. On the other hand, duty cycling may break long contacts into smaller contacts, thus also reducing the capacity of the opportunistic network. Despite these potentially serious effects, the role played by duty cycling in opportunistic networks has often been neglected in the literature. In order to fill this gap, in this paper we propose a general model for deriving the pairwise contact and intercontact times measured when a duty cycling policy is superimposed on the original encounter process determined only by node mobility. The model we propose is general, i.e., not bound to a specific distribution of contact and intercontact times, and very accurate, as we show by exploiting two traces of real human mobility for validation. Using this model, we derive several interesting results about the properties of measured contact and intercontact times under duty cycling: their distribution, how their coefficient of variation changes depending on the duty cycle value, and how duty cycling affects the capacity and delay of an opportunistic network. The applicability of these results is broad, ranging from performance models for opportunistic networks that factor in the duty cycling effect, to the optimisation of the duty cycle to meet a certain target performance.
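The mechanism in the abstract, a duty cycle superimposed on the true contact process, is easy to reproduce numerically. The toy Monte Carlo below (Python) generates synthetic contacts and keeps only their overlap with the ON part of a periodic duty cycle; the exponential contact model, period, and duty-cycle value are assumptions for illustration, whereas the article's model is analytical and not tied to a specific distribution.

```python
# Toy Monte Carlo of duty cycling superimposed on a synthetic contact process:
# only the overlap of a true contact with ON periods is measurable. The exponential
# model, period, and duty-cycle value are assumed here purely for illustration.
import random

def detected_intervals(start, end, period=100.0, duty=0.2):
    """Intersect a true contact [start, end) with the ON part of each cycle."""
    out = []
    t = (start // period) * period
    while t < end:
        on_start, on_end = t, t + duty * period
        lo, hi = max(start, on_start), min(end, on_end)
        if lo < hi:
            out.append((lo, hi))
        t += period
    return out

random.seed(0)
true_durations, seen_durations, seen_contacts = [], [], 0
t = 0.0
for _ in range(10_000):
    t += random.expovariate(1 / 500.0)            # intercontact time, mean 500 s
    duration = random.expovariate(1 / 60.0)       # contact duration, mean 60 s
    true_durations.append(duration)
    pieces = detected_intervals(t, t + duration)
    seen_contacts += len(pieces)
    seen_durations += [hi - lo for lo, hi in pieces]
    t += duration

print("true contacts:", len(true_durations), " detected contacts:", seen_contacts)
print("mean true duration:", sum(true_durations) / len(true_durations))
print("mean detected duration:", sum(seen_durations) / max(1, len(seen_durations)))
```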

Selecting the Top-Quality Item Through Crowd Scoring

We investigate crowdsourcing algorithms for finding the top-quality item within a large collection of objects with unknown intrinsic quality values. This is an important problem with many relevant applications, for example in networked recommendation systems. At the core of these algorithms, objects are distributed to crowd workers, who return noisy and biased evaluations. All received evaluations are then combined to identify the top-quality object. We first present a simple probabilistic model for the system under investigation. Then, we devise and study a class of efficient adaptive algorithms to assign objects to workers in an effective way. We compare the performance of several algorithms, which correspond to different choices of the design parameters/metrics. Through simulations, we show that some of the algorithms achieve near-optimal performance for a suitable setting of the system parameters.
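The sketch below illustrates one adaptive allocation strategy of the general kind the abstract discusses: noisy worker scores are averaged and the weaker half of the surviving objects is dropped each round, so the worker budget concentrates on the contenders. The Gaussian noise model and all parameters are assumptions, not the article's design choices.

```python
# Illustrative adaptive allocation: average noisy worker scores, then eliminate the
# weaker half of the surviving objects each round. A generic sketch under an assumed
# Gaussian noise model; it is not one of the algorithms studied in the article.
import random

def select_top(qualities, scores_per_round=5, noise_std=1.0, seed=0):
    random.seed(seed)
    alive = list(range(len(qualities)))
    totals = [0.0] * len(qualities)
    counts = [0] * len(qualities)
    while len(alive) > 1:
        for obj in alive:
            for _ in range(scores_per_round):         # ask several workers per object
                totals[obj] += qualities[obj] + random.gauss(0.0, noise_std)
                counts[obj] += 1
        alive.sort(key=lambda o: totals[o] / counts[o], reverse=True)
        alive = alive[: max(1, len(alive) // 2)]      # drop the weaker half
    return alive[0]

qualities = [random.uniform(0, 10) for _ in range(32)]
best = select_top(qualities)
print("selected:", best, " index of true best:", qualities.index(max(qualities)))
```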

