
ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS)

Latest Articles

The Economics of the Cloud

This article proposes a model to study the interaction of price competition and congestion in the cloud computing marketplace. Specifically, we propose a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either...

Behavioral Model of IEEE 802.15.4 Beacon-Enabled Mode Based on Colored Petri Net

The IEEE 802.15.4 standard is widely employed in power-constrained scenarios, such as Wireless Sensor Network deployments. Therefore, modeling this...


About TOMPECS

ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS) is a new ACM journal that publishes refereed articles on all aspects of the modeling, analysis, and performance evaluation of computing and communication systems.

The target areas for the application of these performance evaluation methodologies are broad, and include traditional areas such as computer networks, computer systems, storage systems, telecommunication networks, and Web-based systems, as well as new areas such as data centers, green computing/communications, energy grid networks, and on-line social networks.

Issues of the journal will be published on a quarterly basis, appearing both in print form and in the ACM Digital Library. The first issue will likely be released in late 2015 or early 2016.

Forthcoming Articles
Access-time aware cache algorithms

Most caching algorithms are oblivious to the timescale of requests, but caching systems are capacity constrained and, in practice, the hit rate may be limited by the cache's inability to serve requests quickly enough. In particular, hard-disk access time can be the key factor capping cache performance. In this paper, we present a new cache replacement policy that takes advantage of a hierarchical caching architecture, and in particular of the access-time difference between memory and disk. Our policy is optimal when requests follow the independent reference model, and it significantly reduces the hard-disk load, as our realistic, trace-driven evaluation also shows. Moreover, our policy applies in a more general context, since it can easily be adapted to minimize any retrieval cost, as long as costs are additive over cache misses.
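The intuition behind access-time-aware caching can be illustrated with a toy sketch (our own illustration, not the paper's policy): under the independent reference model, with uniform item sizes and a uniform per-miss disk cost, the expected disk load is minimized by keeping the most popular items in the fast memory layer.

```python
def memory_placement(popularity, mem_slots):
    """Greedy placement: keep the mem_slots most popular items in memory.

    popularity maps item -> request probability (IRM assumption).
    """
    order = sorted(popularity, key=popularity.get, reverse=True)
    return set(order[:mem_slots])

def disk_load(popularity, in_memory):
    """Fraction of requests that must be served from disk under IRM."""
    return sum(p for item, p in popularity.items() if item not in in_memory)

# Toy example: three items, room for two in memory.
pop = {"a": 0.5, "b": 0.3, "c": 0.2}
placed = memory_placement(pop, 2)      # {"a", "b"}
load = disk_load(pop, placed)          # only "c" hits the disk
```

With non-uniform retrieval costs, the same greedy rule would rank items by popularity times per-miss cost, which is the sense in which such a policy generalizes to any additive retrieval cost.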

Bargaining Game Based Scheduling for Performance Guarantees in Cloud Computing

In this paper, we focus on request scheduling with performance guarantees for all users in cloud computing. Each cloud user submits requests with an average response time requirement, and the cloud provider tries to find a scheduling scheme, i.e., an allocation of user requests to limited servers, such that the average response times of all cloud users can be guaranteed. We formulate this scenario as a cooperative game among multiple users and seek a Nash bargaining solution (NBS), which can simultaneously satisfy all users' performance demands. We first prove the existence of the NBS and then analyze its computation. Specifically, for the case in which all allocated substreams are strictly positive, we propose a computational algorithm (CA), which finds the NBS very efficiently. For the more general case, we propose an iterative algorithm (IA) based on duality theory. We also analyze the convergence of the proposed IA algorithm. Finally, we conduct numerical experiments. The results show that our IA algorithm finds an appropriate scheduling strategy and converges to a stable state very quickly.
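The Nash bargaining idea can be seen on a toy instance (our own illustration, much simpler than the paper's CA/IA algorithms): with linear utilities u_i(x_i) = x_i, disagreement points d_i, and a single capacity constraint sum x_i = C, maximizing the Nash product prod_i (x_i - d_i) splits the surplus C - sum d_i equally among the users.

```python
def nbs_allocation(disagreement, capacity):
    """Nash bargaining over divisible capacity with linear utilities.

    Maximizes prod_i (x_i - d_i) subject to sum_i x_i = capacity;
    by symmetry of the Nash product, the surplus is split equally.
    """
    n = len(disagreement)
    surplus = (capacity - sum(disagreement)) / n
    return [d + surplus for d in disagreement]

# Two users with disagreement points 1 and 2 sharing capacity 9:
# the surplus 9 - 3 = 6 is split equally, giving allocations 4 and 5.
alloc = nbs_allocation([1.0, 2.0], 9.0)
```

The paper's setting is richer (response-time constraints, multiple servers, substream allocations), but the NBS retains this fairness flavor: every user improves over their disagreement point, and the product of gains is maximized.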

What you lose when you snooze: how duty cycling impacts the contact process in opportunistic networks

In opportunistic networks, putting devices in energy saving mode is crucial to preserve their battery, and hence to increase the lifetime of the network and foster user participation. A popular strategy for energy saving is duty cycling. However, when in energy saving mode, users cannot communicate with each other. The side effects of duty cycling are twofold. On the one hand, duty cycling may reduce the number of usable contacts for delivering messages, increasing intercontact times and delays. On the other hand, duty cycling may break long contacts into smaller contacts, thus also reducing the capacity of the opportunistic network. Despite these potentially serious effects, the role played by duty cycling in opportunistic networks has often been neglected in the literature. To fill this gap, in this paper we propose a general model for deriving the pairwise contact and intercontact times measured when a duty cycling policy is superimposed on the original encounter process determined only by node mobility. The model we propose is general, i.e., not bound to a specific distribution of contact and intercontact times, and very accurate, as we show by exploiting two traces of real human mobility for validation. Using this model, we derive several interesting results about the properties of measured contact and intercontact times under duty cycling: their distribution, how their coefficient of variation changes depending on the duty cycle value, and how duty cycling affects the capacity and delay of an opportunistic network. The applicability of these results is broad, ranging from performance models for opportunistic networks that factor in the duty cycling effect, to the optimisation of the duty cycle to meet a certain target performance.
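The two side effects described above can be reproduced with a minimal sketch (our own illustration, assuming a simple deterministic duty cycle, not the paper's general model): intersecting each real contact interval with the periodic "on" windows of the radio shows how a long contact is split into several shorter measured contacts, while a short contact falling entirely in an "off" window is lost.

```python
import math

def measured_contacts(contacts, period, on_fraction):
    """Intersect real contact intervals with periodic 'on' windows.

    The radio is on during [k*period, k*period + on_fraction*period)
    for k = 0, 1, 2, ...; only the overlapping parts of each real
    contact (start, end) are observable by the duty-cycled device.
    """
    on_len = on_fraction * period
    measured = []
    for start, end in contacts:
        k = math.floor(start / period)
        while k * period < end:
            win_start = k * period
            win_end = win_start + on_len
            lo, hi = max(start, win_start), min(end, win_end)
            if lo < hi:
                measured.append((lo, hi))
            k += 1
    return measured

# A single real contact lasting 3 time units, with the radio on
# half of each unit-length period: the contact is split into pieces,
# and its first half-unit (falling in an 'off' window) is lost.
pieces = measured_contacts([(0.5, 3.5)], period=1.0, on_fraction=0.5)
```

In this deterministic sketch the measured contact durations shrink and their number grows, which is the mechanism behind the capacity reduction and the change in the coefficient of variation that the abstract mentions.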

