
ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS)


About TOMPECS

ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS) is a new ACM journal that publishes refereed articles on all aspects of the modeling, analysis, and performance evaluation of computing and communication systems.

The target areas for the application of these performance evaluation methodologies are broad, and include traditional areas such as computer networks, computer systems, storage systems, telecommunication networks, and Web-based systems, as well as new areas such as data centers, green computing/communications, energy grid networks, and on-line social networks.

Issues of the journal will be published on a quarterly basis, appearing both in print form and in the ACM Digital Library. The first issue will likely be released in late 2015 or early 2016.

Forthcoming Articles
Ensuring Persistent Content in Opportunistic Networks via Stochastic Stability Analysis

The emerging device-to-device communication solutions and the abundance of mobile applications and services make opportunistic networking not only a feasible solution but also an important component of future wireless networks. Specifically, the distribution of locally relevant content could be based on the community of mobile users visiting an area, if long-term content survival can be ensured this way. In this paper we establish the conditions for content survival in such opportunistic networks, considering user mobility patterns and treating the time users keep forwarding the content as the controllable system parameter. We demonstrate that a tractable epidemic model adequately characterizes the content-spreading process, and derive the user contribution necessary to ensure content survival. We show that the required contribution from the users depends significantly on the size of the population, that users need to redistribute content only during a short period within their stay, and that they can decrease their contribution significantly in crowded areas. Hence, with appropriate control of the system parameters, opportunistic content sharing can be both reliable and sustainable.
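The epidemic characterization described above can be illustrated with a minimal SIS-style sketch. The contact rate `beta` and forwarding-stop rate `mu` below are hypothetical stand-ins, not the paper's actual model or parameters:

```python
# Minimal SIS-style sketch of opportunistic content spreading.
# beta: rate at which carriers pass content on meeting non-carriers;
# mu: rate at which users stop forwarding. Both are illustrative.

def carrier_fraction(beta, mu, i0=0.01, dt=0.01, steps=100_000):
    """Euler-integrate di/dt = beta*i*(1-i) - mu*i and return the
    final fraction of users carrying the content."""
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1.0 - i) - mu * i)
        i = max(i, 0.0)
    return i

# In this toy model, content survives (endemic equilibrium
# i* = 1 - mu/beta) iff beta > mu, i.e. users forward long enough.
surviving = carrier_fraction(beta=0.5, mu=0.2)  # settles near 0.6
dying = carrier_fraction(beta=0.2, mu=0.5)      # decays to 0
```

The survival threshold `beta > mu` plays the role of the "necessary user contribution" in the abstract: raising the forwarding time lowers `mu` and pushes the system above the threshold.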

Scale-out vs Scale-up: A Study of ARM-based SoCs on Server-class workloads

ARM 64-bit processing has generated enthusiasm for developing ARM-based servers targeted at both data centers and supercomputers. In addition to server-class components and hardware advancements, the ARM software environment has grown substantially over the past decade. Major development ecosystems and libraries have been ported and optimized to run on ARM, making it suitable for server-class workloads. There are two trends in the ARM SoCs available on the market: mobile-class ARM SoCs rely on heterogeneous integration of a mix of CPU cores, GPGPU streaming multiprocessors (SMs), and other accelerators, whereas server-class SoCs instead integrate a larger number of CPU cores with no GPGPU support and a number of IO accelerators. For scaling the number of processing cores, there are two different paradigms: mobile-class SoCs use a scale-out architecture in the form of a cluster of simpler systems connected over the network, whereas server-class ARM SoCs use a scale-up solution, leveraging symmetric multiprocessing (SMP) to pack a large number of cores onto the chip. In this work, we present the ScaleSoC cluster, a scale-out solution based on mobile-class ARM SoCs. ScaleSoC leverages fast network connectivity and GPGPU acceleration to improve performance and energy efficiency compared to previous ARM clusters. We consider a wide range of modern server-class parallel workloads, including latency-sensitive transactional workloads, MPI-based CPU and GPGPU-accelerated scientific applications, and emerging artificial intelligence workloads. We study in depth the performance and energy efficiency of ScaleSoC compared to server-class ARM SoCs and discrete GPGPUs for each type of server-class workload.
We quantify the network overhead on the performance of ScaleSoC and show that packing a large number of ARM cores on a single chip does not necessarily guarantee better performance, because shared resources such as the last-level cache become the performance bottleneck. We characterize the GPGPU-accelerated workloads and demonstrate that, for applications that can leverage the better CPU-GPGPU balance of the ScaleSoC cluster, both performance and energy efficiency improve compared to discrete GPGPUs. We also analyze the scalability and performance limitations of the proposed ScaleSoC cluster.

Mean Field Games in Nudge Systems for Societal Networks

We consider the general problem of resource sharing in societal networks, consisting of interconnected communication, transportation, energy, and other networks important to the functioning of society. Participants in such networks need to make decisions daily, both on the quantity of resources to use and on the periods of usage. With this in mind, we discuss the problem of incentivizing users to behave in such a way that society as a whole benefits. To achieve societal-level impact, such incentives may take the form of rewarding users with lottery tickets based on good behavior, and periodically conducting a lottery to translate these tickets into real rewards. We pose the user decision problem as a mean field game (MFG), and the incentive question as one of selecting a good mean field equilibrium (MFE). In this framework, each agent (a participant in the societal network) makes a decision based on an assumed distribution of the actions of his/her competitors and the incentives provided by the social planner. The system is said to be at an MFE if the agent's action is a sample drawn from the assumed distribution. We show the existence of such an MFE under different settings, and also illustrate how to choose an attractive equilibrium, using demand response in energy networks as an example.
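The MFE consistency condition described above can be sketched as a fixed-point iteration in a toy two-period setting. The cost functions, the `reward` parameter, and the logit best response below are illustrative assumptions, not the paper's model:

```python
# Toy mean-field-equilibrium sketch: agents split usage between a
# peak and an off-peak period. The cost of a period grows with the
# fraction of the population using it; an incentive 'reward'
# subsidizes off-peak usage. All functional forms are illustrative.

import math

def best_response(p_peak, reward, temperature=0.1):
    """Logit best response: probability an agent picks the peak
    period, given the assumed fraction p_peak of others doing so."""
    cost_peak = 1.0 + p_peak       # congestion cost rises with usage
    cost_off = 1.2 - reward        # base inconvenience minus incentive
    e_peak = math.exp(-cost_peak / temperature)
    e_off = math.exp(-cost_off / temperature)
    return e_peak / (e_peak + e_off)

def mfe(reward, p0=0.5, iters=500, damping=0.5):
    """At an MFE the assumed action distribution equals the
    best-response distribution; iterate (with damping) to find it."""
    p = p0
    for _ in range(iters):
        p = (1 - damping) * p + damping * best_response(p, reward)
    return p

low_incentive = mfe(reward=0.0)   # weak incentive: more peak usage
high_incentive = mfe(reward=0.5)  # stronger incentive shifts agents off-peak
```

The social planner's "equilibrium selection" problem in the abstract corresponds here to choosing `reward` so that the resulting fixed point has an attractive peak-usage level.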

System and Architecture Level Characterization of Big Data Applications on Big and Little Core Server Architectures

The rapid growth of data poses challenges for processing it efficiently on current high-performance server architectures such as big Xeon cores. Furthermore, physical design constraints, such as power and density, have become the dominant limiting factor for scaling out servers. Low-power embedded cores in servers, such as little Atom cores, have emerged as a promising solution to enhance energy efficiency and address these challenges. The question of whether to process big data applications on big Xeon-based or little Atom-based servers therefore becomes important. In this work, through methodical power and performance measurements and comprehensive application-level, system-level, and microarchitectural analysis, we characterize dominant big data applications on big Xeon-based and little Atom-based server architectures. The characterization results across a wide range of real-world big data applications and various software stacks demonstrate how the choice of a big- vs. little-core-based server for energy efficiency is significantly influenced by the size of the data, performance constraints, and the presence of accelerators. In addition, we analyze the resource utilization of this important class of applications, such as memory footprint, CPU utilization, and disk bandwidth, to understand their run-time behavior. Furthermore, we perform microarchitecture-level analysis to highlight where improvement is needed in big and little core microarchitectures to address their performance bottlenecks.

QMLE: A Methodology for Statistical Inference of Service Demands from Queueing Data

Estimating the demands placed by services on physical resources is an essential step in the definition of performance models. For example, scalability analysis relies on these parameters to predict queueing delays under increasing loads. In this paper, we investigate maximum likelihood (ML) estimators for demands at load-independent and load-dependent resources in systems with parallelism constraints. We define a likelihood function based on state measurements and derive necessary conditions for its maximization. We then obtain novel estimators that accurately and inexpensively recover service demands using only aggregate state data. With our approach, confidence intervals can be rigorously derived, explicitly taking into account both the topology and the concurrency levels of the services. Our estimators and their confidence intervals are validated against simulations and real system measurements for two multi-tier applications, showing high accuracy even in the presence of load-dependent resources.
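As a hedged illustration of ML demand estimation from queueing state data (not the paper's QMLE method, which handles parallelism constraints and load dependence), consider the textbook M/M/1 case, where queue-length samples follow P(N = n) = (1 - rho) * rho**n and setting the score to zero gives a closed-form estimator:

```python
# MLE of utilization from queue-length samples of an M/M/1 queue.
# Log-likelihood: K*log(1 - rho) + sum(n_i)*log(rho); its maximizer
# is rho_hat = nbar / (1 + nbar), where nbar is the sample mean.
# The M/M/1 setting is an illustrative simplification.

def mle_utilization(samples):
    """Closed-form ML estimate of rho from iid queue-length samples."""
    nbar = sum(samples) / len(samples)
    return nbar / (1.0 + nbar)

def demand_estimate(samples, arrival_rate):
    """Service demand D = rho / lambda (time units per job)."""
    return mle_utilization(samples) / arrival_rate

# Example: queue-length samples with mean 3 imply rho_hat = 0.75;
# with lambda = 1.5 jobs/s the estimated demand is 0.5 s per job.
rho_hat = mle_utilization([1, 2, 3, 4, 5])
d_hat = demand_estimate([1, 2, 3, 4, 5], arrival_rate=1.5)
```

This is the sense in which "aggregate state data" suffices in the simple case: only the sample mean of the queue length enters the estimator, not the individual observations.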

