Sponsor: National Science Foundation – Computer Systems Research
Project Summary: One of the key problems confronting computer system designers is the management and conservation of energy sources. This challenge manifests itself in a number of ways. The goal may be to extend the battery lifetime of a computer system comprising a processor and a number of memory modules, I/O cores, and bridges. This is especially important given that power consumption in a typical portable electronic system is increasing rapidly, whereas the gravimetric energy density of its battery source is improving at a much slower pace. Other goals may be to limit the cooling requirements of a computer system or to reduce the financial burden of operating a large computing facility. The objective of this research is to develop system-wide power optimization algorithms and techniques that eliminate waste or overhead and allow energy-efficient use of the various memory and I/O devices while meeting an overall performance requirement. More precisely, this project tackles two related problems: (i) dynamic voltage and frequency scaling targeting the minimization of the total system energy dissipation, and (ii) global power management in a system comprising modules that are potentially managed by their own local power management policies, yet must closely interact with one another in order to yield maximum system-wide energy efficiency. The broader impacts of this project include the development of energy-aware computer systems as the key to cost-effective realization of a large number of high-performance applications running on battery-powered portable platforms. They also include the education and training of young researchers and engineers to address the complex and intertwined energy-efficiency/performance challenges that arise in designing next-generation information technology products and services.
Flow-Through-Queue based Power Management for Gigabit Ethernet Controller — Computer networking is beginning to support multi-gigabit data transfer rates. In an ASPDAC-07 paper we presented an energy-efficient packet interface architecture and a power management technique for gigabit Ethernet controllers that achieve the low latency and high bandwidth demanded by extremely high frame-rate data. More specifically, we presented a predictive-flow-queue (PFQ) based packet interface architecture that adjusts the operating frequencies of various functional blocks in the system at a fine granularity so as to minimize the total system energy dissipation while meeting the performance constraints. A key feature of the proposed architecture is runtime workload prediction of the network traffic, which continually adjusts the operating frequency and thereby eliminates the delay and energy penalties incurred by transitions between power-saving modes. Furthermore, a modeling approach based on Markov processes and queuing models is employed, which allows one to apply mathematical programming formulations for energy optimization. Experimental results with a 65nm gigabit Ethernet controller design show that the proposed energy-efficient architecture and power management technique achieve system-wide energy savings even under tight performance constraints.
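The frequency-adjustment idea behind the PFQ architecture can be illustrated with a small sketch. Here the arrival-rate predictor is a simple exponentially weighted moving average and the service model (one packet per clock cycle, a fixed set of operating points) is an assumption for illustration; the paper's actual predictor and queuing model are more elaborate.

```python
class PredictiveFlowQueue:
    """Illustrative sketch of a predictive-flow-queue (PFQ) style
    frequency governor: predict the incoming traffic rate, then pick
    the lowest operating frequency that can serve the predicted load
    plus the current backlog. Parameters (alpha, the frequency set,
    one-packet-per-cycle service) are assumptions, not the ASPDAC-07
    model."""

    def __init__(self, freqs, alpha=0.5):
        self.freqs = sorted(freqs)   # available operating points (packets/s)
        self.alpha = alpha           # EWMA smoothing factor
        self.predicted_rate = 0.0    # predicted packet arrival rate
        self.backlog = 0             # packets currently queued

    def observe(self, arrivals, served):
        # Update the traffic prediction and queue occupancy after one
        # observation interval.
        self.predicted_rate = (self.alpha * arrivals
                               + (1 - self.alpha) * self.predicted_rate)
        self.backlog = max(0, self.backlog + arrivals - served)

    def next_frequency(self, interval=1.0):
        # Serve the predicted load plus the existing backlog within the
        # next interval; round up to the nearest available frequency.
        demand = self.predicted_rate + self.backlog / interval
        for f in self.freqs:
            if f >= demand:
                return f
        return self.freqs[-1]
```

Because the frequency is recomputed continually from the predictor rather than toggled between sleep and active modes, the controller avoids mode-transition delay and energy penalties, which is the point the paragraph above makes.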
Dynamic Voltage and Frequency Management Based on Variable Update Intervals or Frequency Setting — In an ICCAD-06 paper, we developed an efficient adaptive method to perform dynamic voltage and frequency management (DVFM) for minimizing the energy consumption of microprocessor chips. Instead of using a fixed update interval, our DVFM system makes use of adaptive update intervals for optimal frequency and voltage scheduling. This enables the system to rapidly track workload changes so as to meet soft real-time deadlines. The method, which is based on the concept of an effective deadline, exploits the correlation between consecutive values of the workload. Because the frequency and voltage update rates are set dynamically based on variable update interval lengths, voltage fluctuations on the power distribution network are also reduced. The technique, which may be implemented with simple hardware and is completely transparent to the application, leads to power savings of up to 60% for highly correlated workloads compared to DVFM systems based on fixed update intervals.
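A minimal sketch of the adaptive-interval idea follows. The policy shown (double the interval while successive workload samples stay within a tolerance band, reset it when they diverge) and all thresholds are illustrative assumptions, not the ICCAD-06 algorithm; they are only meant to show why correlated workloads allow infrequent updates.

```python
class AdaptiveIntervalDVFS:
    """Sketch of DVFS with variable update intervals. Assumed policy:
    when the workload is stable (highly correlated samples), the update
    interval grows toward t_max, so fewer voltage/frequency transitions
    occur; when the workload shifts, the interval snaps back to t_min
    to track the change. Thresholds are illustrative."""

    def __init__(self, t_min=1.0, t_max=16.0, tol=0.1):
        self.t_min, self.t_max, self.tol = t_min, t_max, tol
        self.interval = t_min
        self.last_workload = None

    def update(self, workload):
        # workload: predicted CPU utilization in [0, 1] for the next slot.
        # Returns (normalized frequency, length of the next interval).
        if self.last_workload is not None:
            change = abs(workload - self.last_workload) / max(self.last_workload, 1e-9)
            if change < self.tol:
                # Stable, correlated workload: update less often.
                self.interval = min(self.interval * 2, self.t_max)
            else:
                # Workload shifted: shorten the interval to track it.
                self.interval = self.t_min
        self.last_workload = workload
        freq = min(1.0, workload)   # scale frequency to the predicted load
        return freq, self.interval
```

Fewer updates for correlated workloads also mean fewer step changes in supply current, which is the source of the reduced power-network voltage fluctuation noted above.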
Power-Aware Scheduling and Voltage Setting for Tasks Running on a Hard Real-Time System — In an ASPDAC-06 paper, we presented a solution to the problem of minimizing energy consumption of a computer system performing periodic hard real-time tasks with precedence constraints. In the proposed approach, dynamic power management and voltage scaling techniques are combined to reduce the energy consumption of the CPU and devices. The optimization problem is initially formulated as an integer programming problem. Next, a three-phase heuristic solution, which integrates power management, task scheduling and task voltage assignment, is provided. Experimental results show that the proposed approach outperforms existing methods by an average of 18% in terms of the system-wide energy savings.
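To give a flavor of the voltage/frequency-assignment phase, the sketch below computes the classic lower bound for the much simpler case of independent periodic tasks under EDF: the lowest constant speed meeting all deadlines equals the total utilization. The ASPDAC-06 work addresses the harder setting with precedence constraints, device power management, and discrete voltage levels, which this simplified illustration does not model.

```python
def min_frequency_for_deadlines(tasks):
    """Illustrative sketch, not the ASPDAC-06 heuristic: for independent
    periodic tasks scheduled by EDF, the minimum constant processor
    speed that meets every deadline is the total utilization.

    tasks: list of (cycles_per_period, period) pairs, with cycles
    expressed at the maximum frequency; the result is a frequency
    normalized to a maximum of 1.0."""
    utilization = sum(cycles / period for cycles, period in tasks)
    # Running any slower than the total utilization necessarily misses
    # a deadline; running faster only adds idle time (and energy).
    return min(1.0, utilization)
```

The three-phase heuristic described above refines this kind of bound: it first schedules tasks (respecting precedence), then distributes the remaining slack as per-task voltage assignments, and interleaves device power management decisions with both.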
Hierarchical Power Management with Application to Scheduling — In an ISLPED-05 paper, we presented a hierarchical power management (HPM) architecture which aims to facilitate power-awareness in an energy-managed computer (EMC) system with multiple self-power-managed components. The proposed architecture divides the PM function into two layers: system-level and component-level. Although the system-level PM has detailed information about the global state of the EMC and its various computational and memory resources, it cannot directly control the power management policies of the constituent components, which are typically designed and manufactured by different IC vendors. Instead, the system-level PM resorts to adaptive service request flow regulation and online application scheduling to force the component-level PMs to function in a way that minimizes the total system energy dissipation while meeting an overall performance target. Preliminary experimental results show that HPM achieves a 25% reduction in the total system energy compared to the “best” component-level PM policies.
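One way the system-level PM can regulate request flow without touching a component's internal policy is to batch requests so that the component sees longer idle periods and its local policy can enter a sleep state. The sketch below assumes a simple batch-size threshold with a latency bound; both parameters are illustrative, not the ISLPED-05 regulation scheme.

```python
class RequestBatcher:
    """Sketch of system-level service-request flow regulation: requests
    destined for a self-power-managed component are buffered and
    released in bursts, lengthening the component's idle intervals so
    its local PM policy can sleep. batch_size and max_wait are
    illustrative parameters trading energy against latency."""

    def __init__(self, batch_size=8, max_wait=5.0):
        self.batch_size = batch_size
        self.max_wait = max_wait    # latency bound on the oldest request
        self.pending = []           # (arrival_time, request) pairs

    def submit(self, now, request):
        # Buffer the request; return the batch to forward, if any.
        self.pending.append((now, request))
        return self._maybe_release(now)

    def _maybe_release(self, now):
        # Release when the batch is full or the oldest buffered request
        # would otherwise violate the performance (latency) constraint.
        if (len(self.pending) >= self.batch_size
                or now - self.pending[0][0] >= self.max_wait):
            batch = [req for _, req in self.pending]
            self.pending = []
            return batch
        return []
```

The key property is that the component's own PM remains a black box: the system level only shapes the request stream it observes, which is why this approach works across components from different vendors.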
Dynamic Voltage and Frequency Scaling for Energy-Efficient System Design — This talk, which was given at NSTU, Taiwan in 2005, summarizes the results of our research in the area of dynamic voltage and frequency scaling (DVFS). More precisely, the first part of this talk describes an intra-process DVFS technique targeted toward non-real-time applications running on an embedded system platform. The key idea is to make use of runtime information about the external memory access statistics in order to perform CPU voltage and frequency scaling with the goal of minimizing the energy consumption while tightly controlling the performance penalty. The proposed DVFS technique relies on dynamically constructed regression models that allow the CPU to calculate the expected workload and slack time for the next time slot, and thus adjust its voltage and frequency in order to save energy while meeting soft timing constraints. This is in turn achieved by estimating and exploiting the ratio of the total off-chip access time to the total on-chip computation time. The proposed technique has been implemented on an XScale-based embedded system platform, and actual energy savings have been calculated by current measurements in hardware. The second part of this talk describes a DVFS technique that minimizes the total system energy consumption for performing a task while satisfying a given execution time constraint. We first show that, in order to guarantee minimum energy for task execution by using DVFS, it is essential to divide the system power into fixed, idle, and active components. Next, we present a new DVFS technique which considers not only the active power but also the idle and fixed power components of the system. This is in sharp contrast to previous DVFS techniques, which only consider the active power component. The fixed plus idle components of the system power are measured by monitoring the system power when it is idle.
The active component of the system power is estimated at run time by a technique known as workload decomposition, whereby the workload of a task is decomposed into on-chip and off-chip parts based on statistics reported by a performance monitoring unit (PMU). We have implemented the proposed DVFS technique on the BitsyX platform, an Intel PXA255-based system manufactured by ADS Inc., and performed detailed energy measurements.
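The system-level frequency selection described above can be sketched as follows. The workload is split into an on-chip part (in cycles, which scales with CPU frequency) and an off-chip part (in seconds, frequency-independent), and the total energy over the deadline, including the fixed and idle components, is evaluated at each candidate operating point. All names and the cubic active-power model in the usage example are illustrative assumptions, not the measured BitsyX/PXA255 parameters.

```python
def best_frequency(freqs, w_on, t_off, deadline, p_fixed, p_idle, active_power):
    """Sketch of system-level DVFS with workload decomposition.

    freqs:        candidate frequencies (normalized, e.g. 0.5 = half speed)
    w_on:         on-chip workload, in cycles at normalized frequency 1.0
    t_off:        off-chip (memory/I/O) time in seconds, frequency-independent
    deadline:     execution time constraint in seconds
    p_fixed:      always-on system power (measured while idle, with p_idle)
    p_idle:       CPU idle power
    active_power: caller-supplied model mapping frequency to active power

    Returns the minimum-total-energy frequency that meets the deadline,
    or None if no candidate does."""
    best_f, best_e = None, float("inf")
    for f in freqs:
        t_run = w_on / f + t_off          # only the on-chip part scales
        if t_run > deadline:
            continue                      # this frequency misses the deadline
        energy = (active_power(f) * t_run         # active component
                  + p_idle * (deadline - t_run)   # idle until the deadline
                  + p_fixed * deadline)           # fixed (always-on) component
        if energy < best_e:
            best_f, best_e = f, energy
    return best_f
```

Because the fixed and idle terms grow with the time spent, accounting for them can shift the optimum away from the "run as slowly as the deadline allows" answer that active-power-only DVFS produces, which is the contrast the paragraph above draws.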