Geographically distributed data centers contain a huge number of servers. Our goal is to minimize power cost by deciding how to distribute the workload from the front-end servers to the back-end clusters, how many servers to activate at each back-end cluster at any given time, and how to set the service rates (or CPU power levels) of the back-end servers. Our proposed solution exploits temporal and spatial variations in the workload arrival process (at the front-end servers) and in the power prices (at the back-end clusters) to reduce power cost. It also facilitates a power cost versus delay trade-off, allowing data center operators to reduce power cost at the expense of increased service delay. Our work is therefore suited to delay-tolerant workloads such as massively parallel cloud computing and data-intensive MapReduce jobs. We propose a two-time-scale control algorithm that reduces power cost and facilitates this trade-off in geographically distributed data centers. The performance and robustness of the approach are illustrated through simulations.
STOCHASTIC POWER REDUCTION SCHEME BASED POWER COST REDUCTION IN DISTRIBUTED DATA CENTERS (ieee projects 2014)
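The core idea above (exploiting spatial price variation by shifting load toward cheaper back-end clusters, then activating only as many servers as the assigned load requires) can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the greedy price-ordered routing, the function names, and all parameters are assumptions introduced here for clarity, and the sketch omits the two-time-scale structure, queueing dynamics, and the delay trade-off control.

```python
import math

def route_workload(arrivals, prices, capacity):
    """Greedily assign total arriving load to the cheapest clusters first.

    arrivals  -- total workload to distribute (illustrative units)
    prices    -- power price at each back-end cluster
    capacity  -- maximum load each cluster can absorb
    """
    remaining = arrivals
    assignment = [0.0] * len(prices)
    # Visit clusters in order of increasing power price.
    for i in sorted(range(len(prices)), key=lambda i: prices[i]):
        take = min(remaining, capacity[i])
        assignment[i] = take
        remaining -= take
    return assignment

def servers_needed(load, rate_per_server):
    """Fewest active servers that cover the load assigned to a cluster."""
    return math.ceil(load / rate_per_server)
```

For example, routing 100 units across three clusters with prices `[0.12, 0.08, 0.10]` and capacities `[50, 60, 40]` fills the cheapest cluster (index 1) first, then the next cheapest (index 2), leaving the most expensive cluster idle. A real controller would additionally weigh the resulting queueing delay against the price savings, which is the trade-off the abstract describes.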