This paper considers a stochastic optimization approach to job scheduling and server management in large-scale, geographically distributed data centers. We propose a two-time-scale control algorithm aimed at reducing power cost and facilitating a power cost versus delay trade-off. Our goal is to minimize power cost by deciding (i) how to distribute the workload from the front-end servers to the back-end clusters, (ii) how many servers to activate at each back-end cluster at any given time, and (iii) how to set the service rates (or CPU power levels) of the back-end servers. By extending the traditional Lyapunov optimization approach, which operates on a single time scale, to two different time scales, we derive analytical bounds on the time-average power cost and service delay achieved by our algorithm. The performance and robustness of the approach are illustrated through simulations.
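The two-time-scale structure described above can be sketched as follows. This is a minimal illustrative simulation, not the paper's SAVE algorithm itself: the cluster counts, power costs, trade-off parameter `V`, and the simple backlog-based activation rule are all assumptions chosen to show how server activation (slow time scale) and job routing/serving (fast time scale) can be separated in a drift-plus-penalty style controller.

```python
import random

random.seed(0)

T_SLOW = 10                          # slow time scale: activation decided every T_SLOW slots
N_CLUSTERS = 3                       # hypothetical number of back-end clusters
MAX_SERVERS = [20, 15, 10]           # assumed server capacity per cluster
POWER_PER_SERVER = [1.0, 1.2, 0.8]   # assumed relative power cost per active server
V = 5.0                              # trade-off parameter: larger V favors power savings over delay

queues = [0.0] * N_CLUSTERS          # back-end queue backlogs (proxy for delay)
active = [1] * N_CLUSTERS            # servers currently active per cluster

def slow_decision():
    """Slow time scale: activate roughly enough servers to drain the backlog,
    weighing queue length against V-scaled power cost (a heuristic stand-in
    for the drift-plus-penalty minimization)."""
    for i in range(N_CLUSTERS):
        wanted = round(queues[i] / (V * POWER_PER_SERVER[i]))
        active[i] = min(MAX_SERVERS[i], max(1, wanted))

def fast_decision(arrivals):
    """Fast time scale: route arriving jobs to the least-backlogged cluster,
    then let each active server serve one job per slot."""
    target = min(range(N_CLUSTERS), key=lambda i: queues[i])
    queues[target] += arrivals
    for i in range(N_CLUSTERS):
        queues[i] = max(0.0, queues[i] - active[i])

for t in range(200):
    if t % T_SLOW == 0:
        slow_decision()
    fast_decision(arrivals=random.randint(5, 25))

power_cost = sum(a * p for a, p in zip(active, POWER_PER_SERVER))
print("total backlog:", round(sum(queues), 1), "| power cost:", round(power_cost, 2))
```

Raising `V` here activates fewer servers for the same backlog, lowering power cost at the price of longer queues (hence delay), which mirrors the power cost versus delay trade-off the abstract describes.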
REDUCING POWER COST USING SAVE ALGORITHM IN DISTRIBUTED DATA CENTERS (ieee projects 2014)