As energy consumption has become a major constraint in system design, it is essential to look beyond traditional low-power circuit and architectural optimizations. Moreover, software constitutes an increasing portion of embedded/portable systems, so optimizing the software in conjunction with underlying low-power hardware features such as voltage scaling is vital. In this paper, we present two compiler-directed energy optimization strategies based on voltage scaling: static voltage scaling and dynamic voltage scaling. In static voltage scaling, the compiler determines a single supply voltage level for the entire input program. Our primary aim is to reduce the energy consumption of a given code without increasing its execution time. To accomplish this, we employ classical loop-level compiler optimizations; however, we use these optimizations to create opportunities for voltage scaling to save energy, rather than to increase program performance. In dynamic voltage scaling, the compiler can select different supply voltage levels for different parts of the code. This compilation strategy is based on integer linear programming (ILP) and can accommodate energy/performance constraints. For a benchmark suite of array-based scientific codes and embedded video/image applications, our experiments show average energy savings of 31.8% when static voltage scaling is used. Under the same performance constraints, our dynamic voltage scaling strategy saves 15.3% more energy than static voltage scaling.
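The static strategy can be illustrated with a small numerical sketch: once loop optimizations shorten the cycle count, the supply voltage is lowered until the optimized code just meets the original execution-time budget, and the quadratic dependence of dynamic energy on Vdd yields the savings. The alpha-power delay model, the voltage constants, and the 30% cycle reduction below are illustrative assumptions, not the paper's exact models or numbers.

```python
VT = 0.8        # threshold voltage (V), assumed for illustration
VMAX = 3.3      # nominal supply voltage (V), assumed for illustration

def cycle_time(vdd, vt=VT):
    # Alpha-power-law delay model (alpha = 2): T_cycle grows as Vdd / (Vdd - Vt)^2
    return vdd / (vdd - vt) ** 2

def energy(vdd, cycles):
    # Dynamic switching energy is proportional to cycles * Vdd^2
    return cycles * vdd ** 2

def static_scale(orig_cycles, opt_cycles):
    """Bisect for the lowest Vdd at which the loop-optimized code
    still finishes within the original program's execution time."""
    budget = orig_cycles * cycle_time(VMAX)
    lo, hi = VT + 0.1, VMAX          # hi stays feasible throughout
    for _ in range(60):
        mid = (lo + hi) / 2
        if opt_cycles * cycle_time(mid) <= budget:
            hi = mid                 # deadline met: try a lower voltage
        else:
            lo = mid
    return hi

# Example: suppose tiling/unrolling cut the cycle count by 30%
v = static_scale(orig_cycles=1_000_000, opt_cycles=700_000)
saving = 1 - energy(v, 700_000) / energy(VMAX, 1_000_000)
```

With these assumed constants the reclaimed slack lets Vdd drop well below nominal, and the combined cycle and voltage reduction gives an energy saving much larger than the cycle reduction alone.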
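The dynamic strategy assigns one of several discrete supply voltage levels to each code region so that total energy is minimized while a performance constraint holds. The sketch below is a brute-force stand-in for such an ILP formulation, small enough to enumerate exhaustively; the voltage levels, their relative cycle times, and the hypothetical `best_assignment` helper are assumptions for illustration, not the paper's formulation or solver.

```python
from itertools import product

# (Vdd, relative cycle time) pairs; values assumed for illustration
LEVELS = [(3.3, 1.0), (2.4, 1.5), (1.8, 2.2)]

def best_assignment(region_cycles, time_budget):
    """Exhaustive stand-in for the ILP: choose one voltage level per code
    region, minimizing sum(cycles * Vdd^2) subject to the constraint
    sum(cycles * relative_cycle_time) <= time_budget."""
    best = None
    for choice in product(LEVELS, repeat=len(region_cycles)):
        t = sum(c * lv[1] for c, lv in zip(region_cycles, choice))
        if t > time_budget:
            continue                      # violates performance constraint
        e = sum(c * lv[0] ** 2 for c, lv in zip(region_cycles, choice))
        if best is None or e < best[0]:
            best = (e, choice)
    return best                           # (energy, per-region levels) or None

# Example: three regions (cycle counts) with a 40% slowdown allowance
e, choice = best_assignment([100, 300, 600], time_budget=1400)
```

A real compiler would solve this as an ILP rather than by enumeration, but the structure is the same: the objective trades per-region voltage reductions against the shared performance budget, so the solver tends to slow down the regions where a voltage drop buys the most energy per unit of added time.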