In light of growing concern about the energy requirements of large machine learning models, a recent study from MIT Lincoln Laboratory and Northeastern University has investigated the savings that can be made by power-capping GPUs used in model training and inference.
June 6, 2022

Unite.AI reports on a recent paper from MIT Lincoln Laboratory and Northeastern University investigating the savings that can be made by power-capping GPUs employed in model training and inference, as well as several other techniques for cutting down AI energy usage.

The work also calls for AI papers to conclude with an ‘Energy Statement’, similar to the recent trend toward ‘ethical implications’ statements in machine learning research papers.

The chief finding of the work is that power-capping (limiting the power available to the GPU that’s training the model) offers worthwhile energy savings, particularly for Masked Language Modeling (MLM) and models such as BERT and its derivatives.
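The paper itself does not prescribe a particular tooling workflow, but on NVIDIA hardware a power cap is typically applied through NVML, or equivalently the `nvidia-smi -pl` command. The sketch below is a minimal illustration using the `pynvml` bindings and a hypothetical 150 W cap (the value is illustrative, not taken from the study); it shows how a training script might lower the limit before a run and restore the default afterwards.

```python
# Minimal sketch of GPU power-capping via NVML (pynvml bindings).
# Assumptions: an NVIDIA GPU at index 0, pynvml installed, and sufficient
# privileges to change the power limit (usually requires root).
# The 150 W cap is illustrative, not a value from the paper.

import pynvml

TARGET_LIMIT_WATTS = 150  # hypothetical cap for the training run

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # NVML reports power values in milliwatts.
    default_limit_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

    # Clamp the requested cap to the range the hardware supports.
    target_mw = max(min_mw, min(TARGET_LIMIT_WATTS * 1000, max_mw))
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
    print(f"Power limit set to {target_mw / 1000:.0f} W "
          f"(default is {default_limit_mw / 1000:.0f} W)")

    # ... run training or inference here ...

    # Restore the default limit afterwards.
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, default_limit_mw)
finally:
    pynvml.nvmlShutdown()
```

The same cap could be set from the command line with `sudo nvidia-smi -i 0 -pl 150`; either way, the GPU's driver enforces the limit by throttling clocks, trading some training throughput for lower power draw.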