Coding for the climate: What is the hidden cost of artificial intelligence?

[Illustration of a polluted earth emitting data smoke. Aishwarya Jayadeep/Senior Staff]

Artificial intelligence is the hot new kid on the block — everybody wants a piece of it. Nearly every industry has fallen prey to this trendy buzzword. At this point, every company seems to be employing “AI-driven” or “AI-backed” analytics and algorithms, to the point that the term seems almost as meaningless as the food industry’s ubiquitous “all-natural” tagline. Usage of the phrase “artificial intelligence” has nearly tripled since 2008, as many companies turned to technology as their knight in shining armor following the financial crisis. Though many of these companies really use buzzwords such as AI to garner hype around their products or services, AI’s skyrocketing popularity speaks to a broader trend with potentially serious repercussions for our environment — the rise of high-powered computing.

It isn’t necessarily the use of processing-intensive programs that causes the issue here. Instead, the problem lies in how we manage these computing resources. In a frenzy to stay atop these technological trends, companies tend to nonchalantly gobble up computing resources. Coupled with the accessibility of cloud services provided by Big Tech, it’s now easier than ever for even the most adaptive of startups to deploy computing-intensive programs without much upfront investment.

Despite this rise in popularity, AI processing has yet to benefit from economies of scale. In fact, the opposite has occurred — as neural networks become more sophisticated, they become increasingly expensive to train and maintain. An MIT Technology Review article found that since 2012, the computational power needed to train the largest, most commonly used AI models has been doubling every 3.4 months — roughly seven times faster than the two-year doubling period associated with Moore’s Law. So what’s causing these systems to consume so much energy? There are three main contributing factors: the construction of their data centers, the power sources these centers are plugged into and the datasets used to train the models.
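To put those doubling periods in perspective, here is a back-of-the-envelope sketch. The 3.4-month figure is the one reported in the analysis cited above; the 24-month period is the conventional Moore’s Law benchmark, and the five-year window is an arbitrary illustration rather than a figure from the article:

```python
# Back-of-the-envelope comparison of compute growth rates.
moore_doubling_months = 24.0  # Moore's Law: transistor counts double roughly every 2 years
ai_doubling_months = 3.4      # reported doubling period for AI training compute since 2012

# How many times faster AI compute demand doubles compared with Moore's Law
speedup = moore_doubling_months / ai_doubling_months
print(f"AI training compute doubles ~{speedup:.1f}x faster than Moore's Law")

# Cumulative growth over an illustrative five-year window at each rate
months = 5 * 12
moore_growth = 2 ** (months / moore_doubling_months)
ai_growth = 2 ** (months / ai_doubling_months)
print(f"Moore's Law over 5 years: ~{moore_growth:,.0f}x")
print(f"AI compute over 5 years:  ~{ai_growth:,.0f}x")
```

The gap compounds quickly: at a two-year doubling period, five years yields under a tenfold increase, while a 3.4-month doubling period yields growth on the order of hundreds of thousands of times.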

The construction of data centers is an obvious one. A data center is essentially a warehouse storing rows upon rows of server hardware. Not only is the construction of data centers resource-intensive, but the emissions generated by day-to-day operation can quickly outpace a data center’s upfront environmental cost, especially if the facility is poorly designed. Additionally, a combination of Moore’s Law and software incompatibility means that top cloud providers consistently retire dated hardware to stay competitive, adding to the waste.

Just as important as a data center’s internal efficiency are the power sources it draws from. Power usage effectiveness, or PUE, is one of the most commonly used metrics for calculating data center efficiency: the ratio of the total energy a facility consumes to the energy that actually reaches its IT equipment. The issue with PUE is that it’s an internal calculation, measuring only how the data center performs relative to its own operations. To get the full picture, we also need to account for the energy sources behind the power grid the center is plugged into. Fossil fuel-powered grids are by far the most common; ideally, data centers should look to primarily draw from renewable sources. Unfortunately, this is a large-scale infrastructure issue where only governments can make real change.
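To make the metric concrete, here is a minimal sketch of the PUE calculation. The wattage figures are hypothetical; the formula itself (total facility power divided by IT equipment power, with 1.0 being ideal) is the standard definition:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 means every watt the facility draws reaches the servers;
    anything above that is overhead such as cooling, lighting and power
    conversion losses.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,000 kW delivered to servers, 500 kW of overhead
print(pue(total_facility_kw=1500, it_equipment_kw=1000))  # 1.5
```

Note that a low PUE says nothing about where those kilowatts come from — which is exactly the blind spot described above. A coal-powered facility and a solar-powered one can report identical PUE values.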

Lastly, the training sets used to build AI models are also a concern. The issues surrounding training sets are much harder to quantify due to the black-box nature of neural networks. In short, the more noise in a dataset, the longer and more computationally intensive it is to train a model on it. The combination of these three factors makes it very difficult to pin down an exact emissions figure. What we do know is that it’s a lot, and it’s certainly accelerating.

Unfortunately, this puts us in an awkward position. We know that AI can do a lot of good in terms of climate change — think pollution forecasts, agriculture optimizations, traffic management, etc. — but do these lofty promises justify high, “hidden” processing costs at such a nascent stage of development? Many of these earth-saving technologies are still in development, so how can we be sure which are worth the energy investment? How can investors responsibly back these AI-driven companies rather than futilely burning cash and, by extension, the environment?

While I don’t have the answers to these difficult questions, I do know that we should be wary of AI in the coming years. Perhaps it could benefit from some degree of oversight. Unfortunately, bureaucratic red tape and legislative torpor have rendered governments ineffectual at dealing with such a fast-moving technology. Proprietary technology poses a further obstacle: many tech companies guard their systems with their lives to maintain a competitive edge, leaving outsiders little to audit. At this point, it seems the only feasible solution is internal auditing boards, in the hope that tech companies commit themselves to the good of society in earnest. Nevertheless, the current rate at which cloud services consume power will undoubtedly cause irreversible harm to our environment sooner rather than later, and something has to be done to formally keep cloud services in check.

Our future with AI and cloud computing is unclear. In its current state, Big Tech has too much control and too little oversight. Though not immediately evident, this meteoric rise in large-scale processing has direct implications for our environment. Change will come slowly, but at the very least, we should stay informed about how Big Tech plans to tackle this issue.

Austin Huang is a computer science and business student at the University of Southern California.