Facebook’s ex-privacy chief says efficiency, not raw power, will decide who wins the AI race
Former Facebook privacy chief Chris Kelly argues that the winner of the AI race will be decided by how efficiently companies can make their models smarter, as rising data center costs and energy demands become the industry’s biggest hurdles.

Artificial intelligence is changing the world we live in, and it has become the new competitive edge every big tech company is racing toward. But what will that edge actually be? So far, the race has largely been about building smarter machine learning systems. In the coming years, though, the advantage will come not just from intelligence, but from efficiency.
These AI systems are extraordinarily power-hungry, consuming enormous amounts of energy to keep data centers running. As a result, how much intelligence a company can deliver for the least energy and at the lowest cost could become the defining factor. At least, that is what Chris Kelly, Facebook’s former chief privacy officer and general counsel, believes.
Speaking in a recent CNBC interview, Kelly said that the AI industry is reaching a point where simply adding more computing power is not enough. As models grow larger and workloads become more complex, energy consumption and infrastructure costs are becoming major bottlenecks. According to Kelly, companies that learn to do more with less will gain a decisive advantage.
AI infrastructure and energy consumption
The scale of investment already flowing into AI infrastructure underlines how serious the challenge is. According to recent data from S&P Global, data center dealmaking is set to surpass $61 billion in 2025 as tech giants like OpenAI, Google, Meta, xAI, and Anthropic race to build facilities capable of handling massive AI workloads. This construction boom has driven up demand for electricity, cooling systems, land, and specialized hardware, putting pressure not only on company balance sheets but also on the power grid.
While much of today’s race is focused on making machines smart enough to reach artificial general intelligence, or AGI, the point at which machines can match human-level intelligence, Kelly argues that human intelligence itself runs on far less energy. In his view, companies will need to approach machine intelligence in a similar way.
“We run our brains at 20 watts… We don’t need gigawatt power stations to reason,” he said. “I think finding efficiency will be one of the key things that the big AI players focus on.”
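To put that comparison in perspective, a single gigawatt is 50 million times the brain’s 20 watts; even a modest 100-megawatt data center draws about five million times as much power as the organ it is trying to imitate.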
For Kelly, the next big breakthrough in AI will come not from raw scale alone but from the smart engineering that reduces the cost and energy required to train and run models. The companies that manage to crack this problem, he argues, are the ones likely to emerge as long-term leaders of the race.
Power consumption, in particular, is already one of the industry’s most pressing concerns. In September, Nvidia and OpenAI revealed plans for at least 10 gigawatts of new data center capacity. According to the New York Independent System Operator, sustaining that much power year-round would match the annual electricity use of approximately eight million American homes, or roughly the peak summer demand of New York City in 2024.
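The household figure holds up to a rough back-of-envelope check: assuming the average American home uses about 10,700 kilowatt-hours of electricity per year, roughly the U.S. average, that works out to a continuous draw of around 1.2 kilowatts per home, and 10 gigawatts divided by 1.2 kilowatts comes to a little over eight million homes.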
As AI models grow bigger, smarter, and more demanding, they will also consume more energy. For companies aiming to gain an edge, then, the focus will rest not just on brute force, but on teaching these systems to think better while using less power.


