OpenAI Codex lead says humans’ typing speed is limiting AGI development

Alexander Embiricos, head of OpenAI Codex, believes that the development of Artificial General Intelligence (AGI) is currently limited by human typing speed. He suggested that we need to train AI agents that can review the output of AI models instead of relying on humans to do so.

Artificial intelligence (AI) is developing rapidly. Tech companies like Google and OpenAI are making huge strides as we speak. Yet despite hundreds of billions of dollars invested in AI infrastructure, the goal of Artificial General Intelligence (AGI) remains out of reach. AGI refers to a state of intelligence in which AI can think and reason like humans. Now, OpenAI's Codex lead, Alexander Embiricos, has claimed that the real bottleneck in AGI development is not the AI models themselves, but the typing speed of humans.


Why is human typing speed limiting AGI development?

Embiricos shared his thoughts on an episode of Lenny’s Podcast. He argued that the real limitation in reaching AGI right now is human typing speed and manual prompt management, calling “human typing speed” or “human multi-tasking speed at writing prompts” a “currently underappreciated limiting factor” for AGI.

According to Embiricos, the majority of current workflows rely on humans to process and review the output produced by AI agents. He stressed that we can see faster progress when AI agents can review work instead of humans. Alexander Embiricos said, “You can have an agent keep an eye on all the work you’re doing, but if you don’t have an agent validating your work, you’re still at the bottleneck, like, can you review all that code?”

What is the solution?

Despite the pace of progress in AI technologies, the bottleneck has shifted from the capabilities of AI models to the speed at which humans can interact with these systems and validate the actions they perform. Embiricos believes that even with agents able to observe human actions, the need for humans to validate results significantly slows progress.

To overcome this obstacle, Embiricos advocates changes in the structure of AI systems. His view is that “we need to free humans from the burden of writing signals and validating the work of AI, because we are not fast enough.”

Alexander Embiricos believes that re-engineering AI systems to make agents “useful by default” will lead to exponential increases in productivity. As he puts it, “If we can rebuild the system to make the agent useful by default, we’ll start unlocking the hockey stick,” referring to the rapid growth characteristic of a “hockey stick” curve.

The OpenAI Codex lead acknowledged, however, that there is no straightforward path to a fully automated workflow, as each application will likely demand its own tailored approach. Still, he is confident that we will soon see progress at this level.

Looking ahead, Embiricos anticipates that early adopters will be the first to experience significant productivity gains, likely followed by widespread automation at major technology companies. “Starting next year, we’ll see early adopters hockey-sticking their productivity, and in the years to come, we’ll see bigger and bigger companies hockey-sticking their productivity,” he said.

Once we reach this level of productivity through AI automation, Alexander Embiricos believes the door will open to AGI. “That hockey-sticking will go back to the AI labs, and only then will we basically be in AGI,” he said.
