Tickets to the show – Feynman and the birth of the AI age, Sparks of AGI  

In 1945, while Richard Feynman was a young physicist working on the Manhattan Project, he and other scientists were invited to witness the first detonation of the new weapon, an event known as the Trinity test.

The scientists were stationed at a base camp about 20 miles from the blast site. They were instructed to wear welder’s goggles to protect their eyes from the intense flash of light caused by the explosion.

Feynman, however, was curious and wanted to see the explosion unfiltered. He reasoned that he could observe the blast through the windshield of a truck, since the glass would absorb most of the harmful ultraviolet radiation.

As the bomb detonated, he watched in awe as the intense flash of light lit up the surroundings. He was one of the few people to see the explosion directly, without goggles.

Feynman later described the explosion as “a giant ball of yellow fire,” and he could feel its heat even from 20 miles away. In that moment, he realized the immense power of the atomic bomb and the potential devastation it could cause.

This was the advent of the atomic age. Something truly new in the history of human civilization. 

How much would people have paid to be there? 

If the recent sale of a pair of Michael Jordan’s basketball shoes for $2.2 million is any indication, the answer would be a staggering sum.

The Birth of Artificial General Intelligence and the AI Age

It’s with this in mind that I share the video below, featuring Sébastien Bubeck, a Microsoft AI researcher, whose talk was the inspiration for this post.

Before watching this video, I counted myself among those skeptical as to whether the recently released AI tools actually qualified as intelligent. Like many others who used them, I found that they easily made mistakes that, from a human perspective, could only be characterized as dumb. This, in combination with pronouncements by prominent AI scientists about the limitations of these models, led me to assume that large language models were essentially fancy parrots: they would try their best to produce something that sounded good without doing any deep reasoning.

But watching the video below, with its illustrations of the jump in capabilities from GPT-3 to GPT-4, has nearly convinced me that something truly profound has been created. Something that is reasoning, or performing some effective approximation of it.

More specifically, the examples he provides of GPT-4 solving problems that involve creating representations of the world and correcting its own mistakes are highly compelling.

Based on this video, my own naive opinion is that it seems increasingly likely we are standing at the beginning of another historical transition point. And unlike the atomic age, which required top security clearance and a 20-mile exclusion zone, the tickets to this show are free.

Implicit versus explicit disruption – why AI will be more profound than The Bomb

One final point worth making is how much larger the impact of AI is likely to be than that of the atomic bomb. Or, perhaps, how much more explicit its impact will be in our day-to-day lives.

This is because atomic weapons have thankfully remained mostly implicit weights on the balance of national power (with the notable exception of the bombs dropped on Japan). They do not intrude on our daily lives. This is not to say they have had no impact, but rather that their impact has been hidden.

This has been noted by many, including the author Yuval Noah Harari in his book “Sapiens: A Brief History of Humankind.” Harari discusses how the threat of nuclear weapons, particularly during the Cold War, led to a sort of forced peace between the superpowers. This, in turn, allowed cultural exchange and creativity to flourish, as nations focused on cultural, economic, and technological innovation instead of engaging in large-scale wars.

AI, by contrast, is likely to infiltrate our daily lives to a degree perhaps matched only by our mobile phones. We will use it while we drive, while we cook, while we game. Our children will have AI friends and teachers. And someday these children may ask us where we were when we first interacted with an AI. That is, if we are still calling them AI by that point.

Other AI Content – Cal Newport Disagrees 

Right before I went to send this article out, I came across an email update from the author and computer scientist Cal Newport revealing that he had just published an article, “What Kind of Mind Does ChatGPT Have?”, for his column in The New Yorker.

In the article, which I recommend reading, Cal Newport walks non-technical readers through a highly simplified explanation of how a large language model works, then makes the following argument about ChatGPT’s ability to reason, or lack thereof:

“A system like ChatGPT doesn’t create, it imitates. When you send it a request to write a Biblical verse about removing a sandwich from a VCR, it doesn’t form an original idea about this conundrum; it instead copies, manipulates, and pastes together text that already exists, originally written by human intelligences, to produce something that sounds like how a real person would talk about these topics. This is why, if you read the Biblical-VCR case study carefully, you’ll soon realize that the advice given, though impressive in style, doesn’t actually solve the original problem very well. ChatGPT suggests sticking a knife between the sandwich and VCR, to “pry them apart.” Even a toddler can deduce that this technique won’t work well for something jammed inside a confined slot. The obvious solution would be to pull the sandwich out, but ChatGPT has no actual conception of what it’s talking about—no internal model of a stuck sandwich on which it can experiment with different strategies for removal.” – Cal Newport, Emphasis Mine 

Those who watch Sébastien Bubeck’s video may note that creating this type of internal model is precisely what he sees evidence of GPT-4 being able to do. Again, I would emphasize that I have no technical basis on which to judge the validity of either argument.

Still, I wonder whether Cal Newport’s familiarity with the mechanistic processes of large language models has biased him in this case. To paraphrase the philosopher David Hume: because Cal Newport sees so clearly that ChatGPT is an ‘is’, he believes it can never become an ‘ought’.

Whoever ends up being right, I look forward to listening to and learning from more debates about the future of AI.
