Gemini 3, Google's latest AI model, had a hilarious and revealing moment when it refused to accept that the year was 2025. The incident highlights how a model's knowledge is bounded by its training data. Here's the story:
Andrej Karpathy, a renowned AI researcher, founding member of OpenAI, and former director of AI at Tesla, got early access to Gemini 3. He was surprised when the model insisted the year was still 2024, even though it was released in 2025. Karpathy tried to prove the date by showing the model news articles, images, and search results, but Gemini 3 held firm, accusing him of trying to trick it and even suggesting he was feeding it AI-generated fakes. It was a comical yet enlightening exchange.
The cause was eventually discovered: the model's training data ended before 2025, and it had no internet access. When Karpathy enabled the 'Google Search' tool, Gemini 3 was shocked to discover it was indeed 2025. It accepted the truth, apologized for its behavior, and marveled at current events, like Nvidia's massive valuation and the Eagles' Super Bowl win.
The incident underscores a crucial point: LLMs are imperfect replicas of human skills. They should be treated as valuable tools that assist humans, not as replacements for them. The humor in episodes like this lies in those imperfections, a reminder that AI is still a work in progress.