Part of my “Learning AI in Public” series - documenting my journey to better understand AI and help others reduce the fear around it
Why I’m Learning AI in Public
I’ll be honest: I use AI tools regularly. ChatGPT, Claude, GitHub Copilot, Cursor, and others are part of my daily workflow. But using tools and understanding the underlying technology are two very different things.
I started this learning journey for two reasons: to increase my AI knowledge and experience, and to reduce the misconceptions and fear around AI. There’s so much hype, marketing, and conflicting information that it’s hard to separate reality from speculation. As engineers, we need to understand what we’re actually working with.
That’s why I’m documenting everything publicly. Maybe my learning process can help others navigate this landscape too.
The Classification System I Didn’t Know I Needed
Today I dove into AI types and classification. I spent time reading through multiple sources—IBM’s AI types overview, Coursera articles on narrow intelligence, various blogs breaking down AI categories—and asking Claude and Gemini direct questions about how AI is actually organized.
The breakthrough wasn’t learning something completely new. It was getting a clear framework that organized concepts I’d been thinking about loosely.
AI by Strength:
- ANI (Artificial Narrow Intelligence): AI focused on specific tasks
- AGI (Artificial General Intelligence): Human-level reasoning across domains
- ASI (Artificial Super Intelligence): Beyond human capability
The surprise? We’re still entirely in the ANI phase. Everything I use daily (ChatGPT, Copilot, all of it) is narrow intelligence, just very sophisticated narrow intelligence.
The Marketing vs. Reality Gap
Here’s what caught me off guard: all the news coverage and marketing makes it seem like we’ve moved beyond ANI. Headlines about AI “reasoning,” “understanding,” and “thinking” implied we were approaching something closer to general intelligence.
But we haven’t. The tools that feel so capable are still narrow systems, each trained for specific types of tasks. ChatGPT excels at text generation and conversation. Copilot is specialized for code completion. They’re incredibly good within their narrow domains, but they’re not general intelligence.
Understanding this distinction clarified a lot of the confusion I’d been carrying around AI capabilities and limitations.
What Felt Familiar as an Engineer
Learning about AI classification reminded me of learning new programming languages or frameworks. Once you understand the landscape and basic structure, the details start falling into place more naturally.
The categorization by strength (ANI vs AGI vs ASI) and functionality (like machine learning, natural language processing, computer vision) gave me the mental framework I needed. It’s like understanding the syntax of a new language—once you have the structure, you can start building real understanding.
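To make the framework concrete, here’s a minimal sketch of the two axes as a simple data structure. The category descriptions and the example classifications are my own illustrative summaries, not an official taxonomy:

```python
# Two axes of AI classification, purely illustrative:
# "strength" = how general the system is; "functionality" = the kind of task
# it is built around. AGI and ASI are hypothetical categories today.

AI_BY_STRENGTH = {
    "ANI": "narrow intelligence: excels at specific tasks",
    "AGI": "general intelligence: human-level reasoning across domains (hypothetical)",
    "ASI": "super intelligence: beyond human capability (hypothetical)",
}

AI_BY_FUNCTIONALITY = {
    "machine learning": "systems that learn patterns from data",
    "natural language processing": "systems that work with human language",
    "computer vision": "systems that interpret images and video",
}

def classify(tool: str, strength: str, functionality: str) -> str:
    """Describe a tool along both axes of the framework."""
    return (f"{tool} is {strength} ({AI_BY_STRENGTH[strength]}), "
            f"built around {functionality}")

# Every tool mentioned in this post sits in the ANI row:
print(classify("ChatGPT", "ANI", "natural language processing"))
print(classify("Copilot", "ANI", "natural language processing"))
```

The point of the sketch is just that the two axes are independent: a tool’s functionality (language, vision, code) says nothing about its strength, and everything on the market today lands in the ANI row.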
The concepts themselves weren’t foreign. As engineers, we work with specialized systems all the time. Understanding that current AI is essentially very sophisticated specialized systems made it click.
Tomorrow’s Direction
Today’s learning raised the obvious next question: if everything we use is narrow AI, what are the practical boundaries of these systems? What tasks can current AI actually handle well, and where do they break down?
Next I’m focusing on current AI capabilities—not the theoretical future stuff, but the practical reality of what today’s narrow intelligence systems can and can’t do.
Following along with my learning journey? I’m documenting this process to help other engineers understand AI without the marketing hype. What’s been your experience with AI tools? Have you noticed the gap between capabilities and the way they’re marketed?