Day 4 of my AI learning journey as a Principal Engineer at MLB
The Setup: Why Real Work Over Toy Examples
Four days into my AI learning journey, I hit a wall with the typical “hello world” examples and generic use cases. As an engineer who’s spent years working on real-world JavaScript/TypeScript applications, I needed to know: How do these AI tools actually perform when faced with the messy, complex problems I deal with every day?
Most AI tutorials show you perfect scenarios - clean code snippets, straightforward questions, ideal use cases. But engineering work is rarely ideal. We’re debugging legacy code, making architectural decisions with incomplete information, and researching solutions under time pressure.
So instead of following another “ask ChatGPT to write a todo app” tutorial, I decided to throw real engineering challenges at three popular AI tools and see what happened.
The Methodology: Testing with Actual Engineering Work
I gave myself 30 minutes and three real scenarios that mirror my daily work:
Scenario 1: Code Review
- Took a complex JavaScript function from recent work (anonymized)
- Asked each tool to review for improvements, potential bugs, and best practices
- Looked for depth of analysis and actionable feedback
Scenario 2: Technical Explanation
- Chose a TypeScript error I recently encountered
- Asked each tool to explain what was happening and suggest fixes
- Evaluated clarity and accuracy of explanations
Scenario 3: Research Task
- Needed to compare modern web performance optimization approaches
- Asked each tool to summarize current best practices
- Assessed quality and recency of information
Each tool got the same prompts, and I documented the responses, the response times, and my overall experience.
ChatGPT Results: The Patient Pairing Partner
Strengths: ChatGPT felt like pairing with that colleague who never makes you feel dumb for asking questions. When I threw the TypeScript error at it, the explanation was clear and approachable. It broke down complex concepts into digestible pieces and offered multiple solution approaches.
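To keep the original error anonymized, here's a representative sketch of the kind of TypeScript problem I mean, not the actual code from work. The interface and function names are invented for illustration; the pattern (a value that might be undefined, plus a couple of ways to handle it) is what these explanations tend to walk through:

```typescript
// Illustrative only - not the actual error from my codebase.
// A classic case: Array.prototype.find can return undefined, and strict mode
// won't let you dereference the result without checking it first.

interface Player {
  id: string;
  battingAverage?: number;
}

function getBattingAverage(players: Player[], id: string): number {
  const player = players.find((p) => p.id === id);

  // Under strictNullChecks this line errors: "'player' is possibly 'undefined'."
  // return player.battingAverage;

  // Fix 1: narrow explicitly, so both the compiler and the next reader know the intent.
  if (!player || player.battingAverage === undefined) {
    throw new Error(`No batting average available for player ${id}`);
  }
  return player.battingAverage;
}

// Fix 2: optional chaining with a fallback, if a missing player is an expected case.
// const avg = players.find((p) => p.id === id)?.battingAverage ?? 0;
```

Whether the explicit guard or the optional-chaining fallback is right depends on whether a missing value is an error or an expected case - exactly the kind of trade-off a good explanation surfaces.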
For debugging, it excelled at walking through the logic step-by-step and suggesting different angles to approach the problem. The tone was conversational and encouraging - something that matters more than you’d expect when you’re stuck on a tricky bug at 4 PM.
Limitations: The code review feedback was somewhat surface-level. It caught obvious issues but missed some subtle architectural concerns that a senior engineer might flag. Also, when I asked about recent web performance techniques, some suggestions felt slightly outdated.
Best Use Cases: Debugging sessions, learning new concepts, brainstorming alternative approaches, and those moments when you need someone to explain why your code isn’t working without judgment.
Claude Analysis: The Thorough Senior Engineer
Strengths: Claude approached the same code review like a meticulous senior engineer. The feedback was comprehensive, covering not just bugs but also maintainability, edge cases, and potential future issues. It suggested specific improvements with reasoning behind each recommendation.
For architectural questions, Claude provided thoughtful analysis of trade-offs. Instead of just saying “use this pattern,” it explained why and when different approaches make sense.
What Stood Out: The depth of technical analysis was impressive. Claude caught subtle issues in my code that I hadn’t noticed and provided context about why certain patterns might cause problems down the road.
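I'm not pasting the anonymized function here, but to give a flavor of the "subtle issue" category, here's a small hypothetical example of the sort of thing a thorough review catches - a pattern that works fine today and bites you later:

```typescript
// Illustrative sketch - not the anonymized function from the post.
// The subtle part: Array.prototype.sort mutates in place, so what looks like
// "sorting a copy" quietly reorders the caller's array too.

interface GameResult {
  opponent: string;
  score: number;
}

function topScores(results: GameResult[], limit: number): GameResult[] {
  // Bug: this mutates the caller's array - harmless now, a head-scratcher later
  // when some other component depends on the original ordering.
  // return results.sort((a, b) => b.score - a.score).slice(0, limit);

  // Safer: sort a shallow copy so the input stays untouched.
  return [...results].sort((a, b) => b.score - a.score).slice(0, limit);
}
```

Nothing about the buggy version fails in an isolated test; it only hurts once other code relies on the original order, which is exactly the "down the road" problem a careful reviewer flags.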
Best Use Cases: Code reviews, architectural decisions, technical documentation, and any time you need thorough analysis with consideration for long-term implications.
Perplexity Findings: The Research Powerhouse
Strengths: Perplexity dominated the research scenario. When I asked about current web performance optimization techniques, it provided up-to-date information with sources, making it easy to verify and dive deeper.
Unlike traditional search, Perplexity synthesized information from multiple sources and presented it in a coherent, engineering-focused summary. No wading through outdated Stack Overflow answers or blog posts from 2018.
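As one concrete example of the kind of technique that shows up in this research (my own illustrative sketch, not a quote from Perplexity's output), here's code-splitting a heavy module with a dynamic import so it stays out of the initial bundle:

```typescript
// Illustrative sketch. './heavy-chart' and renderChart are hypothetical names;
// the technique - dynamic import() for code splitting - is the real, widely
// supported part.

const chartButton = document.querySelector<HTMLButtonElement>('#show-chart');

chartButton?.addEventListener('click', async () => {
  // Bundlers like webpack and Vite turn dynamic import() into a separate chunk,
  // so this module never lands in the initial bundle or blocks first render.
  const { renderChart } = await import('./heavy-chart');

  const root = document.querySelector('#chart-root');
  if (root) {
    renderChart(root);
  }
});
```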
Unique Value: The ability to get current, sourced information quickly was game-changing. As engineers, we often need to research new libraries, compare approaches, or understand recent developments in our tech stack.
Best Use Cases: Technical research, staying current with framework updates, comparing libraries or approaches, and any time you need reliable, recent information fast.
The Engineer’s Takeaway: Building an AI Toolkit
The biggest insight from this testing? Don’t try to find “the one AI tool” - build a toolkit.
Each tool excels at different parts of the engineering workflow:
- Use ChatGPT when you’re learning, debugging, or need clear explanations
- Use Claude when you need thorough code review or architectural analysis
- Use Perplexity when you’re researching current best practices or comparing technical approaches
It’s like having different IDEs for different languages, or different debugging tools for different problems. The key is matching the right AI tool to the specific task at hand.
Next Steps: Integration into Daily Development
Based on this testing, here’s how I’m planning to integrate these tools into my actual work:
Morning Code Reviews: Start using Claude for systematic review of pull requests, especially for catching architectural issues I might miss.
Research Sessions: When evaluating new JavaScript frameworks or performance optimizations for our platform, lead with Perplexity to get current landscape information.
Debugging Partnerships: Keep ChatGPT handy for those frustrating debugging sessions where I need someone to walk through the logic with me.
Documentation: Test using Claude for improving technical documentation - its thorough analysis style might help create better internal docs.
The goal isn’t to replace engineering judgment, but to augment it. These tools can handle the initial analysis, research, and explanation work, freeing up mental energy for the creative problem-solving that still requires human insight.
This is Day 4 of my public AI learning journey. Tomorrow I’m diving into how these tools change our approach to problem-solving as engineers. What AI tools have become part of your engineering workflow?