
AI Problem-Solving vs. Engineering Logic: A Developer’s Perspective

Day 2 of learning AI as an engineer

Yesterday I started my journey into understanding AI by learning how it's classified, and today's deep dive into different AI approaches has fundamentally shifted how I think about problem-solving. As someone who's been writing code for as long as I have, I'm discovering that everything I know about building solutions is getting a major upgrade.

The Rule-Based Comfort Zone

As engineers, we’re trained to think in explicit logic. We write functions that behave predictably:

function calculateShippingCost(weight, distance, priority) {
    if (priority === 'express') {
        return weight * 0.5 + distance * 0.1 + 15;
    } else if (priority === 'standard') {
        return weight * 0.3 + distance * 0.05 + 5;
    }
    // Every possible scenario accounted for - even the invalid ones
    throw new Error(`Unknown priority: ${priority}`);
}

This deterministic approach gives us control. We can trace through every line of code, predict outcomes, and debug by following logical paths. When something breaks, we know exactly where to look because we wrote every rule.

Our entire debugging methodology is built on this foundation: reproduce the issue, trace the execution, identify the faulty logic, fix the code. It’s systematic, predictable, and comforting.

The Pattern Recognition Shift

Today I learned that Machine Learning flips this entire approach on its head. Instead of writing explicit rules, you show the system examples and let it figure out the patterns.

It’s like the difference between teaching someone to cook with detailed recipes versus having them taste thousands of dishes and learn what makes them good. The traditional engineering approach is the recipe book - precise, explicit, controllable. The ML approach is more like developing culinary intuition through experience.

This shift from “programming” to “learning” is profound. We go from being rule-writers to example-curators. The system discovers patterns we might never have thought to code explicitly.
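To make that shift concrete, here's a toy sketch (invented example data, plain gradient descent, no ML library): instead of hard-coding the shipping-cost formula from earlier, we hand the system examples and let it recover the coefficients on its own.

```javascript
// Example data: (weight, cost) pairs that secretly follow cost = 0.5 * weight + 5.
// In the rule-based world we'd write that formula; here we only supply examples.
const examples = [
  { weight: 1, cost: 5.5 },
  { weight: 2, cost: 6.0 },
  { weight: 4, cost: 7.0 },
  { weight: 8, cost: 9.0 },
];

// Fit cost ≈ w * weight + b with plain gradient descent on squared error.
let w = 0, b = 0;
const lr = 0.01;
for (let step = 0; step < 20000; step++) {
  let gw = 0, gb = 0;
  for (const { weight, cost } of examples) {
    const err = w * weight + b - cost; // prediction error for this example
    gw += err * weight;
    gb += err;
  }
  w -= lr * (gw / examples.length);
  b -= lr * (gb / examples.length);
}

// The system discovers the pattern (≈ 0.5 per unit of weight, ≈ 5 base fee)
// from the examples alone - nobody wrote that rule down.
console.log(w.toFixed(2), b.toFixed(2));
```

Real ML frameworks do vastly more than this, but the core loop is the same: curate examples, measure error, nudge the parameters, repeat.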

Search Algorithms as Thinking Strategies

What struck me today was how AI search algorithms mirror debugging strategies I already use:

Brute Force Search is like when I’m debugging a complex issue and systematically check every possible cause. It’s thorough but can be time-consuming.

Heuristic Search reminds me of experienced debugging - using “rules of thumb” to guide where I look first. If the app is slow, check the database queries. If users can’t log in, check authentication flow.

Genetic Algorithms are like the iterative process of code optimization - try different approaches, keep what works best, and evolve the solution over time.

These aren’t just abstract AI concepts; they’re formalized versions of thinking strategies I’ve been using intuitively for years.
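The genetic-algorithm idea in particular is compact enough to sketch. This is an illustrative toy (the function names and fitness problem are mine, not from any library): try random variations, keep what scores best, and evolve over generations.

```javascript
// Illustrative genetic-algorithm sketch: evolve candidate solutions toward
// maximizing a fitness function by selection plus random mutation.
function evolve(fitness, generations = 200, popSize = 30) {
  // Start from a random population of candidate solutions in [0, 10).
  let pop = Array.from({ length: popSize }, () => Math.random() * 10);
  for (let g = 0; g < generations; g++) {
    // Selection: rank by fitness and keep the better half.
    pop.sort((a, b) => fitness(b) - fitness(a));
    const survivors = pop.slice(0, popSize / 2);
    // Mutation: offspring are small random tweaks of the survivors.
    const offspring = survivors.map(x => x + (Math.random() - 0.5));
    pop = survivors.concat(offspring);
  }
  pop.sort((a, b) => fitness(b) - fitness(a));
  return pop[0]; // best candidate found
}

// Example: maximize -(x - 3)^2, whose true optimum is x = 3.
const best = evolve(x => -((x - 3) ** 2));
```

Notice there's no explicit rule saying "move toward 3" - the answer emerges from keeping what works, which is exactly the iterate-and-keep-the-best loop of code optimization.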

The Debugging Paradigm Change

This is where my brain started to hurt (in a good way). Traditional debugging follows a clear path:

  1. Reproduce the bug
  2. Trace through the code logic
  3. Identify the faulty line/logic
  4. Fix the explicit rule
  5. Test to confirm fix

ML debugging seems to work differently:

  1. Examine the training data quality
  2. Evaluate model performance metrics
  3. Adjust parameters or provide more examples
  4. Retrain and test
  5. Accept probabilistic rather than deterministic outcomes

The shift from “fix the logic” to “improve the learning” requires a completely different mindset. Instead of controlling every decision the code makes, we’re guiding how the system learns to make decisions.

Optimization Mindset Evolution

In traditional engineering, optimization often means:

  • Reducing algorithm complexity (O(n) vs O(n²))
  • Minimizing memory usage
  • Improving execution speed
  • Optimizing database queries

In AI, optimization seems to focus on:

  • Model accuracy and performance metrics
  • Training efficiency
  • Generalization vs. overfitting
  • Parameter tuning for better predictions

Both are about making things work better, but they’re measuring “better” in fundamentally different ways.

Real-World Application

As I’m learning these concepts, I can’t help but think about problems I encounter in web development. Some scenarios that might benefit from pattern recognition rather than explicit rules:

  • User behavior prediction: Instead of hardcoding business rules about user preferences, let the system learn from interaction patterns
  • Error categorization: Rather than manually classifying every possible error type, train a model to recognize error patterns
  • Content personalization: Instead of writing complex algorithms for what content to show, learn from user engagement patterns

The key insight is recognizing when you’re fighting complexity with increasingly complicated rule systems - that might be a signal that pattern recognition could work better.
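The error-categorization idea can be sketched in a few lines. This is a deliberately naive word-counting classifier over made-up training data - real systems would use a proper model - but it shows the shape of "learn the categories from labeled examples" instead of "hand-write a rule per error type":

```javascript
// Hypothetical labeled examples - in practice these would come from real logs.
const trainingData = [
  { text: 'connection timed out while querying db', label: 'database' },
  { text: 'query failed: deadlock detected', label: 'database' },
  { text: 'invalid password for user', label: 'auth' },
  { text: 'token expired, login again', label: 'auth' },
];

// "Training": build word counts per label from the examples.
const counts = {};
for (const { text, label } of trainingData) {
  counts[label] = counts[label] || {};
  for (const word of text.toLowerCase().split(/\W+/)) {
    counts[label][word] = (counts[label][word] || 0) + 1;
  }
}

// "Prediction": pick the label whose learned vocabulary best matches the message.
function categorize(message) {
  let best = null, bestScore = -1;
  for (const label of Object.keys(counts)) {
    let score = 0;
    for (const word of message.toLowerCase().split(/\W+/)) {
      score += counts[label][word] || 0;
    }
    if (score > bestScore) { bestScore = score; best = label; }
  }
  return best;
}

console.log(categorize('user login failed: password rejected')); // → 'auth'
```

A new error phrasing nobody anticipated still lands in a sensible category, because the classification comes from patterns in the data rather than from an if/else chain someone has to maintain.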

The Hybrid Future

What’s becoming clear is that this isn’t an either/or situation. The future likely involves combining traditional engineering with AI approaches strategically.

Use explicit rules when:

  • Logic is well-defined and unlikely to change
  • Decisions need to be explainable and auditable
  • Edge cases are manageable
  • Deterministic behavior is required

Consider AI approaches when:

  • Patterns exist but are too complex to code explicitly
  • Large amounts of example data are available
  • Requirements evolve frequently
  • Perfect accuracy isn’t required, but good-enough predictions are valuable
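The two lists above can live in the same function. Here's a sketch under assumed names (a hypothetical ticket router, with a stub standing in for a trained model): explicit rules where behavior must be deterministic and auditable, a learned fallback for the fuzzy rest.

```javascript
// Hybrid approach: hard rules for decisions that must be exact and explainable,
// a learned model for everything else where "good enough" is acceptable.
function routeSupportTicket(ticket, model) {
  const text = ticket.text.toLowerCase();
  // Explicit rules: billing and legal routing must be deterministic and auditable.
  if (text.includes('refund')) return 'billing';
  if (text.includes('gdpr')) return 'legal';
  // Everything else: a good-enough prediction from the learned classifier.
  return model.predict(text);
}

// Stand-in model for illustration; a real one would be trained on past tickets.
const stubModel = { predict: () => 'general' };
console.log(routeSupportTicket({ text: 'app crashes on login' }, stubModel));
```

The design choice is the interesting part: the rule branch is testable line by line, while the model branch is evaluated statistically - each piece debugged the way its paradigm demands.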

The Journey Continues

Two days into this AI learning journey, I’m realizing this isn’t just about adding new tools to my toolkit - it’s about fundamentally expanding how I approach problem-solving.

The engineer in me appreciates the systematic nature of these approaches. The patterns and algorithms are logical, even if the outcomes are probabilistic. It’s not magic; it’s a different kind of engineering.

Tomorrow I’m diving into Machine Learning fundamentals. I suspect my understanding of what “training” a model actually means is about to get a lot more concrete.

For fellow engineers starting this journey: embrace the discomfort of probabilistic thinking. Our systematic, logical approach to problems is an asset here - we just need to apply it to a new category of solutions.


This is part of my public AI learning journey. Follow along as I document the discoveries, struggles, and breakthroughs of an engineer learning AI from scratch.