Day 13 of 133

Bias-variance, double descent, learning curves

The framing every model-debugging conversation falls back on.

DSA · NeetCode Bit Manipulation

  • Sum of Two Integers · DSA · Bit Manipulation

    Interview questions to prep

    1. Walk me through the bit trick used here, bit by bit on a small input (see the Python sketch after this list).
    2. Why XOR / AND / shift specifically — what property of that operation does the problem exploit?
    3. What's the complexity in terms of bits (often O(32) → O(1)), and where could that break for big-int?
  • Reverse Integer · DSA · Bit Manipulation

    Interview questions to prep

    1. Walk me through the bit trick used here, bit by bit on a small input (see the second sketch after this list).
    2. Why XOR / AND / shift specifically — what property of that operation does the problem exploit?
    3. What's the complexity in terms of bits (often O(32) → O(1)), and where could that break for big-int?
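
For Sum of Two Integers, a minimal Python sketch of the standard XOR/AND-carry loop. The function name and the signed 32-bit contract follow the usual LeetCode setup; the mask is needed because Python integers are unbounded:

```python
def get_sum(a: int, b: int) -> int:
    """Add two integers without using + or -, via XOR and AND."""
    MASK = 0xFFFFFFFF              # keep intermediate values in 32 bits
    a, b = a & MASK, b & MASK
    while b:
        # XOR is per-bit addition mod 2 (sum without carry);
        # AND picks out the positions where both bits are 1,
        # i.e. the carries, which shift one place left.
        a, b = (a ^ b) & MASK, ((a & b) << 1) & MASK
    # Reinterpret the 32-bit pattern as a signed integer.
    return a if a <= 0x7FFFFFFF else ~(a ^ MASK)
```

Walkthrough for question 1: with a = 2 (0b10), b = 3 (0b11), XOR gives 0b001 and the carry is 0b100; the next pass XORs those into 0b101 = 5 with no carry left. Each pass pushes the carry at least one bit left, so the loop runs at most 32 times: O(32) = O(1), which answers question 3. Arbitrary-precision integers are exactly where this breaks: without a fixed width, the carry from a negative operand never shifts out, hence the mask.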
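Reverse Integer is less about XOR and more about the fixed-width overflow contract, which is its bit-manipulation angle. A sketch under the usual contract (return 0 if the reversed value leaves signed 32-bit range); names here are my own:

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def reverse_integer(x: int) -> int:
    """Reverse the decimal digits of x; return 0 on signed 32-bit overflow."""
    sign = -1 if x < 0 else 1
    x = abs(x)
    rev = 0
    while x:
        rev = rev * 10 + x % 10    # peel off the last digit and append it
        x //= 10
    rev *= sign
    # Python ints never overflow, so the range check can happen at the end;
    # in C or Java you must check BEFORE each multiply-add instead,
    # e.g. reject when rev > INT_MAX // 10 before multiplying by 10.
    return rev if INT_MIN <= rev <= INT_MAX else 0
```

That pre-multiply check is the answer interviewers usually want for question 3: the O(1) bound assumes a fixed word size, and the whole overflow question disappears with big-ints.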

ML · Bias-variance trade-off

  • Bias-variance decomposition · Interview questions to prep (derivation sketch after this list)

    1. Decompose expected squared error into bias², variance, and irreducible noise.
    2. Why does adding more training data reduce variance but not bias?
  • Double descent · Interview questions to prep (simulation sketch after this list)

    1. Explain the double-descent phenomenon. How does it overturn classical bias-variance intuition?
    2. Why do over-parameterized models often generalize well in deep learning?
  • Learning curves · Interview questions to prep (scikit-learn sketch after this list)

    1. How do you read a learning curve to decide between more data, regularization, or a bigger model?
    2. What does a large gap between training and validation curves usually mean — and what shrinks it?
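
For the decomposition question, the standard derivation, assuming y = f(x) + ε with E[ε] = 0, Var(ε) = σ², and f̂ trained on a random dataset D drawn independently of ε:

```latex
\begin{align*}
\mathbb{E}_{D,\varepsilon}\!\left[(y - \hat f(x))^2\right]
  &= \mathbb{E}_{D}\!\left[(f(x) - \hat f(x))^2\right] + \sigma^2 \\
  &= \underbrace{\left(f(x) - \mathbb{E}_D[\hat f(x)]\right)^2}_{\text{bias}^2}
   + \underbrace{\mathbb{E}_D\!\left[\left(\hat f(x) - \mathbb{E}_D[\hat f(x)]\right)^2\right]}_{\text{variance}}
   + \underbrace{\sigma^2}_{\text{irreducible noise}}
\end{align*}
```

Both cross terms vanish: the first because E[ε] = 0, the second because f̂(x) − E_D[f̂(x)] has zero mean over D. That also answers the second question: more data concentrates f̂ around its mean E_D[f̂(x)], so variance shrinks, but for a fixed hypothesis class that mean need not move any closer to f(x), so bias stays.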
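For double descent, a minimal simulation sketch (all names and constants here are my own illustration): minimum-norm least squares on random ReLU features, sweeping the feature count p through the interpolation threshold p ≈ n. Classically, test error should only worsen once train error hits zero; here it typically spikes near p ≈ n and then descends again as p grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 100, 2000

x_tr = rng.uniform(-1, 1, (n_train, 1))
x_te = rng.uniform(-1, 1, (n_test, 1))
target = lambda x: np.sin(2 * np.pi * x[:, 0])
y_tr = target(x_tr) + 0.1 * rng.standard_normal(n_train)
y_te = target(x_te)

for p in [10, 50, 90, 100, 110, 200, 1000, 5000]:
    # Random ReLU feature map: phi(x) = max(0, xW + b)
    W = rng.standard_normal((1, p))
    b = rng.standard_normal(p)
    phi_tr = np.maximum(0.0, x_tr @ W + b)
    phi_te = np.maximum(0.0, x_te @ W + b)
    # lstsq returns the minimum-norm interpolating solution once p > n_train
    w, *_ = np.linalg.lstsq(phi_tr, y_tr, rcond=None)
    print(f"p={p:5d}  test MSE={np.mean((phi_te @ w - y_te) ** 2):.4f}")
```

The second descent is one common account for question 2: among the many interpolating solutions in the over-parameterized regime, the minimum-norm one gets smoother as p grows, an implicit regularization the classical U-curve never modeled.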
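And for reading learning curves, a sketch using scikit-learn's learning_curve; the dataset and model are placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy",
)
for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    # Large train/val gap -> variance-limited: more data or regularization.
    # Both curves plateauing at a poor score -> bias-limited: bigger model.
    print(f"n={n:5d}  train={tr:.3f}  val={va:.3f}  gap={tr - va:.3f}")
```

Rule of thumb for question 1: if the validation curve is still climbing with n, more data is the cheap fix; if the gap stays wide at every n, regularize; if both curves have converged at a poor score, the model itself is the bottleneck.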
