Day 13 of 133
Bias-variance, double descent, learning curves
The framing every model-debugging conversation falls back on.
DSA · NeetCode Bit Manipulation
- Sum of Two Integers
Interview questions to prep
- Walk me through the bit trick used here, bit by bit on a small input (see the sketch after this list).
- Why XOR / AND / shift specifically — what property of that operation does the problem exploit?
- What's the complexity in terms of bits (often O(32) → O(1)), and where could that break for big-int?
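A minimal Python sketch of the add-without-`+` trick these questions are probing (not necessarily any particular editorial's code): XOR produces the sum bits without carries, AND plus a left shift produces the carries, and a 32-bit mask emulates fixed-width overflow on top of Python's unbounded ints.

```python
def get_sum(a: int, b: int) -> int:
    MASK = 0xFFFFFFFF       # keep values in 32 bits despite Python's big ints
    INT_MAX = 0x7FFFFFFF
    while b != 0:
        # XOR = sum without carries; (AND << 1) = the carries, shifted into place
        a, b = (a ^ b) & MASK, ((a & b) << 1) & MASK
    # If the sign bit is set, map the 32-bit pattern back to a negative Python int
    return a if a <= INT_MAX else ~(a ^ MASK)
```

For example, `get_sum(-1, 1) == 0`. The carry can shift at most 32 times before the mask zeroes it, which is where the O(32) → O(1) answer comes from; the hard-coded mask is also exactly what breaks if the inputs are true big ints.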
- Reverse Integer
Interview questions to prep
- Walk me through the digit-reversal loop on a small input, including a negative one (see the sketch after this list).
- How do you detect 32-bit overflow before the multiply-add, without widening to 64 bits?
- What's the complexity in digits (O(log₁₀ |x|), effectively O(1) for 32-bit inputs), and what changes in a language with arbitrary-precision ints?
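A minimal sketch of one common approach (digit peeling with an overflow guard; the names are my own, not from a specific editorial):

```python
def reverse(x: int) -> int:
    INT_MAX = 2**31 - 1     # 2147483647; valid inputs fit in signed 32 bits
    sign = -1 if x < 0 else 1
    x = abs(x)
    result = 0
    while x:
        digit = x % 10
        # Guard before the multiply-add: would result * 10 + digit overflow?
        if result > (INT_MAX - digit) // 10:
            return 0        # problem convention: return 0 on overflow
        result = result * 10 + digit
        x //= 10
    return sign * result
```

Checking the absolute value against INT_MAX is enough here: the one input whose magnitude exceeds it, -2³¹, reverses to 8463847412, which trips the guard anyway.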
ML · Bias-variance trade-off
Interview questions to prep
- Decompose expected squared error into bias², variance, and irreducible noise (see the decomposition after this list).
- Why does adding more training data reduce variance but not bias?
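For the first question, the standard decomposition, assuming $y = f(x) + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$, $\mathrm{Var}(\varepsilon) = \sigma^2$, and $D$ the random training set:

$$
\mathbb{E}_{D,\varepsilon}\!\left[\left(y - \hat f_D(x)\right)^2\right]
= \underbrace{\left(\mathbb{E}_D[\hat f_D(x)] - f(x)\right)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_D\!\left[\left(\hat f_D(x) - \mathbb{E}_D[\hat f_D(x)]\right)^2\right]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{irreducible noise}}
$$

More data shrinks the variance term, since the fit fluctuates less across training sets, but leaves bias untouched: $\mathbb{E}_D[\hat f_D(x)]$ still converges to whatever the model class can represent.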
ML · Double descent
Interview questions to prep
- Explain the double-descent phenomenon. How does it overturn classical bias-variance intuition? (See the simulation sketch after this list.)
- Why do over-parameterized models often generalize well in deep learning?
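A minimal simulation sketch of double descent using minimum-norm least squares on Legendre features (the function, noise level, and feature counts are my choices for illustration): test error typically spikes near the interpolation threshold, where the number of features matches the number of training points, then falls again as the model keeps growing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train = 20

def f(x):                     # ground-truth function
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(-1, 1, n_train)
y_train = f(x_train) + rng.normal(0, 0.3, n_train)
x_test = np.linspace(-1, 1, 500)

for p in [5, 10, 15, 20, 25, 40, 100]:    # number of Legendre features
    # Legendre features keep the design matrix well-conditioned on [-1, 1]
    Phi_tr = np.polynomial.legendre.legvander(x_train, p - 1)
    Phi_te = np.polynomial.legendre.legvander(x_test, p - 1)
    # lstsq returns the minimum-norm solution once p > n_train
    w, *_ = np.linalg.lstsq(Phi_tr, y_train, rcond=None)
    mse = np.mean((Phi_te @ w - f(x_test)) ** 2)
    print(f"p={p:4d}  test MSE = {mse:10.3f}")
```

The spike near p ≈ n_train is the classical overfitting peak; the second descent past it is what the interview question is probing, driven here by the implicit regularization of the minimum-norm solution.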
ML · Learning curves
Interview questions to prep
- How do you read a learning curve to decide between more data, regularization, or a bigger model? (See the sketch after this list.)
- What does a large gap between training and validation curves usually mean — and what shrinks it?
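A minimal sketch using scikit-learn's `learning_curve` (the dataset and model here are placeholders, not a recommendation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy",
)

for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    # Read the gap: large and persistent -> variance (more data or
    # regularization helps); both curves low and converged -> bias
    # (a bigger model or better features helps).
    print(f"n={n:5d}  train={tr:.3f}  val={va:.3f}")
```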