Day 17 of 133
Regularization (L1, L2, ElasticNet) + DSA Stack finish
Sparsity geometry, Bayesian priors, why ElasticNet exists.
DSA · NeetCode Stack
- Car Fleet
Interview questions to prep
- Why a stack here — what LIFO property does the problem exploit?
- If this uses a monotonic stack, state the monotonic invariant and how it's restored on each push.
- Walk through complexity: each element is pushed and popped at most once, so the total work is O(n).
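To rehearse the answers above, a minimal sketch of the stack solution (function name `car_fleet` is mine): sort cars by position closest to target first; each car's arrival time is `(target - pos) / speed`, and a car merges into the fleet ahead iff it would arrive no later than that fleet's leader.

```python
def car_fleet(target, position, speed):
    """Count car fleets: cars that catch up to a slower car ahead merge."""
    # Sort by starting position, closest to target first.
    cars = sorted(zip(position, speed), reverse=True)
    stack = []  # arrival times of distinct fleet leaders (increasing)
    for pos, spd in cars:
        time = (target - pos) / spd
        # A strictly later arrival means this car can't catch the fleet
        # ahead, so it leads a new fleet; otherwise it merges (no push).
        if not stack or time > stack[-1]:
            stack.append(time)
    return len(stack)
```

Each car is pushed at most once and there are no pops, so the loop is O(n) after the O(n log n) sort, matching the complexity point above.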
- Largest Rectangle in Histogram
Interview questions to prep
- Walk through the monotonic stack — what does popping a bar tell you about its rectangle?
- How does this generalize to 'maximal rectangle in a binary matrix'?
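A minimal sketch for walking through the monotonic-stack answer (function name `largest_rectangle_area` is mine): the stack holds bars of increasing height; popping a bar means the current bar is the first one to its right that is shorter, so the popped bar's maximal rectangle is fully determined.

```python
def largest_rectangle_area(heights):
    """Largest rectangle under a histogram via a monotonic stack."""
    stack = []  # (start_index, height), heights strictly increasing
    best = 0
    # Sentinel 0 at the end flushes every remaining bar off the stack.
    for i, h in enumerate(heights + [0]):
        start = i
        while stack and stack[-1][1] > h:
            idx, height = stack.pop()
            # Popped bar spans from idx up to i - 1: width = i - idx.
            best = max(best, height * (i - idx))
            # The current bar can extend left to where the popped bar began.
            start = idx
        stack.append((start, h))
    return best
```

For the matrix generalization: treat each row as a histogram of consecutive 1s above it and run this routine per row.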
ML · Regularization (L1, L2, ElasticNet)
Interview questions to prep
- Why does L1 produce sparse solutions while L2 doesn't? Show the geometric picture.
- When do you use ElasticNet over L1 or L2 alone?
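A complementary algebraic view of the sparsity question, as a minimal sketch (function names and the proximal-operator framing are mine): the proximal operator of the L1 penalty soft-thresholds weights to exactly zero, L2 only shrinks them toward zero, and ElasticNet composes the two.

```python
import numpy as np

def prox_l1(w, lam):
    # Soft-thresholding: prox of lam * ||w||_1.
    # Small weights land exactly at zero -> sparsity.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def prox_l2(w, lam):
    # Prox of lam * ||w||_2^2: pure shrinkage, never exactly zero.
    return w / (1.0 + 2.0 * lam)

def prox_elastic_net(w, l1, l2):
    # ElasticNet prox: threshold (sparsity) then shrink (stability
    # under correlated features) -- why ElasticNet exists in one line.
    return prox_l1(w, l1) / (1.0 + 2.0 * l2)
```

This mirrors the geometric picture: the L1 ball's corners sit on the axes, so the constrained optimum lands there; the L2 ball is smooth, so it doesn't.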
ML · Early stopping as regularization
Interview questions to prep
- Why is early stopping equivalent to L2 regularization in some cases?
- How would you choose the early-stopping patience and what happens when it's too small or too large?
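A minimal sketch of the patience mechanic behind the second question (class name and API are mine, loosely modeled on common callback designs): too-small patience halts on validation noise; too-large patience wastes epochs past the best checkpoint.

```python
class EarlyStopping:
    """Stop training after `patience` epochs without improvement."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta      # minimum improvement that counts
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # Returns True when training should stop.
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

The L2 connection: for linear models trained with gradient descent from zero init, stopping after t steps bounds how far weights can travel, which acts like a ridge penalty with strength decreasing in t.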
ML · Bayesian priors and regularization
Interview questions to prep
- Show that L2 regularization corresponds to a Gaussian prior on weights and L1 to a Laplace prior.
- When does the Bayesian framing actually change a modeling decision in practice?
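A compact MAP derivation for the prior question above, assuming Gaussian observation noise with variance \(\sigma^2\):

```latex
\hat{w}_{\mathrm{MAP}}
  = \arg\max_w \, \log p(y \mid X, w) + \log p(w)
  = \arg\min_w \, \frac{1}{2\sigma^2}\lVert y - Xw \rVert_2^2 - \log p(w)

% Gaussian prior  p(w) \propto e^{-\lVert w \rVert_2^2 / 2\tau^2}
%   \Rightarrow -\log p(w) = \tfrac{1}{2\tau^2}\lVert w \rVert_2^2 + C
%   \Rightarrow \text{ridge (L2) with } \lambda = \sigma^2 / \tau^2
% Laplace prior  p(w) \propto e^{-\lVert w \rVert_1 / b}
%   \Rightarrow -\log p(w) = \tfrac{1}{b}\lVert w \rVert_1 + C
%   \Rightarrow \text{lasso (L1) with } \lambda = 2\sigma^2 / b
```

In practice this framing changes decisions when you have real prior scale information: it tells you how \(\lambda\) should move with noise level \(\sigma^2\) and prior width \(\tau^2\) or \(b\).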