Day 15 of 133
Linear regression deep dive + DSA Stack kickoff
OLS derivation, MLE under Gaussian noise, assumptions, loss choice.
DSA · NeetCode Stack
- Valid Parentheses (DSA · Stack)
Interview questions to prep
- Walk through the stack invariant on a small mixed input (see the sketch after this list).
- What if the input might be malformed mid-stream — how do you fail fast?
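A minimal sketch of that invariant, assuming the usual `isValid(s)`-style signature (the function name here is illustrative): every opener is pushed, every closer must match the most recent unmatched opener on top of the stack, and the stack must be empty at the end. A mismatch mid-stream is detected immediately, which is the fail-fast behavior the second question asks about.

```python
def is_valid(s: str) -> bool:
    """Return True iff every bracket in s is closed in the correct order."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []                          # invariant: unmatched openers, most recent on top
    for ch in s:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            # fail fast: a closer with no matching opener on top ends the scan
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                    # leftover openers also make the string invalid


assert is_valid("([]{})") is True
assert is_valid("([)]") is False
```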
- Min Stack (DSA · Stack)
Interview questions to prep
- How do you support getMin in O(1) — what's the auxiliary stack trick? (Sketch below.)
- Can you do it with a single stack of pairs, and what's the trade-off?
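A minimal sketch of the auxiliary-stack trick, assuming the LeetCode-style MinStack API: a second stack mirrors the running minimum in lockstep, so getMin is a single peek. The single-stack variant stores (value, min-so-far) pairs instead; same asymptotics, but roughly double the per-element storage in one list.

```python
class MinStack:
    """Stack with O(1) push, pop, top, and getMin via a parallel min stack."""

    def __init__(self) -> None:
        self._stack = []   # all values
        self._mins = []    # _mins[-1] is always the minimum of _stack

    def push(self, val: int) -> None:
        self._stack.append(val)
        # push the smaller of val and the previous minimum
        self._mins.append(val if not self._mins else min(val, self._mins[-1]))

    def pop(self) -> None:
        self._stack.pop()
        self._mins.pop()   # keep both stacks in lockstep

    def top(self) -> int:
        return self._stack[-1]

    def getMin(self) -> int:
        return self._mins[-1]
```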
- Evaluate Reverse Polish Notation (DSA · Stack)
Interview questions to prep
- Why a stack here — what LIFO property does the problem exploit? (Sketch after this list.)
- If this uses a monotonic stack, state the monotonic invariant and how it's restored on each push.
- Walk through complexity: each element is pushed and popped at most once, so the total work is O(n).
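A minimal sketch of RPN evaluation, assuming the LeetCode token format (integer strings plus the four operators): operands are pushed, and each operator pops the two most recent operands — exactly the LIFO property the problem exploits. Note this problem uses a plain operand stack rather than a monotonic one, and each token is pushed and popped at most once, giving the O(n) bound above.

```python
def eval_rpn(tokens: list[str]) -> int:
    """Evaluate a Reverse Polish Notation expression with +, -, *, /."""
    stack = []                                  # operands only
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: int(a / b),           # truncate toward zero, per the problem
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()                     # right operand is on top (LIFO)
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack[0]


assert eval_rpn(["2", "1", "+", "3", "*"]) == 9
```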
ML · Linear regression
Interview questions to prep
- Derive the OLS solution θ = (XᵀX)⁻¹Xᵀy. When is XᵀX not invertible? (See the sketch after this list.)
- Show that OLS = MLE under Gaussian noise.
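A minimal numpy sketch of the normal-equations solution (variable names are illustrative). θ = (XᵀX)⁻¹Xᵀy minimizes the sum of squared residuals, and because maximizing the i.i.d. Gaussian log-likelihood differs from minimizing squared error only by constants, the same θ is the MLE under Gaussian noise. XᵀX is not invertible when X has linearly dependent columns (perfect collinearity, or fewer samples than features); `lstsq` handles that case via the pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])  # prepend a bias column
theta_true = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ theta_true + rng.normal(scale=0.1, size=n)          # Gaussian noise

# Normal equations: theta = (X^T X)^{-1} X^T y (valid only when X^T X is invertible)
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically safer, and still defined when X^T X is singular
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(theta_ne, theta_lstsq, atol=1e-6)
```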
Interview questions to prep
- Walk through the four classical assumptions of linear regression and how to diagnose violations.
- What's heteroscedasticity and how do you fix it?
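A short diagnostic sketch, assuming statsmodels is available and using synthetic data whose noise grows with x: eyeball a residuals-vs-fitted plot (a funnel shape suggests heteroscedasticity), back it up with a Breusch–Pagan test, and fix it by transforming y, using weighted least squares, or reporting heteroscedasticity-consistent standard errors.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=300)
# variance grows with x -> heteroscedastic by construction
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 * x, size=300)

X = sm.add_constant(x)                  # add an intercept column
fit = sm.OLS(y, X).fit()

# Breusch-Pagan: a small p-value is evidence of heteroscedasticity
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(f"Breusch-Pagan p-value: {lm_pvalue:.4f}")

# One common fix: heteroscedasticity-consistent (HC) standard errors
robust_fit = fit.get_robustcov_results(cov_type="HC3")
```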
Interview questions to prep
- Compare MSE, MAE, and Huber loss — what do you use when outliers matter? (Sketch after this list.)
- Why is Huber loss differentiable and robust at the same time?
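A small numpy sketch comparing the three losses on residuals r = y − ŷ: MSE squares residuals, so outliers dominate the objective; MAE weights every residual equally but is non-differentiable at 0; Huber is quadratic inside a threshold δ and linear outside it, which is why it is differentiable everywhere and still robust in the tails.

```python
import numpy as np

def mse(r: np.ndarray) -> float:
    return float(np.mean(r ** 2))

def mae(r: np.ndarray) -> float:
    return float(np.mean(np.abs(r)))

def huber(r: np.ndarray, delta: float = 1.0) -> float:
    # quadratic for |r| <= delta, linear beyond: smooth at 0, robust to outliers
    quad = 0.5 * r ** 2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return float(np.mean(np.where(np.abs(r) <= delta, quad, lin)))

residuals = np.array([0.1, -0.2, 0.05, 8.0])   # one large outlier
print(mse(residuals), mae(residuals), huber(residuals))
```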
Interview questions to prep
- Implement a vectorized linear regression forward pass for X @ w + b and state the expected tensor shapes (see the sketch after this list).
- Implement one gradient-descent training step for linear regression and explain loss vs cost vs prediction error.
- Does sklearn's LinearRegression use gradient descent or ordinary least squares? Why does that matter in an interview?
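A minimal numpy sketch of the forward pass and one batch gradient-descent step on MSE (names and the toy data are illustrative). Shapes: X is (n, d), w is (d,), b is a scalar, predictions are (n,). In the usual terminology, "loss" is the per-example penalty, "cost" is its average over the batch, and "prediction error" is the raw residual ŷ − y. For the sklearn question: LinearRegression solves the least-squares problem directly (a closed-form / lstsq-style solver), not by gradient descent; SGDRegressor is the iterative counterpart, which is why learning rates and convergence only come up for the latter.

```python
import numpy as np

def forward(X: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """Vectorized predictions: X is (n, d), w is (d,), result is (n,)."""
    return X @ w + b

def gd_step(X: np.ndarray, y: np.ndarray, w: np.ndarray, b: float, lr: float = 0.1):
    """One batch gradient-descent step on the cost J = (1/2n) * sum((y_hat - y)^2)."""
    n = X.shape[0]
    err = forward(X, w, b) - y          # prediction error per example, shape (n,)
    grad_w = X.T @ err / n              # dJ/dw, shape (d,)
    grad_b = err.mean()                 # dJ/db, scalar
    return w - lr * grad_w, b - lr * grad_b

# toy data: y = 3*x0 - 2*x1 + 1
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X @ np.array([3.0, -2.0]) + 1.0

w, b = np.zeros(2), 0.0
for _ in range(2000):
    w, b = gd_step(X, y, w, b)
print(w, b)                             # approaches [3, -2] and 1
```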
References & further reading
- Andrew Ng — Machine Learning Specialization (Coursera)
- StatQuest — Statistics & ML playlists (YouTube)
- scikit-learn user guide
- NeetCode roadmap (full 250)