Day 15 of 133

Linear regression deep dive + DSA Stack kickoff

OLS derivation, MLE under Gaussian noise, assumptions, loss choice.

DSA · NeetCode Stack

  • Valid Parentheses (see the sketch after this list)

    Interview questions to prep

    1. Walk through the stack invariant on a small mixed input.
    2. What if the input might be malformed mid-stream — how do you fail fast?
  • Min Stack (see the sketch after this list)

    Interview questions to prep

    1. How do you support getMin in O(1) — what's the auxiliary stack trick?
    2. Can you do it with a single stack of pairs, and what's the trade-off?
  • General stack questions (see the sketch after this list)

    Interview questions to prep

    1. Why a stack here — what LIFO property does the problem exploit?
    2. If this uses a monotonic stack, state the monotonic invariant and how it's restored on each push.
    3. Walk through the complexity argument: why is each element pushed and popped at most once, and why does that make the total work O(n)?
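
For Valid Parentheses, a minimal sketch of the stack invariant: the stack holds exactly the openers whose closers haven't arrived yet. The fail-fast policy for unexpected characters is my own addition, not part of the standard problem statement.

```python
def is_valid(s: str) -> bool:
    """Invariant: the stack holds every opener whose closer hasn't arrived,
    so a closer must match the most recently pushed opener (LIFO)."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            # Fail fast: a closer with no opener, or the wrong opener, on top.
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            # Malformed mid-stream: reject unexpected characters immediately.
            return False
    return not stack  # leftover openers mean the input is unbalanced
```

Tracing `"([]{})"` by hand is a good prep exercise: the stack peaks at `['(', '[']` and drains back to empty.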
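For Min Stack, a sketch of the auxiliary-stack trick from question 1 (method names follow the usual LeetCode interface):

```python
class MinStack:
    """Auxiliary-stack trick: mins[-1] always equals the minimum of
    everything currently in vals, so getMin is O(1)."""

    def __init__(self) -> None:
        self.vals: list[int] = []
        self.mins: list[int] = []

    def push(self, x: int) -> None:
        self.vals.append(x)
        # Record the running minimum alongside every value.
        self.mins.append(x if not self.mins else min(x, self.mins[-1]))

    def pop(self) -> None:
        self.vals.pop()
        self.mins.pop()

    def top(self) -> int:
        return self.vals[-1]

    def getMin(self) -> int:
        return self.mins[-1]
```

The single-stack answer to question 2 pushes `(value, running_min)` pairs instead: one list instead of two, at the cost of a tuple per element.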
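For the general questions, a monotonic-stack illustration; next-greater-element is my own choice of example rather than a specific NeetCode problem:

```python
def next_greater(nums: list[int]) -> list[int]:
    """Monotonic stack: indices on the stack hold values in
    non-increasing order; a larger incoming value resolves them."""
    res = [-1] * len(nums)
    stack: list[int] = []  # indices whose next-greater is still unknown
    for i, x in enumerate(nums):
        # Restore the invariant: pop every index whose value x beats.
        while stack and nums[stack[-1]] < x:
            res[stack.pop()] = x
        stack.append(i)
    return res  # each index is pushed and popped at most once -> O(n)
```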

ML · Linear regression

  • OLS derivation & MLE (see the sketches after this list)

    Interview questions to prep

    1. Derive the OLS solution θ = (XᵀX)⁻¹Xᵀy. When is XᵀX not invertible?
    2. Show that OLS = MLE under Gaussian noise.
  • Assumptions & diagnostics (see the sketch after this list)

    Interview questions to prep

    1. Walk through the four classical assumptions of linear regression and how to diagnose violations.
    2. What's heteroscedasticity and how do you fix it?
  • Loss choice (see the sketch after this list)

    Interview questions to prep

    1. Compare MSE, MAE, and Huber loss — what do you use when outliers matter?
    2. Why is Huber loss differentiable and robust at the same time?
  • Implementation (see the sketch after this list)

    Interview questions to prep

    1. Implement a vectorized linear regression forward pass for X @ w + b and state the expected tensor shapes.
    2. Implement one gradient-descent training step for linear regression and explain loss vs cost vs prediction error.
    3. Does sklearn's LinearRegression use gradient descent or ordinary least squares? Why does that matter in an interview?
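
For the OLS & MLE bullet, the equivalence in question 2 is one line of algebra: with y_i = x_iᵀθ + ε_i and i.i.d. Gaussian noise ε_i ~ N(0, σ²),

```latex
\ell(\theta) = -\frac{n}{2}\log(2\pi\sigma^2)
             - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - x_i^\top\theta\right)^2,
\qquad
\arg\max_\theta \ell(\theta) = \arg\min_\theta \sum_{i=1}^{n}\left(y_i - x_i^\top\theta\right)^2 .
```

The constants don't depend on θ, so maximizing the likelihood is exactly minimizing squared error. For question 1, a numpy sketch of the closed form; solving via `lstsq` instead of inverting sidesteps the singular-XᵀX case (collinear or duplicated columns, or more features than rows):

```python
import numpy as np

def ols_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Closed-form OLS, theta = (X^T X)^{-1} X^T y, computed via a
    least-squares solver rather than an explicit matrix inverse."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])  # intercept column
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.normal(size=100)
print(ols_fit(X, y))  # approximately [1.0, 2.0, -0.5]
```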
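For the diagnostics bullet, a hand-rolled Breusch-Pagan style check for heteroscedasticity (a sketch only; statsmodels ships a maintained version as `het_breuschpagan`):

```python
import numpy as np

def breusch_pagan_lm(X: np.ndarray, y: np.ndarray) -> float:
    """Regress squared OLS residuals on X; the LM statistic n * R^2 of
    that auxiliary regression is large when the error variance depends
    on the features, i.e. under heteroscedasticity."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_sq = (y - X @ theta) ** 2
    gamma, *_ = np.linalg.lstsq(X, resid_sq, rcond=None)
    ss_res = np.sum((resid_sq - X @ gamma) ** 2)
    ss_tot = np.sum((resid_sq - resid_sq.mean()) ** 2)
    return len(y) * (1.0 - ss_res / ss_tot)  # compare to chi^2, (p - 1) dof
```

Standard fixes when the check flags trouble: transform the target (e.g. log), switch to weighted least squares, or keep the OLS coefficients but report heteroscedasticity-robust standard errors.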
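For the loss-choice bullet, a quick numeric comparison; `delta = 1.0` is an arbitrary threshold:

```python
import numpy as np

def huber(r: np.ndarray, delta: float = 1.0) -> np.ndarray:
    """Huber loss: quadratic near zero (smooth, like MSE), linear in the
    tails (outlier-robust, like MAE); the pieces meet with matching
    value and slope at |r| = delta, so it stays differentiable."""
    return np.where(np.abs(r) <= delta,
                    0.5 * r**2,
                    delta * (np.abs(r) - 0.5 * delta))

r = np.array([0.1, 1.0, 10.0])  # residuals, one extreme outlier
print(0.5 * r**2)  # squared error: the outlier contributes 50.0
print(np.abs(r))   # absolute error: robust, but has a kink at 0
print(huber(r))    # Huber: smooth at 0, only 9.5 on the outlier
```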
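For the implementation bullet, a sketch covering questions 1 and 2; shapes and the loss/cost/error distinction live in the comments:

```python
import numpy as np

def forward(X: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """Vectorized forward pass: X is (n, d), w is (d,), returns (n,)."""
    return X @ w + b

def gd_step(X, y, w, b, lr=0.01):
    """One gradient-descent step on the MSE cost J = mean(err^2) / 2.
    Terminology: the prediction error is (pred - y) per example, the
    loss is its square per example, and the cost is the batch average."""
    n = len(y)
    err = forward(X, w, b) - y  # (n,) prediction errors
    grad_w = X.T @ err / n      # dJ/dw, shape (d,)
    grad_b = err.mean()         # dJ/db, scalar
    return w - lr * grad_w, b - lr * grad_b
```

On question 3: sklearn's `LinearRegression` solves the least-squares problem directly with an lstsq/SVD-style solver, not gradient descent; the gradient-based estimator is `SGDRegressor`. The interview point is that the closed form is exact and non-iterative but scales poorly with feature count, while gradient descent trades exactness for scalability.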
