Day 33 of 133

Trad-ML breadth review (regression, trees, ensembles, clustering)

60-min self-quiz on weeks 3-4. Note the questions you fumble.

DSA · NeetCode Stack

  • Daily Temperatures

    Interview questions to prep

    1. What's the monotonic-stack invariant, and how does each pop give you the answer?
    2. Why is the total work O(n) when the inner loop looks O(n²)?
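A minimal sketch of the monotonic-stack answer to both questions: the stack holds indices of days in strictly decreasing temperature order, and each index is pushed and popped at most once, so total work is O(n) despite the nested-looking while loop.

```python
def daily_temperatures(temps):
    # Invariant: stack holds indices whose temperatures are strictly
    # decreasing; a warmer day pops them and resolves their answers.
    answer = [0] * len(temps)
    stack = []  # indices of days still waiting for a warmer day
    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            j = stack.pop()
            answer[j] = i - j  # day j waited i - j days for warmth
        stack.append(i)
    return answer
```

Each pop gives one final answer, and since no index is pushed twice, the total number of pops across the whole run is at most n.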

ML · Linear regression

  • Interview questions to prep

    1. Derive the OLS solution θ = (XᵀX)⁻¹Xᵀy. When is XᵀX not invertible?
    2. Show that OLS = MLE under Gaussian noise.
    3. Walk through the four classical assumptions of linear regression and how to diagnose violations.
    4. What's heteroscedasticity, and how do you fix it?
    5. Compare MSE, MAE, and Huber loss — what do you use when outliers matter?
    6. Why is Huber loss both differentiable and robust?
    7. Implement a vectorized linear regression forward pass for X @ w + b and state the expected tensor shapes.
    8. Implement one gradient-descent training step for linear regression and explain loss vs cost vs prediction error.
    9. Does sklearn's LinearRegression use gradient descent or ordinary least squares? Why does that matter in an interview?
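A numerical sketch of the OLS derivation question, on synthetic data of my own (not from the plan): solve the normal equations θ = (XᵀX)⁻¹Xᵀy and cross-check against `np.linalg.lstsq`. XᵀX fails to be invertible exactly when the columns of X are linearly dependent (duplicated features, a full set of one-hot dummies plus a bias column, or more features than rows).

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.normal(size=(100, 2))]  # bias column + 2 features
true_theta = np.array([1.0, 2.0, -3.0])
y = X @ true_theta + 0.1 * rng.normal(size=100)  # Gaussian noise, so OLS = MLE

# Normal equations: solve (X^T X) theta = X^T y rather than forming an inverse.
theta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically preferable route (QR/SVD based), same answer on full-rank X.
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Under Gaussian noise, maximizing the likelihood means minimizing the sum of squared residuals, which is exactly the OLS objective, so the two estimates coincide.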
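For the loss-comparison question, a small sketch of Huber loss (delta value is illustrative): quadratic near zero, so it is differentiable at 0 where MAE is not, and linear in the tails, so large outliers contribute only linearly where MSE would contribute quadratically.

```python
import numpy as np

def huber(resid, delta=1.0):
    # Quadratic inside |resid| <= delta, linear outside; the two pieces
    # and their derivatives match at |resid| = delta.
    small = np.abs(resid) <= delta
    return np.where(small,
                    0.5 * resid ** 2,
                    delta * (np.abs(resid) - 0.5 * delta))
```

The gradient is clipped to ±delta in the tails, which is the precise sense in which Huber is robust while staying smooth enough for gradient-based optimizers.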
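The implementation questions above can be sketched as one vectorized forward pass plus one gradient-descent step on the MSE cost. Names and shapes here are my assumptions: X is (n, d), w is (d,), b is a scalar. "Loss" is the per-example squared error, "cost" is its mean over the batch, and "prediction error" is the residual ŷ − y. For the last question: sklearn's LinearRegression solves least squares directly (an SVD-based lstsq solve), not gradient descent; its SGDRegressor is the gradient-descent counterpart.

```python
import numpy as np

def gd_step(X, y, w, b, lr=0.1):
    # One gradient-descent update on the MSE cost (1/n) * sum((y_hat - y)^2).
    n = X.shape[0]
    y_hat = X @ w + b                # forward pass, shape (n,)
    resid = y_hat - y                # prediction error, shape (n,)
    grad_w = (2 / n) * (X.T @ resid) # d(cost)/dw, shape (d,)
    grad_b = (2 / n) * resid.sum()   # d(cost)/db, scalar
    return w - lr * grad_w, b - lr * grad_b
```

Repeating this step on noiseless synthetic data recovers the true weights, which is a quick sanity check to run before an interview.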

ML · Gradient boosting (XGBoost, LightGBM, CatBoost)

References & further reading