Day 16 of 133

Logistic regression + classification metrics + DSA Stack

Cross-entropy, ROC vs PR, calibration, threshold selection.

DSA · NeetCode Stack

  • Interview questions to prep

    1. Why a stack here — what LIFO property does the problem exploit?
    2. If this uses a monotonic stack, state the monotonic invariant and how it's restored on each push.
    3. Walk through the complexity: why is each element pushed and popped at most once, making the total work O(n)?
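To have a concrete answer ready for all three questions, here is a minimal monotonic-stack sketch (a next-greater-element routine, my own illustrative example rather than a specific NeetCode problem). The stack holds indices whose values are strictly decreasing; each push restores that invariant by popping smaller values first.

```python
def next_greater(nums):
    # stack holds indices not yet resolved; values at those indices
    # are strictly decreasing from bottom to top (the invariant)
    stack, res = [], [-1] * len(nums)
    for i, x in enumerate(nums):
        # restore the invariant: pop every index whose value is < x;
        # x is exactly the "next greater element" for each popped index
        while stack and nums[stack[-1]] < x:
            res[stack.pop()] = x
        stack.append(i)
    return res
```

Each index is pushed once and popped at most once, so total work is O(n) even though the loop nesting looks quadratic.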
  • Daily Temperatures · DSA · Stack

    Interview questions to prep

    1. What's the monotonic-stack invariant, and how does each pop give you the answer?
    2. Why is the total work O(n) when the inner loop looks O(n²)?
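A compact reference solution to rehearse both questions against (standard monotonic-stack approach; variable names are my own):

```python
def daily_temperatures(temps):
    # answer[i] = number of days until a strictly warmer temperature (0 if none)
    answer = [0] * len(temps)
    stack = []  # indices with strictly decreasing temperatures (the invariant)
    for i, t in enumerate(temps):
        # each pop resolves one earlier day: today is its first warmer day
        while stack and temps[stack[-1]] < t:
            j = stack.pop()
            answer[j] = i - j
        stack.append(i)
    return answer
```

The inner while loop looks O(n²), but every index enters the stack once and leaves at most once, so the amortized total is O(n).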

ML · Logistic regression & classification

  • Sigmoid, log-odds, and the LR loss · Traditional ML · StatQuest

    Interview questions to prep

    1. Why do we use the log-loss (cross-entropy) instead of MSE for logistic regression?
    2. Derive the gradient of binary cross-entropy w.r.t. the weights.
  • Interview questions to prep

    1. When is ROC-AUC misleading? When should you use PR-AUC instead?
    2. What is a calibration plot and why does it matter for downstream decisions?
    3. How do you choose a classification threshold for a fraud-detection system?
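For question 3, one defensible answer is cost-based threshold selection: in fraud detection a missed fraud (false negative) typically costs far more than a false alarm, so sweep candidate thresholds and minimize expected cost. A sketch with made-up cost values (the 25:1 ratio is illustrative, not from the source):

```python
import numpy as np

def best_threshold(y_true, scores, cost_fp=1.0, cost_fn=25.0):
    # fraud-style asymmetry: a missed fraud (FN) costs far more than
    # a false alarm (FP), so the optimal threshold shifts downward
    thresholds = np.unique(scores)
    costs = []
    for t in thresholds:
        pred = scores >= t
        fp = np.sum(pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        costs.append(cost_fp * fp + cost_fn * fn)
    return thresholds[int(np.argmin(costs))]
```

This also connects to calibration: the cost argument only yields a principled threshold if the scores are calibrated probabilities, and to PR-AUC: with heavy class imbalance the many easy true negatives inflate ROC-AUC while precision-recall stays honest.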
  • Interview questions to prep

    1. Compare one-vs-rest vs softmax for multi-class classification.
    2. What's the difference between multi-class and multi-label, and how does the loss change?
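The output-layer distinction behind question 2 can be shown in a few lines (my own sketch): multi-class uses a softmax, so class probabilities are mutually exclusive and sum to 1, with a single categorical cross-entropy loss; multi-label uses an independent sigmoid per class, and the loss becomes a sum of per-class binary cross-entropies.

```python
import numpy as np

def softmax(z):
    # multi-class: mutually exclusive labels, probabilities sum to 1
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def sigmoid(z):
    # multi-label: each class is an independent yes/no decision
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([2.0, 1.0, -1.0])
p_multiclass = softmax(logits)   # sums to 1; pick the argmax
p_multilabel = sigmoid(logits)   # each entry thresholded independently
```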
  • Interview questions to prep

    1. Why is logistic regression called regression if it solves classification?
    2. How do you interpret a logistic-regression coefficient as an odds ratio?
    3. When do you use sigmoid outputs vs softmax outputs, and how does that change for multi-label problems?
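For question 2, the key identity is that if log-odds are linear in x, then a one-unit increase in a feature multiplies the odds by exp(beta), holding the rest fixed. A tiny check with hypothetical coefficients (beta0, beta1 are made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def odds(p):
    return p / (1.0 - p)

# hypothetical intercept and slope of a fitted logistic regression
beta0, beta1 = -1.0, 0.7
p0 = sigmoid(beta0)           # probability at x = 0
p1 = sigmoid(beta0 + beta1)   # probability at x = 1
# the odds ratio for a one-unit increase in x is exactly exp(beta1)
assert math.isclose(odds(p1) / odds(p0), math.exp(beta1))
```

This works because odds(sigmoid(z)) = e^z, so the intercept cancels in the ratio; the same holds at any baseline x, not just 0.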

References & further reading