Day 69 of 133

Observability for ML systems + DSA 2-D DP

Logs / metrics / traces; LLM-specific dimensions; SLOs and alerts.

DSA · NeetCode 2-D DP

  • Interview questions to prep

    1. State the 2-D DP: indices, recurrence, base case. What's the order of fill?
    2. Can you reduce 2-D to 1-D by reusing rows or columns? Walk through the dependency direction.
    3. Top-down with memoization vs bottom-up — which is easier to reason about, and which is faster in practice?
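A minimal sketch of points 1 and 2, using the classic Unique Paths problem as a stand-in (the notes don't name a specific problem): the 2-D table states the recurrence and fill order explicitly, and the 1-D version reuses one row because each cell depends only on the cell above and the cell to its left.

```python
def unique_paths_2d(m, n):
    # dp[r][c] = number of paths to cell (r, c) moving only right or down.
    # Base case: first row and first column are all 1.
    # Fill order: row-major, so dp[r-1][c] (top) and dp[r][c-1] (left)
    # are already computed when dp[r][c] is evaluated.
    dp = [[1] * n for _ in range(m)]
    for r in range(1, m):
        for c in range(1, n):
            dp[r][c] = dp[r - 1][c] + dp[r][c - 1]
    return dp[m - 1][n - 1]

def unique_paths_1d(m, n):
    # Row reuse: before the inner loop touches dp[c], it still holds the
    # previous row's value (the "top" dependency); dp[c-1] already holds
    # the current row's value (the "left" dependency).
    dp = [1] * n
    for _ in range(1, m):
        for c in range(1, n):
            dp[c] += dp[c - 1]
    return dp[-1]
```

Both run in O(m·n) time; the 1-D version drops space from O(m·n) to O(n). The reduction is safe only because the dependency arrows point up and left, never down or right.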

MLOps · Observability for ML

  • Logs, metrics, and traces — interview questions to prep

    1. What's the difference between logs, metrics, and traces, and what does each tell you?
    2. How would you trace a single user request through retrieval → LLM → tool calls and back?
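A toy sketch of question 2 (not a real tracing backend; `SPANS`, `span`, and `handle_request` are illustrative names): propagate one trace ID through every stage of a request so the spans can be stitched back together afterwards. A production system would use something like OpenTelemetry instead of a global list.

```python
import time
import uuid
from contextlib import contextmanager

SPANS = []  # collected spans; a real system would export these to a tracing backend

@contextmanager
def span(trace_id, name):
    # Record a timed span under a shared trace_id so the stages of one
    # request (retrieval -> LLM -> tool call) can be reassembled later.
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append({
            "trace_id": trace_id,
            "name": name,
            "duration_ms": (time.perf_counter() - start) * 1000,
        })

def handle_request(query):
    trace_id = str(uuid.uuid4())  # one id per user request, carried everywhere
    with span(trace_id, "retrieval"):
        docs = ["doc-1"]          # placeholder retrieval step
    with span(trace_id, "llm"):
        answer = f"answer({query!r}, {docs})"  # placeholder LLM call
    with span(trace_id, "tool_call"):
        pass                      # placeholder tool invocation
    return answer
```

Filtering `SPANS` by one `trace_id` reconstructs the full path of a single user request, which is exactly what the interview question asks for.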
  • LLM-specific dimensions — interview questions to prep

    1. What dimensions do you slice LLM observability by (model, prompt, user, tool)?
    2. How would you detect prompt regressions on production traffic without leaking PII to humans?
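A minimal sketch of slicing by dimensions (question 1); the event schema and `slice_metrics` helper are assumptions for illustration. Grouping raw inference events by `(model, prompt_version)` makes a regression in one prompt version visible even when the aggregate error rate looks flat.

```python
from collections import defaultdict

def slice_metrics(events, dims=("model", "prompt_version")):
    # Group raw inference events by the chosen dimensions and compute a
    # per-slice error rate; regressions surface per (model, prompt) pair.
    groups = defaultdict(lambda: {"n": 0, "errors": 0})
    for e in events:
        key = tuple(e[d] for d in dims)
        groups[key]["n"] += 1
        groups[key]["errors"] += int(e["error"])
    return {k: v["errors"] / v["n"] for k, v in groups.items()}

events = [
    {"model": "m1", "prompt_version": "v1", "error": False},
    {"model": "m1", "prompt_version": "v2", "error": True},
    {"model": "m1", "prompt_version": "v2", "error": False},
]
rates = slice_metrics(events)  # {("m1", "v1"): 0.0, ("m1", "v2"): 0.5}
```

Note this answers question 2 without exposing raw text to humans: the slicing operates on metadata (model, prompt version, error flag), so no prompt or completion content leaves the pipeline.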
  • SLOs and alerts — interview questions to prep

    1. What's a meaningful SLO for an ML inference service?
    2. How do you avoid alert fatigue from noisy ML metrics — what's a sane page-worthy threshold?
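A sketch tying the two questions together, assuming a simple availability SLO (the 99.9% target and the burn-rate threshold of 14 are illustrative, borrowed from the multiwindow burn-rate alerting pattern in the Google SRE Workbook): paging on error-budget burn rate rather than raw error count is one common way to keep noisy ML metrics from causing alert fatigue.

```python
def burn_rate(errors, total, slo=0.999):
    # Burn rate = observed error rate / allowed error rate (1 - SLO).
    # A rate of 1.0 means the error budget is being consumed exactly on
    # schedule; 14 means it would be exhausted ~14x too fast.
    budget = 1.0 - slo
    return (errors / total) / budget

def should_page(errors, total, slo=0.999, threshold=14.0):
    # Page only on fast burn; slow burn goes to a ticket queue instead.
    return burn_rate(errors, total, slo) > threshold
```

For example, 30 errors in 1000 requests against a 99.9% SLO is a burn rate of 30 (page-worthy), while 1 error in 10,000 requests is a burn rate of 0.1 (ignorable noise).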

References & further reading