Day 69 of 133
Observability for ML systems + DSA 2-D DP
Logs / metrics / traces; LLM-specific dimensions; SLOs and alerts.
DSA · NeetCode 2-D DP
- Longest Increasing Path in a Matrix
Interview questions to prep
- State the 2-D DP: indices, recurrence, base case. What's the order of fill?
- Can you reduce 2-D to 1-D by reusing rows or columns? Walk through the dependency direction.
- Top-down with memoization vs bottom-up — which is easier to reason about, and which is faster in practice?
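For Longest Increasing Path, top-down memoization is usually the easier order to reason about, because the fill order falls out of the recursion instead of needing an explicit topological sort by cell value. A minimal sketch (function name and structure are my own, not from any particular solution):

```python
from functools import lru_cache

def longest_increasing_path(matrix):
    """Length of the longest strictly increasing path, moving in 4 directions."""
    if not matrix or not matrix[0]:
        return 0
    rows, cols = len(matrix), len(matrix[0])

    @lru_cache(maxsize=None)
    def dfs(r, c):
        # State: best path length starting at (r, c).
        # Recurrence: 1 + max over neighbors with a strictly larger value.
        # Base case: a cell with no larger neighbor has path length 1.
        best = 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] > matrix[r][c]:
                best = max(best, 1 + dfs(nr, nc))
        return best

    return max(dfs(r, c) for r in range(rows) for c in range(cols))
```

Each cell is solved once, so the runtime is O(rows * cols) despite the nested recursion.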
- Distinct Subsequences
Interview questions to prep
- State the 2-D DP: indices, recurrence, base case. What's the order of fill?
- Can you reduce 2-D to 1-D by reusing rows or columns? Walk through the dependency direction.
- Top-down with memoization vs bottom-up — which is easier to reason about, and which is faster in practice?
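Distinct Subsequences is a good worked answer to the 2-D-to-1-D question: each row of the table depends only on the previous row, and walking `j` right-to-left keeps the previous row's `dp[j-1]` alive until it is read. A rolled-up sketch (variable names are my own):

```python
def num_distinct(s, t):
    """Count distinct subsequences of s that equal t (2-D DP rolled into 1-D)."""
    # dp[j] = number of ways to form t[:j] from the prefix of s seen so far.
    dp = [0] * (len(t) + 1)
    dp[0] = 1  # base case: the empty target can always be formed one way
    for ch in s:
        # Iterate j right-to-left so dp[j-1] still holds the previous row's
        # value when dp[j] is updated (the dependency points up-and-left).
        for j in range(len(t), 0, -1):
            if t[j - 1] == ch:
                dp[j] += dp[j - 1]
    return dp[len(t)]
```

Iterating `j` left-to-right instead would read an already-updated `dp[j-1]`, silently counting the same character of `s` twice.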
MLOps · Observability for ML
Interview questions to prep
- What's the difference between logs, metrics, and traces, and what does each tell you?
- How would you trace a single user request through retrieval → LLM → tool calls and back?
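The core of the tracing answer is a single correlation id propagated through every stage; in production you would reach for OpenTelemetry or LangSmith, but the mechanism can be sketched with the stdlib alone (stage names and the pipeline here are illustrative, not a real system):

```python
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("trace")

def traced(stage, trace_id, fn, *args):
    """Run one pipeline stage and emit a span-like log line with the shared id."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    log.info("trace_id=%s stage=%s duration_ms=%.1f", trace_id, stage, elapsed_ms)
    return result

def handle_request(query):
    # One id minted at the edge ties retrieval, LLM, and tool spans together,
    # so the whole request can be reassembled from logs after the fact.
    trace_id = uuid.uuid4().hex
    docs = traced("retrieval", trace_id, lambda q: [f"doc for {q}"], query)
    answer = traced("llm", trace_id, lambda d: f"answer from {len(d)} docs", docs)
    return traced("tool_call", trace_id, lambda a: a.upper(), answer)
```

Grepping the logs for one `trace_id` then answers "where did this request spend its time" without any extra infrastructure, which is the distinction traces add over plain logs and aggregate metrics.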
Interview questions to prep
- What dimensions do you slice LLM observability by (model, prompt, user, tool)?
- How would you detect prompt regressions on production traffic without leaking PII to humans?
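The slicing question comes down to tagging every event with the dimensions you later want to group by. A toy in-memory counter, assuming model / prompt version / tool as the dimensions (real systems emit these as metric labels to something like Prometheus or Arize):

```python
from collections import defaultdict

class LLMMetrics:
    """Minimal error-rate counter keyed by slicing dimensions (illustrative sketch)."""

    DIMS = ("model", "prompt_version", "tool")

    def __init__(self):
        # (model, prompt_version, tool) -> [total calls, failed calls]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, model, prompt_version, tool, ok):
        entry = self.counts[(model, prompt_version, tool)]
        entry[0] += 1
        if not ok:
            entry[1] += 1

    def error_rate(self, **filters):
        """Error rate over events matching any subset of the dimensions."""
        total = errs = 0
        for key, (n, e) in self.counts.items():
            labeled = dict(zip(self.DIMS, key))
            if all(labeled[d] == v for d, v in filters.items()):
                total += n
                errs += e
        return errs / total if total else 0.0
```

A prompt regression then shows up as `error_rate(prompt_version="v2")` diverging from `error_rate(prompt_version="v1")` on the same traffic, without any human reading raw prompts or completions.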
Interview questions to prep
- What's a meaningful SLO for an ML inference service?
- How do you avoid alert fatigue from noisy ML metrics — what's a sane page-worthy threshold?
References & further reading
- LangSmith (LangChain) — LLM tracing & eval
- Arize AI — ML observability
- Eugene Yan — applied ML writing