Day 66 of 133

Cost & scaling (autoscaling, spot, distillation) + DSA 2-D DP

Unit cost per prediction; KEDA scaling; spot training without losing work.

DSA · NeetCode 2-D DP

  • Unique Paths

    Interview questions to prep

    1. Closed-form via combinatorics: C(m+n-2, m-1). Why is it equivalent to the DP answer?
    2. How do obstacles change the recurrence (Unique Paths II)?
  • Longest Common Subsequence

    Interview questions to prep

    1. State the DP. How does it relate to edit distance and to longest common substring?
    2. Space-optimize from O(m·n) to O(min(m, n)).
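
A minimal sketch of the Unique Paths tabulation (with a 1-D rolling row), the combinatorial closed form, and the obstacle variant; function names here are my own:

```python
from math import comb

def unique_paths(m: int, n: int) -> int:
    # 1-D rolling row: row[j] accumulates paths reaching column j.
    row = [1] * n
    for _ in range(1, m):
        for j in range(1, n):
            row[j] += row[j - 1]  # paths from above + paths from the left
    return row[-1]

def unique_paths_closed_form(m: int, n: int) -> int:
    # Every path is m+n-2 moves total; choose which m-1 of them go down.
    return comb(m + n - 2, m - 1)

def unique_paths_ii(grid: list[list[int]]) -> int:
    # Obstacles (1s) zero out a cell; the recurrence is otherwise unchanged.
    n = len(grid[0])
    row = [0] * n
    row[0] = 1 - grid[0][0]
    for r in range(len(grid)):
        for j in range(n):
            if grid[r][j] == 1:
                row[j] = 0
            elif j > 0:
                row[j] += row[j - 1]
    return row[-1]
```

The closed form agrees with the DP because both count lattice paths; obstacles break the symmetry that the binomial coefficient relies on, which is why Unique Paths II needs the table.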
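The space optimization asked about above can be sketched for Longest Common Subsequence (which matches the questions' framing); keep only the previous DP row and iterate over the shorter string as columns, so memory is O(min(m, n)):

```python
def lcs(a: str, b: str) -> int:
    # Ensure b is the shorter string so each row has length O(min(m, n)).
    if len(b) > len(a):
        a, b = b, a
    prev = [0] * (len(b) + 1)
    for ch in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, 1):
            # Match extends the diagonal; otherwise take the best of skipping
            # a character from either string.
            cur[j] = prev[j - 1] + 1 if ch == cb else max(prev[j], cur[j - 1])
        prev = cur
    return prev[-1]
```

Edit distance uses the same table shape with a min over insert/delete/replace costs; longest common *substring* differs in that a mismatch resets the diagonal to 0 instead of carrying the max forward.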

MLOps · Cost & scaling

  • Unit cost per prediction

    Interview questions to prep

    1. How would you model the unit cost of a prediction in production?
    2. What levers reduce inference cost (batching, quantization, caching, distillation)?
  • Autoscaling (HPA vs KEDA)

    Interview questions to prep

    1. Compare CPU-based HPA vs queue-based KEDA scaling for ML inference.
    2. Why does GPU-pinned inference often defeat HPA, and how do you actually scale GPU pods?
  • Spot training

    Interview questions to prep

    1. How would you train safely on spot instances (checkpointing, retries)?
    2. When does spot training become net more expensive than on-demand, and what is the breakeven?
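For the HPA-vs-KEDA comparison: queue-based scaling keys off backlog rather than CPU, which matters for GPU-pinned pods whose CPU sits idle while the GPU saturates. A KEDA `ScaledObject` config sketch; every name, host, and threshold below is a placeholder:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: inference-scaler          # hypothetical name
spec:
  scaleTargetRef:
    name: inference-deployment    # hypothetical Deployment serving the model
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq
      metadata:
        host: amqp://user:pass@rabbitmq.default:5672/  # placeholder broker URL
        queueName: predictions    # hypothetical request queue
        mode: QueueLength
        value: "50"               # target backlog per replica
```

Each replica here would request one GPU in its pod spec, so scaling replicas is what actually scales GPU capacity; CPU-based HPA never sees the GPU saturate.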

References & further reading