Day 62 of 133
Deployment patterns: batch / real-time / streaming / edge + DSA 1-D DP
Latency budgets, async inference, when to push to device.
DSA · NeetCode 1-D DP
- Longest Palindromic Substring (DSA · 1-D DP)
Interview questions to prep
- Compare expand-around-center (O(n²) time, O(1) space) vs Manacher's (O(n)).
- Why is DP O(n²) time AND O(n²) space — and does it actually beat expand-around-center in practice?
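For the expand-around-center side of that comparison, here is a minimal sketch: O(n²) time because each of the 2n-1 centers can expand up to n/2 steps, but only O(1) extra space, which is why it usually beats the O(n²)-space DP table in practice.

```python
def longest_palindrome(s: str) -> str:
    """Longest palindromic substring via expand-around-center.

    There are 2n-1 centers: n odd-length (a single character) and
    n-1 even-length (a gap between characters). Expand each center
    while the two ends match. O(n^2) time, O(1) extra space.
    """
    if not s:
        return ""
    best_start, best_len = 0, 1
    for center in range(2 * len(s) - 1):
        lo, hi = center // 2, (center + 1) // 2  # lo == hi for odd centers
        while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
            lo -= 1
            hi += 1
        length = hi - lo - 1  # last expansion overshot one step each side
        if length > best_len:
            best_start, best_len = lo + 1, length
    return s[best_start:best_start + best_len]
```

Manacher's removes the re-expansion work by reusing mirror information, getting to O(n), but the constant factors and implementation risk rarely pay off in an interview unless asked for explicitly.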
- Palindromic Substrings (DSA · 1-D DP)
Interview questions to prep
- State the DP: define the state, the transition, and the base case explicitly.
- Top-down (memoized recursion) vs bottom-up (tabulation) — which is more natural here, and why?
- Can you space-optimize from O(n) to O(1)? Show the rolling-window trick.
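A bottom-up tabulation answering the "state the DP" prompt, as a sketch: the state, transition, and base case are spelled out in the docstring, and the iteration order is chosen so the subproblem each cell depends on is already filled.

```python
def count_palindromic_substrings(s: str) -> int:
    """Count palindromic substrings with bottom-up tabulation.

    State:      dp[i][j] is True iff s[i:j+1] is a palindrome.
    Transition: dp[i][j] = (s[i] == s[j]) and (j - i < 2 or dp[i+1][j-1]).
    Base case:  every single character (j == i) is a palindrome.

    Iterate i from the end so row dp[i+1] is ready before row dp[i].
    O(n^2) time and space.
    """
    n = len(s)
    dp = [[False] * n for _ in range(n)]
    count = 0
    for i in range(n - 1, -1, -1):
        for j in range(i, n):
            if s[i] == s[j] and (j - i < 2 or dp[i + 1][j - 1]):
                dp[i][j] = True
                count += 1
    return count
```

Top-down memoization expresses the same recurrence but pays recursion overhead; bottom-up is more natural here because the dependency (one row down, one column left) is so regular. Note that for this particular problem the cleanest route to O(1) space is expand-around-center rather than compressing the table.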
MLOps · Deployment patterns
Interview questions to prep
- Compare batch, real-time, and streaming inference — when do you reach for each?
- What's the latency budget for an ad CTR model vs a churn model?
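The middle ground between batch and real-time is dynamic micro-batching: hold requests briefly to fill a batch for the accelerator, but never longer than the latency budget allows. A sketch of the batching policy as a pure function (names and defaults are illustrative, not from any particular serving framework):

```python
from dataclasses import dataclass


@dataclass
class Request:
    id: int
    arrival_ms: float


def form_batches(requests, max_batch=8, max_wait_ms=10.0):
    """Group arrival-ordered requests into inference batches.

    A batch is flushed when it reaches max_batch items, or when the
    oldest queued request has waited max_wait_ms. These two knobs are
    the classic trade: bigger batches raise GPU utilization and
    throughput, a shorter wait caps tail latency.
    """
    batches, current = [], []
    for req in requests:
        if current and req.arrival_ms - current[0].arrival_ms >= max_wait_ms:
            batches.append(current)  # oldest request hit its deadline
            current = []
        current.append(req)
        if len(current) == max_batch:
            batches.append(current)  # batch is full, ship it
            current = []
    if current:
        batches.append(current)
    return batches
```

An ad CTR model with a single-digit-millisecond budget would run this with a very small `max_wait_ms` (or none at all), while a churn model scored nightly skips the question entirely and runs as a batch job.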
MLOps · Edge / on-device inference
Interview questions to prep
- When does on-device ML beat cloud inference, and what are the constraints?
- What model-side techniques (quantization, pruning, distillation) actually move the needle for mobile?
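Of those techniques, quantization is the most reliable first win on mobile. The core transform is small enough to show inline; this is an illustrative pure-Python sketch of affine int8 quantization (real toolchains such as TFLite or PyTorch add per-channel scales, calibration, and fused ops):

```python
def quantize_int8(weights):
    """Post-training affine quantization of floats to int8 codes.

    Maps the observed float range [lo, hi] onto [-128, 127] via a
    scale and zero point, so each weight costs 1 byte instead of 4.
    """
    lo, hi = min(weights), max(weights)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must include 0.0 exactly
    scale = (hi - lo) / 255.0 or 1.0     # guard against an all-zero tensor
    zero_point = round(-128 - lo / scale)
    return (
        [max(-128, min(127, round(w / scale) + zero_point)) for w in weights],
        scale,
        zero_point,
    )


def dequantize_int8(q, scale, zero_point):
    """Map int8 codes back to floats; the gap vs the originals is the
    quantization error your accuracy evaluation has to absorb."""
    return [(qi - zero_point) * scale for qi in q]
```

The 4x size reduction (and faster int8 kernels on mobile NPUs/DSPs) is what "moves the needle"; pruning and distillation help further but usually require retraining.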
MLOps · Async inference
Interview questions to prep
- When would you use async inference behind a queue?
- How do you size a queue worker pool for spiky inference traffic without over-provisioning?
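The pattern behind both questions can be sketched with the standard library: a bounded queue in front of a fixed worker pool. This is a toy stand-in for a real broker (SQS, Kafka) with autoscaling on queue depth, but the moving parts are the same:

```python
import queue
import threading


def run_inference_pool(requests, predict, num_workers=4):
    """Async inference behind a bounded queue.

    Producers enqueue (request_id, payload) pairs; a fixed pool of
    workers drains them and records results. The bounded queue is the
    key: when workers fall behind, put() blocks, applying backpressure
    to callers instead of letting memory grow during a traffic spike.
    """
    work = queue.Queue(maxsize=64)   # queue depth = your spike buffer
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            item = work.get()
            if item is None:         # poison pill: shut this worker down
                work.task_done()
                return
            req_id, payload = item
            out = predict(payload)   # the (slow) model call
            with lock:
                results[req_id] = out
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for req_id, payload in requests:
        work.put((req_id, payload))  # blocks while the queue is full
    for _ in threads:
        work.put(None)               # one pill per worker
    for t in threads:
        t.join()
    return results
```

For sizing, Little's law gives the starting point: workers needed ≈ arrival rate × per-request service time; the queue then absorbs bursts above that rate, so you provision workers for sustained load and queue depth (plus an SLO on max wait) for the spikes.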
References & further reading