Day 74 of 133

Prompt engineering + CoT + ReAct + DSA Greedy

Few-shot vs zero-shot; CoT; jailbreak defense.

DSA · NeetCode Greedy

  • Hand of Straights

    Interview questions to prep

    1. Prove the greedy choice — why is the locally-optimal pick safe globally? (Exchange argument or staying-ahead.)
    2. When does greedy fail on a similar-looking problem, and what would you reach for instead (DP, BFS)?
    3. Walk through edge cases that often break naive greedy: ties, negatives, single element.
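The greedy choice for Hand of Straights is exactly Q1's exchange argument: the smallest remaining card must start some group, so starting a group there is always safe. A minimal Python sketch (my own illustration, not from the source):

```python
from collections import Counter

def is_n_straight_hand(hand: list[int], group_size: int) -> bool:
    """Greedy: always start a straight from the smallest remaining card.

    If the smallest card can't begin a full run of consecutive values,
    no other grouping can use it either (exchange argument).
    """
    if len(hand) % group_size != 0:   # edge case: can't split evenly
        return False
    counts = Counter(hand)
    for card in sorted(counts):
        run = counts[card]            # every copy of this card starts a run
        if run == 0:
            continue
        for nxt in range(card, card + group_size):
            if counts[nxt] < run:     # not enough consecutive cards
                return False
            counts[nxt] -= run
    return True
```

Note the edge cases from Q3: duplicates (handled by the `run` count), and a hand whose length isn't divisible by `group_size`.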

GenAI · Prompt engineering

  • Interview questions to prep

    1. When does few-shot help vs hurt?
    2. How do you design a system prompt for a customer-support agent?
    3. A prompt works in testing but fails randomly in production. How would you debug prompt drift?
    4. How do prompt templates, delimiters, schemas, and locked sampling settings improve reproducibility?
    5. What would a prompt regression test suite look like before deploying a new assistant prompt?
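For Q4, the reproducibility levers are concrete: a fixed template, delimiters that separate trusted instructions from variable data, an explicit output schema, and pinned sampling settings. A hedged sketch (the template, tag names, and settings dict are my own illustration, not a specific provider's API):

```python
# Hypothetical support-agent template showing the four levers from Q4:
# fixed template, delimiters, output schema, locked sampling settings.
SYSTEM_TEMPLATE = """You are a customer-support agent for {product}.
Answer ONLY using the material between the <context> tags.
Reply as JSON matching this schema: {{"answer": str, "confidence": float}}.

<context>
{context}
</context>"""

# Pinning these makes runs comparable in a regression suite (Q5).
LOCKED_SAMPLING = {"temperature": 0.0, "top_p": 1.0, "seed": 1234}

def build_prompt(product: str, context: str) -> str:
    # Delimiters keep retrieved text in a "data" role, so it can't
    # silently rewrite the task instructions above it.
    return SYSTEM_TEMPLATE.format(product=product, context=context)
```

A prompt regression suite (Q5) would render this template against a fixed set of inputs and assert on the structure of the outputs before any prompt change ships.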
  • Chain-of-thought, ReAct, ToTGenerative AIWei et al.

    Interview questions to prep

    1. Walk through how chain-of-thought prompting changes performance on reasoning tasks.
    2. When does CoT hurt accuracy?
    3. Why can chain-of-thought improve complex reasoning but also amplify early mistakes?
    4. How would you prevent error compounding with stepwise verification, modular reasoning, or external checks?
    5. When would you use CoT, ReAct, ToT, self-consistency, or no explicit reasoning at all?
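Self-consistency (Q5) is the easiest of these to sketch in code: sample several chain-of-thought completions at temperature > 0, keep only the final answers, and majority-vote. A minimal sketch, where `sample_answer` is a hypothetical callable standing in for one model call:

```python
from collections import Counter
from typing import Callable

def self_consistency(sample_answer: Callable[[], str], n: int = 5) -> str:
    """Majority-vote over n independent chain-of-thought samples.

    `sample_answer` is assumed to run one CoT completion (temperature > 0)
    and return only the final answer string. Voting on final answers,
    not reasoning traces, is what damps the early-mistake amplification
    from Q3: an error in one chain rarely repeats across all chains.
    """
    votes = Counter(sample_answer() for _ in range(n))
    return votes.most_common(1)[0][0]
```

The same harness can wrap ReAct or ToT runners; only the inner sampler changes.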
  • Prompt injection & jailbreaksGenerative AIAnthropic

    Interview questions to prep

    1. How would you defend a customer-facing LLM against prompt injection from user-supplied content?
    2. Why is 'just tell the model to ignore injection' insufficient — what's the actual defense-in-depth?
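Two of the cheaper defense-in-depth layers from Q2 can be sketched directly: fencing untrusted content behind delimiters so the model can be told to treat it as data, and a heuristic pre-filter for crude override phrases. This is my own illustrative sketch, and the pattern list is deliberately not exhaustive; real systems add model-side classifiers, privilege separation, and output checks on top:

```python
import re

# Crude override phrases; a pre-filter like this only catches the
# laziest attacks, which is exactly why Q2 asks for layered defenses.
INJECTION_PATTERNS = [
    r"ignore (all|previous|above) instructions",
    r"you are now",
    r"reveal your system prompt",
]

def wrap_untrusted(text: str) -> str:
    """Layer 1: fence user-supplied content so instructions can refer
    to it as data. Strip any delimiter spoofing inside the payload."""
    cleaned = text.replace("<user_content>", "").replace("</user_content>", "")
    return f"<user_content>\n{cleaned}\n</user_content>"

def flag_suspicious(text: str) -> bool:
    """Layer 2: heuristic screen run before the model ever sees the text."""
    return any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)
```

Neither layer is sufficient alone, which is the point of Q2: "just tell the model to ignore injection" is one more unenforced instruction in the same channel the attacker controls.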

References & further reading