Day 37 of 133

Backpropagation & autograd + DSA Trees

Backprop as the chain rule on a computation graph. PyTorch autograd basics.

DSA · NeetCode Trees

  • Same Tree (DSA · Trees)

    Interview questions to prep

    1. Compare BFS vs DFS for this problem — which fits, and what's the iterative version? (A sketch of both follows this list.)
    2. What's the recursion's space cost on the stack, and how would you go iterative if you needed O(log n)?
    3. What's the relationship between this problem's invariant and the BST property (if any)?
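
A minimal sketch of both traversals for Same Tree (everything below is illustrative, not NeetCode's reference solution): recursive DFS costs O(h) implicit stack, where h is the tree height, so O(log n) only when the tree is balanced; iterative BFS swaps the call stack for an explicit queue whose size tracks the widest level.

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def is_same_tree_dfs(p, q):
    # Recursive DFS: O(n) time, O(h) implicit stack (h = tree height).
    if p is None and q is None:
        return True
    if p is None or q is None or p.val != q.val:
        return False
    return is_same_tree_dfs(p.left, q.left) and is_same_tree_dfs(p.right, q.right)

def is_same_tree_bfs(p, q):
    # Iterative BFS: O(n) time, O(w) queue (w = maximum level width).
    queue = deque([(p, q)])
    while queue:
        a, b = queue.popleft()
        if a is None and b is None:
            continue
        if a is None or b is None or a.val != b.val:
            return False
        queue.append((a.left, b.left))
        queue.append((a.right, b.right))
    return True
```

On the third question: Same Tree compares structure and values pairwise, so there is no BST ordering invariant to exploit here.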

DL · Backpropagation & autograd

  • Backprop on a computation graph (Deep Learning · Karpathy)

    Interview questions to prep

    1. Derive backprop for a 2-layer MLP with cross-entropy loss (a worked NumPy sketch follows this list).
    2. Explain why ML frameworks use reverse-mode automatic differentiation rather than forward mode.
  • Vanishing gradients

    Interview questions to prep

    1. Why does a deep sigmoid network suffer vanishing gradients? (A toy depth demo follows this list.)
    2. How do residual connections, ReLU, and BatchNorm/LayerNorm help?
  • PyTorch autograd in 30 minutes (Deep Learning · PyTorch)

    Interview questions to prep

    1. When would you use torch.no_grad() versus detach()?
    2. What does requires_grad=True actually do under the hood?
    3. Explain the roles and ordering of loss.backward(), optimizer.step(), and optimizer.zero_grad() in a PyTorch training loop. (A minimal loop follows this list.)
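
For the derivation question, a minimal NumPy sketch of the manual backward pass for a 2-layer ReLU MLP with softmax cross-entropy; all names, shapes, and sizes are illustrative. Each gradient line below is one application of the chain rule, walked in reverse over the forward graph.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, H, C = 4, 5, 8, 3                       # batch, input, hidden, classes
X = rng.normal(size=(N, D))
y = rng.integers(0, C, size=N)                # integer class labels
W1, b1 = 0.1 * rng.normal(size=(D, H)), np.zeros(H)
W2, b2 = 0.1 * rng.normal(size=(H, C)), np.zeros(C)

# Forward: X -> z1 -> h = ReLU(z1) -> logits -> softmax -> cross-entropy
z1 = X @ W1 + b1
h = np.maximum(z1, 0.0)
logits = h @ W2 + b2
logits -= logits.max(axis=1, keepdims=True)   # for numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
loss = -np.log(probs[np.arange(N), y]).mean()

# Backward: chain rule, node by node, in reverse topological order.
dlogits = probs.copy()
dlogits[np.arange(N), y] -= 1.0               # d(loss)/d(logits) = p - onehot(y)
dlogits /= N                                  # mean over the batch
dW2 = h.T @ dlogits                           # (H,N)@(N,C) -> (H,C)
db2 = dlogits.sum(axis=0)
dh = dlogits @ W2.T                           # route gradient back through matmul
dz1 = dh * (z1 > 0)                           # ReLU gate: grad flows where z1 > 0
dW1 = X.T @ dz1
db1 = dz1.sum(axis=0)
```

Note that one reverse sweep from the scalar loss yields gradients for every parameter at once; forward mode would need a separate sweep per input, which is why ML frameworks default to reverse mode when outputs are few and parameters are many.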
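
For the vanishing-gradients question, a toy PyTorch demo (the depth-20 sigmoid stack is an arbitrary choice for illustration): since sigmoid'(z) ≤ 0.25, each layer scales the upstream gradient by a factor below one, and the printed norms shrink as you move toward the input.

```python
import torch
import torch.nn as nn

depth = 20                       # arbitrary illustrative depth
layers = []
for _ in range(depth):
    layers += [nn.Linear(64, 64), nn.Sigmoid()]
net = nn.Sequential(*layers)

x = torch.randn(8, 64)
net(x).sum().backward()          # backprop a scalar through all 20 layers

# sigmoid'(z) <= 0.25, so each layer multiplies the upstream gradient
# by a factor < 1; norms shrink toward the input (lower indices).
for i, m in enumerate(net):
    if isinstance(m, nn.Linear):
        print(f"layer {i:2d}: grad norm {m.weight.grad.norm().item():.2e}")
```

Swapping nn.Sigmoid for nn.ReLU, or adding skip connections, keeps the multiplicative factors near one, which is the remedy the second question is after.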
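
Finally, a minimal training-loop sketch (toy model and fake data, illustrative only) showing the canonical ordering from the autograd questions, with torch.no_grad() and detach() in context.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                         # toy model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
X, y = torch.randn(32, 10), torch.randn(32, 1)   # fake data

for step in range(100):
    opt.zero_grad()        # clear .grad, or backward() would accumulate into it
    loss = loss_fn(model(X), y)
    loss.backward()        # reverse-mode sweep: fills p.grad for every parameter
    opt.step()             # in-place parameter update using the fresh .grad

with torch.no_grad():      # inference: no graph is built, saving memory and time
    preds = model(X)

frozen = model(X).detach() # detach(): same tensor data, severed from the graph
```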

References & further reading