Day 37 of 133
Backpropagation & autograd + DSA Trees
Backprop as the chain rule on a computation graph. PyTorch autograd basics.
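A quick sanity check of the idea, using PyTorch autograd to reproduce a hand-derived chain-rule gradient (function and values chosen arbitrarily for illustration):

```python
import torch

# y = (w*x + b)^2, so by the chain rule dy/dw = 2*(w*x + b) * x.
x = torch.tensor(3.0)
w = torch.tensor(2.0, requires_grad=True)  # leaf tensors autograd tracks
b = torch.tensor(1.0, requires_grad=True)

y = (w * x + b) ** 2  # forward pass records the computation graph
y.backward()          # reverse-mode pass walks the graph backwards

print(w.grad)  # tensor(42.) = 2*(2*3+1)*3
print(b.grad)  # tensor(14.) = 2*(2*3+1)*1
```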
DSA · NeetCode Trees
- Balanced Binary Tree
Interview questions to prep
- Compare BFS vs DFS for this problem — which fits, and what's the iterative version?
- What's the recursion's space cost on the stack, and how would you go iterative if you needed O(log n)?
- What's the relationship between this problem's invariant and the BST property (if any)?
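As a reference point, a minimal recursive sketch (assuming the usual LeetCode-style `TreeNode`; the `-1` sentinel is one common way to propagate "unbalanced" upward):

```python
from typing import Optional

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def is_balanced(root: Optional[TreeNode]) -> bool:
    # Post-order DFS: each call returns its subtree's height,
    # or -1 if that subtree is already unbalanced.
    def height(node: Optional[TreeNode]) -> int:
        if node is None:
            return 0
        left = height(node.left)
        right = height(node.right)
        if left == -1 or right == -1 or abs(left - right) > 1:
            return -1
        return 1 + max(left, right)

    return height(root) != -1
```

O(n) time, O(h) call-stack space where h is the tree height.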
- Same Tree
Interview questions to prep
- Compare BFS vs DFS for this problem — which fits, and what's the iterative version?
- What's the recursion's space cost on the stack, and how would you go iterative if you needed O(log n)?
- What's the relationship between this problem's invariant and the BST property (if any)?
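A minimal recursive sketch, reusing the assumed `TreeNode` and imports from the sketch above:

```python
def is_same_tree(p: Optional[TreeNode], q: Optional[TreeNode]) -> bool:
    # Identical iff both are empty, or the roots match
    # and both pairs of subtrees match.
    if p is None and q is None:
        return True
    if p is None or q is None or p.val != q.val:
        return False
    return is_same_tree(p.left, q.left) and is_same_tree(p.right, q.right)
```

O(min(n, m)) time; recursion depth is bounded by the shallower tree.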
- Subtree of Another Tree
Interview questions to prep
- Compare BFS vs DFS for this problem — which fits, and what's the iterative version?
- What's the recursion's space cost on the stack, and how would you go iterative if you needed O(log n)?
- What's the relationship between this problem's invariant and the BST property (if any)?
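A minimal sketch that reuses `is_same_tree` from the previous problem (one common O(n·m) approach; serialization plus string matching is the usual faster follow-up):

```python
def is_subtree(root: Optional[TreeNode], sub: Optional[TreeNode]) -> bool:
    # Try to match `sub` against the subtree rooted at every node of `root`.
    if root is None:
        return sub is None
    if is_same_tree(root, sub):
        return True
    return is_subtree(root.left, sub) or is_subtree(root.right, sub)
```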
DL · Backpropagation & autograd
- Backprop & reverse-mode AD
Interview questions to prep
- Derive backprop for a 2-layer MLP with cross-entropy loss.
- Explain why automatic differentiation is reverse-mode for ML.
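A worked sketch of that derivation in NumPy (shapes and data are arbitrary; the key step is that softmax + cross-entropy collapses to `p - onehot(y)`):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))           # batch of 4, 3 features
y = np.array([0, 2, 1, 2])            # labels over 3 classes
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)

# Forward: x -> affine -> ReLU -> affine -> softmax cross-entropy
z1 = x @ W1 + b1
h = np.maximum(z1, 0)
z2 = h @ W2 + b2
p = np.exp(z2 - z2.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
loss = -np.log(p[np.arange(4), y]).mean()

# Backward: chain rule, layer by layer
dz2 = (p - np.eye(3)[y]) / 4          # dL/dz2 = (p - onehot(y)) / batch
dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
dh = dz2 @ W2.T
dz1 = dh * (z1 > 0)                   # ReLU passes gradient only where z1 > 0
dW1, db1 = x.T @ dz1, dz1.sum(axis=0)
```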
- Vanishing gradients
Interview questions to prep
- Why does a deep sigmoid network suffer vanishing gradients?
- How do residual connections, ReLU, and BN/LN help?
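A small experiment that makes the first question concrete (depth and width are arbitrary; expect the sigmoid stack's gradient to be orders of magnitude smaller, since each sigmoid contributes a derivative factor of at most 0.25):

```python
import torch
import torch.nn as nn

def first_layer_grad_norm(act, depth=30, width=64):
    # Deep stack of Linear + activation; measure the gradient
    # norm that survives back to the first layer's weights.
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width), act()]
    net = nn.Sequential(*layers)
    net(torch.randn(8, width)).sum().backward()
    return net[0].weight.grad.norm().item()

print("sigmoid:", first_layer_grad_norm(nn.Sigmoid))
print("relu:   ", first_layer_grad_norm(nn.ReLU))
```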
- PyTorch autograd mechanics
Interview questions to prep
- When would you use torch.no_grad() and detach()?
- What does requires_grad=True actually do under the hood?
- Explain the order loss.backward(), optimizer.step(), optimizer.zero_grad() in a PyTorch training loop.
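A toy loop that pins down the ordering and the `no_grad`/`detach` distinction (model, data, and hyperparameters are made up for illustration):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
x, y = torch.randn(32, 10), torch.randn(32, 1)

for _ in range(100):
    opt.zero_grad()                  # clear grads left over from the last step
    loss = loss_fn(model(x), y)
    loss.backward()                  # accumulate fresh grads into .grad
    opt.step()                       # update params from those grads

with torch.no_grad():                # evaluation: build no graph at all
    preds = model(x)

target = model(x).detach()           # cut this tensor out of the graph;
                                     # later backward treats it as a constant
```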