Day 42 of 133
DL foundations consolidation + DSA Trees finish
Run a 60-min concept self-quiz covering weeks 1–6 so far; finalize the Trees pattern.
DSA · NeetCode Trees
- Serialize and Deserialize Binary Tree
Interview questions to prep
- Compare BFS vs DFS for this problem — which fits better, and what does the iterative version look like? (A DFS sketch follows this list.)
- What's the recursion's space cost on the call stack (O(h), so O(n) for a skewed tree), and how would you switch to an explicit stack if recursion depth became a problem?
- What's the relationship between this problem's invariant and the BST property (if any)?
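To rehearse against: a minimal preorder-DFS sketch with `#` null markers, which makes the encoding unambiguous (a BFS/queue encoding works just as well; the helper names here are mine, not NeetCode's):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def serialize(root):
    """Preorder DFS; '#' marks null children so structure is recoverable."""
    parts = []
    def dfs(node):
        if node is None:
            parts.append("#")
            return
        parts.append(str(node.val))
        dfs(node.left)
        dfs(node.right)
    dfs(root)
    return ",".join(parts)

def deserialize(data):
    """Rebuild by consuming tokens in the same preorder they were emitted."""
    vals = iter(data.split(","))
    def build():
        v = next(vals)
        if v == "#":
            return None
        node = TreeNode(int(v))
        node.left = build()
        node.right = build()
        return node
    return build()

root = TreeNode(1, TreeNode(2), TreeNode(3))
assert serialize(deserialize(serialize(root))) == serialize(root)
```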
DL · Neural network foundations
- Perceptrons & MLPs
Interview questions to prep
- Walk me through the forward pass of a 2-layer MLP for binary classification (see the sketch after this list).
- Why can't a single perceptron solve XOR — and how does adding a hidden layer fix it?
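A minimal forward-pass sketch in PyTorch; the sizes (16 input features, 32 hidden units, batch of 8) are arbitrary assumptions for illustration:

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(16, 32),  # z1 = x @ W1.T + b1
    nn.ReLU(),          # h = max(0, z1): the nonlinearity that lets XOR-like
    nn.Linear(32, 1),   # problems become linearly separable in h-space
)

x = torch.randn(8, 16)                    # batch of 8 examples
logits = mlp(x)                           # forward pass: one logit per example
probs = torch.sigmoid(logits)             # P(y = 1 | x)
y = torch.randint(0, 2, (8, 1)).float()   # dummy binary labels
loss = nn.BCEWithLogitsLoss()(logits, y)
```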
- Activation functions
Interview questions to prep
- Compare ReLU, Leaky ReLU, GELU, and SwiGLU — when does each shine?
- Why did ReLU largely replace sigmoid/tanh in deep networks?
- What is the dying ReLU problem and how do you mitigate it?
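A quick comparison harness for these questions. Note that SwiGLU is a gated layer rather than a pointwise activation, so the W/V projections below are random stand-ins for learned weights:

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-3, 3, 7)
print(F.relu(x))              # hard zero for x < 0: cheap, but units can "die"
print(F.leaky_relu(x, 0.01))  # small negative slope keeps gradients flowing
print(F.gelu(x))              # smooth, slightly non-monotone; common in Transformers

# SwiGLU is a gated layer, not a pointwise function: SiLU(x @ W) * (x @ V).
W, V = torch.randn(7, 4), torch.randn(7, 4)
print(F.silu(x @ W) * (x @ V))
```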
- Weight initialization
Interview questions to prep
- Why does poor initialization cause vanishing or exploding gradients?
- Compare Xavier vs He initialization — which goes with which activation and why?
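A sketch of wiring each scheme to its activation, using PyTorch's built-in initializers (the 256-unit layer size is arbitrary):

```python
import torch.nn as nn
import torch.nn.init as init

lin = nn.Linear(256, 256)

# He (Kaiming) pairs with ReLU-family activations: its fan-in scaling
# compensates for ReLU zeroing roughly half of each layer's outputs.
init.kaiming_normal_(lin.weight, nonlinearity="relu")

# Xavier (Glorot) pairs with symmetric activations (tanh, sigmoid):
# it balances forward activation variance against backward gradient variance.
init.xavier_normal_(lin.weight)
init.zeros_(lin.bias)
```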
DL · Backpropagation & autograd
- Backprop & reverse-mode autodiff
Interview questions to prep
- Derive backprop for a 2-layer MLP with cross-entropy loss (a worked NumPy sketch follows this list).
- Explain why reverse-mode automatic differentiation is the right fit for ML (one scalar loss, many parameters).
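A worked NumPy sketch of the derivation, assuming a sigmoid output with binary cross-entropy; the shapes (B=8, D=16, H=32) are arbitrary:

```python
import numpy as np

B, D, H = 8, 16, 32
rng = np.random.default_rng(0)
x = rng.normal(size=(B, D))
y = rng.integers(0, 2, size=(B, 1)).astype(float)
W1, b1 = rng.normal(size=(D, H)) * 0.1, np.zeros(H)
W2, b2 = rng.normal(size=(H, 1)) * 0.1, np.zeros(1)

# Forward: linear -> ReLU -> linear -> sigmoid, with binary cross-entropy.
z1 = x @ W1 + b1
h = np.maximum(z1, 0.0)
z2 = h @ W2 + b2
p = 1.0 / (1.0 + np.exp(-z2))
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Backward: sigmoid + BCE combine into the clean gradient dL/dz2 = (p - y)/B.
dz2 = (p - y) / B
dW2, db2 = h.T @ dz2, dz2.sum(0)
dh = dz2 @ W2.T
dz1 = dh * (z1 > 0)            # ReLU gate: gradient passes only where z1 > 0
dW1, db1 = x.T @ dz1, dz1.sum(0)
```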
- Vanishing gradients & remedies
Interview questions to prep
- Why does a deep sigmoid network suffer vanishing gradients?
- How do residual connections, ReLU, and batch/layer norm (BN/LN) help?
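A small experiment that makes both questions concrete, assuming a 20-layer sigmoid stack (the function name and sizes are mine):

```python
import torch
import torch.nn as nn

def grad_norm_at_input(depth, residual):
    """Gradient magnitude reaching the input of a deep sigmoid stack."""
    torch.manual_seed(0)
    layers = [nn.Linear(64, 64) for _ in range(depth)]
    x = torch.randn(1, 64, requires_grad=True)
    h = x
    for lin in layers:
        out = torch.sigmoid(lin(h))
        h = h + out if residual else out  # skip adds an identity gradient path
    h.sum().backward()
    return x.grad.norm().item()

print(grad_norm_at_input(20, residual=False))  # tiny: sigmoid' <= 0.25 per layer
print(grad_norm_at_input(20, residual=True))   # orders of magnitude larger
```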
- PyTorch autograd mechanics
Interview questions to prep
- When would you use torch.no_grad() and detach()?
- What does requires_grad=True actually do under the hood?
- Explain the roles and ordering of optimizer.zero_grad(), loss.backward(), and optimizer.step() in a PyTorch training loop.
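A canonical loop to rehearse the ordering against, plus the no_grad()/detach() cases (the model and data are stand-ins with arbitrary sizes):

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()
x, y = torch.randn(32, 16), torch.randint(0, 2, (32, 1)).float()

for _ in range(3):
    opt.zero_grad()               # clear grads accumulated by the last backward()
    loss = loss_fn(model(x), y)   # forward pass records the autograd graph
    loss.backward()               # reverse-mode AD populates each param's .grad
    opt.step()                    # in-place parameter update using .grad

with torch.no_grad():             # inference: no graph is recorded at all
    probs = torch.sigmoid(model(x))

frozen = model(x).detach()        # same values, but cut out of the graph
```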