Day 42 of 133

Consolidate DL foundations + finish DSA Trees

Run a 60-minute concept self-quiz covering the first six weeks; finalize the Trees pattern.

DSA · NeetCode Trees

  • Interview questions to prep

    1. Compare BFS vs DFS for this problem: which fits, and what does the iterative version look like? (A sketch of both follows this list.)
    2. What's the recursion's space cost on the call stack (O(h), which is O(log n) only for a balanced tree), and how would you rewrite it iteratively?
    3. What's the relationship between this problem's invariant and the BST property (if any)?
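
    A minimal sketch contrasting the two traversals, using maximum depth as a stand-in problem (the choice of problem is an assumption; any level-order vs depth-first question works the same way):

      from collections import deque

      class TreeNode:
          def __init__(self, val=0, left=None, right=None):
              self.val, self.left, self.right = val, left, right

      def max_depth_dfs(root):          # recursive DFS: O(h) call-stack space
          if not root:
              return 0
          return 1 + max(max_depth_dfs(root.left), max_depth_dfs(root.right))

      def max_depth_bfs(root):          # iterative BFS: O(max width) queue space
          if not root:
              return 0
          depth, queue = 0, deque([root])
          while queue:
              depth += 1
              for _ in range(len(queue)):   # drain exactly one level per pass
                  node = queue.popleft()
                  if node.left:
                      queue.append(node.left)
                  if node.right:
                      queue.append(node.right)
          return depth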

DL · Neural network foundations

  • Interview questions to prep

    1. Walk me through the forward pass of a 2-layer MLP for binary classification (a minimal sketch follows this list).
    2. Why can't a single perceptron solve XOR, and how does adding a hidden layer fix it?
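
    A minimal forward-pass sketch in PyTorch (layer sizes and the XOR batch are arbitrary illustration choices):

      import torch
      import torch.nn as nn

      # 2-layer MLP for binary classification: Linear -> ReLU -> Linear -> sigmoid.
      model = nn.Sequential(
          nn.Linear(2, 8),   # hidden layer; the nonlinearity after it is what lets
          nn.ReLU(),         # the network carve XOR's non-linearly-separable regions
          nn.Linear(8, 1),   # single logit out
      )

      x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # XOR inputs
      logits = model(x)              # forward: x @ W1.T + b1 -> ReLU -> @ W2.T + b2
      probs = torch.sigmoid(logits)  # squash each logit to P(y=1)
      print(probs.shape)             # torch.Size([4, 1])
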
  • Interview questions to prep

    1. Compare ReLU, Leaky ReLU, GELU, and SwiGLU: when does each shine? (See the sketch after this list.)
    2. Why did ReLU largely replace sigmoid/tanh in deep networks?
    3. What is the dying ReLU problem and how do you mitigate it?
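
    ReLU, Leaky ReLU, and GELU ship with torch.nn.functional; SwiGLU does not, so the module below is a sketch of the usual gated formulation, silu(x @ W_gate) * (x @ W_value), with made-up sizes:

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      x = torch.linspace(-3, 3, 7)
      print(F.relu(x))              # hard zero below 0: cheap, but units can "die"
      print(F.leaky_relu(x, 0.01))  # small negative slope keeps gradients alive
      print(F.gelu(x))              # smooth probabilistic gating; Transformer default

      class SwiGLU(nn.Module):
          # Sketch of a SwiGLU feed-forward gate (GLU-variant FFN):
          # a silu/swish-gated value projection.
          def __init__(self, d_in, d_hidden):
              super().__init__()
              self.gate = nn.Linear(d_in, d_hidden)
              self.value = nn.Linear(d_in, d_hidden)

          def forward(self, x):
              return F.silu(self.gate(x)) * self.value(x)

      print(SwiGLU(4, 8)(torch.randn(2, 4)).shape)  # torch.Size([2, 8])
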
  • Interview questions to prep

    1. Why does poor initialization cause vanishing or exploding gradients?
    2. Compare Xavier vs He initialization: which goes with which activation and why? (See the sketch after this list.)
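
    A quick sketch of both initializers via torch.nn.init (layer sizes arbitrary). The rule of thumb: Xavier/Glorot pairs with tanh/sigmoid, He/Kaiming with ReLU-family activations; He's extra factor of 2 compensates for ReLU zeroing roughly half its inputs:

      import torch.nn as nn

      layer_tanh = nn.Linear(256, 256)
      nn.init.xavier_uniform_(layer_tanh.weight)   # variance ~ 2/(fan_in + fan_out)

      layer_relu = nn.Linear(256, 256)
      nn.init.kaiming_normal_(layer_relu.weight, nonlinearity='relu')
      # variance ~ 2/fan_in: doubled to offset ReLU killing half the activations

      for layer in (layer_tanh, layer_relu):
          print(layer.weight.std().item())  # He std ends up ~sqrt(2) times Xavier's here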

DL · Backpropagation & autograd

  • Backprop on a computation graph (Deep Learning · Karpathy)

    Interview questions to prep

    1. Derive backprop for a 2-layer MLP with cross-entropy loss (a worked sketch follows this list).
    2. Explain why ML frameworks use reverse-mode automatic differentiation rather than forward-mode.
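
    A worked sketch, assuming softmax cross-entropy and toy shapes: the manual gradients use dL/dlogits = (softmax(logits) - onehot(y)) / batch_size and are checked against autograd. One backward pass yields gradients for every parameter, which is exactly why reverse mode fits ML's many-parameters, scalar-loss setting:

      import torch
      import torch.nn.functional as F

      torch.manual_seed(0)
      x = torch.randn(5, 4)               # batch of 5, 4 features (toy shapes)
      y = torch.tensor([0, 2, 1, 2, 0])   # class labels
      W1 = torch.randn(4, 8, requires_grad=True)
      W2 = torch.randn(8, 3, requires_grad=True)

      h = torch.relu(x @ W1)              # forward pass
      logits = h @ W2
      loss = F.cross_entropy(logits, y)   # softmax + NLL, averaged over the batch
      loss.backward()                     # reverse mode: one pass, all gradients

      with torch.no_grad():               # manual reverse pass for comparison
          dlogits = (torch.softmax(logits, 1) - F.one_hot(y, 3).float()) / len(y)
          dW2 = h.T @ dlogits
          dh = (dlogits @ W2.T) * (h > 0) # ReLU gate: gradient flows only where h > 0
          dW1 = x.T @ dh

      print(torch.allclose(dW2, W2.grad, atol=1e-6))  # True
      print(torch.allclose(dW1, W1.grad, atol=1e-6))  # True
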
  • Interview questions to prep

    1. Why does a deep sigmoid network suffer vanishing gradients?
    2. How do residual connections, ReLU, and batch/layer norm help? (A toy probe follows this list.)
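
    A toy probe (depth and width are arbitrary) of how much gradient survives back to the first layer. Residual connections help for a complementary reason: x + f(x) gives the gradient an identity path around every block:

      import torch
      import torch.nn as nn

      def first_layer_grad_norm(act, depth=30, width=64):
          layers = []
          for _ in range(depth):
              layers += [nn.Linear(width, width), act()]
          net = nn.Sequential(*layers)
          net(torch.randn(16, width)).sum().backward()
          return net[0].weight.grad.norm().item()

      torch.manual_seed(0)
      print(first_layer_grad_norm(nn.Sigmoid))  # tiny: |sigmoid'| <= 0.25 compounds per layer
      print(first_layer_grad_norm(nn.ReLU))     # far larger: gradient is 1 on active units
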
  • PyTorch autograd in 30 minutes (Deep Learning · PyTorch)

    Interview questions to prep

    1. When would you use torch.no_grad() and detach()?
    2. What does requires_grad=True actually do under the hood?
    3. Explain the order of loss.backward(), optimizer.step(), and optimizer.zero_grad() in a PyTorch training loop (see the loop sketch below).
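
    A canonical loop sketch (model, data, and hyperparameters are placeholders) showing the zero_grad -> backward -> step cycle plus no_grad() and detach():

      import torch
      import torch.nn as nn

      model = nn.Linear(10, 1)                  # stand-in model and data
      opt = torch.optim.SGD(model.parameters(), lr=0.1)
      x, y = torch.randn(32, 10), torch.randn(32, 1)
      loss_fn = nn.MSELoss()

      for step in range(100):
          opt.zero_grad()      # clear old grads; .grad accumulates across backward calls
          loss = loss_fn(model(x), y)
          loss.backward()      # reverse-mode pass populates p.grad for every parameter
          opt.step()           # in-place parameter update using p.grad

      with torch.no_grad():    # inference: skip building the autograd graph entirely
          preds = model(x)

      frozen = model(x).detach()  # same values, cut from the graph: ops on `frozen`
                                  # will not backprop into model's parameters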
