Day 36 of 133

NN basics: perceptron, MLP, activations + DSA Trees

Forward pass, why XOR needs hidden layers, ReLU vs sigmoid vs GELU.

DSA · NeetCode Trees

  • Invert Binary Tree

    Interview questions to prep

    1. Show both the iterative (BFS, or DFS with an explicit stack) and recursive solutions; see the sketch after this list.
    2. What's the space cost of the recursive version on a skewed tree?
  • Interview questions to prep

    1. Compare BFS vs DFS for this problem: which fits, and what's the iterative version?
    2. What's the recursion's space cost on the call stack, and how would you go iterative if you needed O(log n) space?
    3. What's the relationship between this problem's invariant and the BST property (if any)?
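
A minimal sketch of Invert Binary Tree in Python, assuming the usual LeetCode-style TreeNode (val/left/right). Both versions swap each node's children and differ only in traversal order and space: recursive DFS costs O(h) call-stack space, which degrades to O(n) on a skewed tree, while iterative BFS costs O(w) for the widest level.

```python
from collections import deque
from typing import Optional

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def invert_recursive(root: Optional[TreeNode]) -> Optional[TreeNode]:
    # DFS via the call stack: O(h) extra space, O(n) on a skewed tree.
    if root is None:
        return None
    root.left, root.right = invert_recursive(root.right), invert_recursive(root.left)
    return root

def invert_iterative(root: Optional[TreeNode]) -> Optional[TreeNode]:
    # BFS with an explicit queue: O(w) extra space, where w is the
    # maximum level width (up to roughly n/2 on a complete tree).
    if root is None:
        return None
    queue = deque([root])
    while queue:
        node = queue.popleft()
        node.left, node.right = node.right, node.left
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return root
```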

DL · Neural network foundations

  • Forward pass and XOR

    Interview questions to prep

    1. Walk me through the forward pass of a 2-layer MLP for binary classification (a worked sketch follows this list).
    2. Why can't a single perceptron solve XOR, and how does adding a hidden layer fix it?
  • Activation functions

    Interview questions to prep

    1. Compare ReLU, Leaky ReLU, GELU, and SwiGLU: when does each shine? (Definitions are sketched after this list.)
    2. Why did ReLU largely replace sigmoid/tanh in deep networks?
    3. What is the dying ReLU problem and how do you mitigate it?
  • Weight initialization

    Interview questions to prep

    1. Why does poor initialization cause vanishing or exploding gradients?
    2. Compare Xavier vs He initialization: which goes with which activation and why? (A sketch follows this list.)
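
First, the forward pass. A minimal NumPy sketch of a 2-layer MLP on XOR: the weights here are hand-picked for illustration, not trained. A single perceptron draws one line through the plane, and no line separates {(0,0),(1,1)} from {(0,1),(1,0)}; the ReLU hidden layer folds the input space so the output unit can separate the classes.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hidden layer: h = ReLU(x @ W1 + b1), two units.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])

# Output layer: p = sigmoid(h @ W2 + b2), one unit.
W2 = np.array([[4.0], [-8.0]])
b2 = np.array([-2.0])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
h = relu(X @ W1 + b1)      # layer 1: folds (1,1) back onto the (0,0) side
p = sigmoid(h @ W2 + b2)   # layer 2: linear separation of the folded space
print(p.round(2).ravel())  # ~[0.12, 0.88, 0.88, 0.12], i.e. XOR
```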
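Next, the activations. The definitions below are the standard forms; note that SwiGLU is a gated layer rather than a pointwise activation, and W and V in the helper are illustrative stand-ins for learned projection matrices.

```python
import numpy as np

def sigmoid(z):
    # Saturates for large |z|: gradients vanish, a key reason deep nets
    # moved away from sigmoid/tanh hidden layers.
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Cheap and non-saturating for z > 0, but the gradient is exactly 0
    # for z < 0: a unit stuck there stops learning ("dying ReLU").
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # A small negative slope keeps some gradient flowing for z < 0.
    return np.where(z > 0, z, alpha * z)

def gelu(z):
    # tanh approximation of GELU, a smooth gate widely used in
    # transformer FFNs (e.g. BERT, GPT-2).
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (z + 0.044715 * z ** 3)))

def swish(z):
    # Swish / SiLU: z * sigmoid(z), the gate inside SwiGLU.
    return z * sigmoid(z)

def swiglu(x, W, V):
    # Gated linear unit with a Swish gate: Swish(xW) elementwise-times xV.
    return swish(x @ W) * (x @ V)

z = np.linspace(-3.0, 3.0, 7)
print(relu(z))
print(gelu(z).round(3))
```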
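Finally, initialization. A minimal sketch of Xavier (Glorot) vs He in NumPy, plus a toy depth experiment; the layer width of 256 and the deliberately bad 0.01 std are assumptions chosen to make vanishing activations visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_normal(fan_in, fan_out):
    # Var(W) = 2 / (fan_in + fan_out): balances forward and backward
    # variance for roughly linear activations like tanh near 0.
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))

def he_normal(fan_in, fan_out):
    # Var(W) = 2 / fan_in: the extra factor of 2 compensates for ReLU
    # zeroing out half of its inputs.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

# Push one batch through 30 ReLU layers: He init vs a too-small fixed std.
h_he = h_bad = rng.normal(size=(1, 256))
for _ in range(30):
    h_he = np.maximum(0.0, h_he @ he_normal(256, 256))
    h_bad = np.maximum(0.0, h_bad @ rng.normal(0.0, 0.01, size=(256, 256)))
print(h_he.std())   # stays roughly order 1
print(h_bad.std())  # shrinks layer by layer toward 0: vanishing activations,
                    # and by the same compounding argument, vanishing gradients
```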
