Day 36 of 133
NN basics: perceptron, MLP, activations + DSA Trees
Forward pass, why XOR needs hidden layers, ReLU vs sigmoid vs GELU.
DSA · NeetCode Trees
- Invert Binary Tree (DSA · Trees)
Interview questions to prep
- Show both iterative (BFS / DFS with stack) and recursive solutions.
- What's the space cost of the recursive version on a skewed tree?
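Both versions can be sketched as follows; this is a minimal sketch assuming a bare `TreeNode` class (defined inline here, not from any library). The recursive version's stack is O(h): O(log n) on a balanced tree but O(n) on a skewed one.

```python
from collections import deque

class TreeNode:
    # Minimal node type for the sketch.
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert_recursive(root):
    # O(h) call-stack space: O(n) worst case on a skewed tree.
    if root is None:
        return None
    root.left, root.right = invert_recursive(root.right), invert_recursive(root.left)
    return root

def invert_iterative(root):
    # BFS with a queue; swap children as each node is dequeued.
    if root is None:
        return None
    q = deque([root])
    while q:
        node = q.popleft()
        node.left, node.right = node.right, node.left
        if node.left:
            q.append(node.left)
        if node.right:
            q.append(node.right)
    return root
```

The same iterative shape works with a list used as a DFS stack; only the visit order changes, not the result.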
- Maximum Depth of Binary Tree (DSA · Trees)
Interview questions to prep
- Compare BFS vs DFS for this problem — which fits, and what's the iterative version?
- What's the recursion's space cost on the call stack (O(n) on a skewed tree), and how would you go iterative if that were a concern?
- What's the relationship between this problem's invariant and the BST property (if any)?
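A sketch of both approaches, assuming a bare `TreeNode` class defined inline: DFS recursion pays O(h) stack, while level-order BFS pays O(w) queue space for the widest level.

```python
from collections import deque

class TreeNode:
    # Minimal node type for the sketch.
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth_dfs(root):
    # Recursive DFS: O(h) call-stack space.
    if root is None:
        return 0
    return 1 + max(max_depth_dfs(root.left), max_depth_dfs(root.right))

def max_depth_bfs(root):
    # Iterative BFS: count levels; one pass over each level of the queue.
    if root is None:
        return 0
    depth, q = 0, deque([root])
    while q:
        depth += 1
        for _ in range(len(q)):  # drain exactly one level
            node = q.popleft()
            if node.left:
                q.append(node.left)
            if node.right:
                q.append(node.right)
    return depth
```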
- Diameter of Binary Tree (DSA · Trees)
Interview questions to prep
- Compare BFS vs DFS for this problem — which fits, and what's the iterative version?
- What's the recursion's space cost on the call stack (O(n) on a skewed tree), and how would you go iterative if that were a concern?
- What's the relationship between this problem's invariant and the BST property (if any)?
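A sketch of the standard one-pass DFS, assuming a bare `TreeNode` class defined inline: compute each subtree's height, and update the best edge count for the path that bends at each node.

```python
class TreeNode:
    # Minimal node type for the sketch.
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def diameter(root):
    # Longest path between any two nodes, measured in edges (LeetCode convention).
    best = 0

    def height(node):
        nonlocal best
        if node is None:
            return 0
        lh, rh = height(node.left), height(node.right)
        best = max(best, lh + rh)  # path that bends at this node
        return 1 + max(lh, rh)

    height(root)
    return best
```

Note the diameter need not pass through the root, which is why the maximum is tracked at every node rather than returned up the recursion.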
DL · Neural network foundations
Interview questions to prep
- Walk me through the forward pass of a 2-layer MLP for binary classification.
- Why can't a single perceptron solve XOR — and how does adding a hidden layer fix it?
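Both questions fit in one sketch: a 2-layer forward pass in NumPy, with hand-picked weights (a known textbook construction, not learned values) showing that one ReLU hidden layer suffices for XOR, which no single linear threshold unit can represent.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Hand-picked weights that solve XOR; in practice these would be learned.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])  # layer 1: 2 inputs -> 2 hidden units
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])               # layer 2: 2 hidden -> 1 output logit

def forward(x):
    h = relu(x @ W1 + b1)  # affine transform + nonlinearity
    return h @ W2          # affine readout; feed through sigmoid for a probability

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, forward(np.array(x, float)))  # XOR: 0.0, 1.0, 1.0, 0.0
```

The key point: XOR's positive points are not linearly separable in input space, but the hidden layer folds the plane so that a single linear readout separates them.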
Interview questions to prep
- Compare ReLU, Leaky ReLU, GELU, and SwiGLU — when does each shine?
- Why did ReLU largely replace sigmoid/tanh in deep networks?
- What is the dying ReLU problem and how do you mitigate it?
Interview questions to prep
- Why does poor initialization cause vanishing or exploding gradients?
- Compare Xavier vs He initialization — which goes with which activation and why?
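A minimal sketch of the two schemes and why the pairing matters: Xavier/Glorot (variance 2/(fan_in+fan_out)) is derived for roughly linear activations like tanh, while He (variance 2/fan_in) compensates for ReLU zeroing half its inputs. Layer sizes here are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Glorot normal: Var(W) = 2 / (fan_in + fan_out); pairs with tanh/sigmoid.
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, (fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He normal: Var(W) = 2 / fan_in; the extra factor 2 compensates for ReLU
    # zeroing (on average) half of each layer's pre-activations.
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, (fan_in, fan_out))

# Sanity check: push a unit-variance batch through a deep ReLU stack.
x = rng.normal(size=(1024, 256))
h = x
for _ in range(10):
    h = np.maximum(0.0, h @ he_init(256, 256))
print(h.std())  # with He init the activation scale stays roughly constant;
                # Xavier under ReLU halves the variance per layer, so signals vanish
```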
References & further reading