Day 22 of 133
Gradient boosting (XGBoost, LightGBM, CatBoost) + DSA Linked List
Why GBDT beats RF on tabular data. XGBoost's tricks. Pick one library and tune it.
DSA · NeetCode Linked List
- Reorder List
Interview questions to prep
- Walk through your pointer hazards — what breaks if you lose track of the head or a prev pointer?
- Can you do this in-place (O(1) extra space)? What's the trick? (See the sketch after this list.)
- How would you detect / handle a cycle, and prove your method's correctness?
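A minimal in-place sketch in Python, assuming the usual singly linked `ListNode`: find the middle with slow/fast pointers, reverse the second half, then interleave the two halves. Only pointers are rewired, so extra space stays O(1).

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val, self.next = val, next

def reorder_list(head: ListNode) -> None:
    """Reorder L0->L1->...->Ln in place into L0->Ln->L1->Ln-1->..."""
    if not head or not head.next:
        return
    # 1. Slow/fast pointers: slow stops at the end of the first half.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    # 2. Detach and reverse the second half.
    curr, slow.next = slow.next, None
    prev = None
    while curr:
        curr.next, prev, curr = prev, curr, curr.next
    # 3. Interleave: one node from the first half, one from the reversed half.
    first = head
    while prev:
        next_first, next_prev = first.next, prev.next
        first.next, prev.next = prev, next_first
        first, prev = next_first, next_prev
```

The pointer hazards from the first question live in steps 2 and 3: forgetting to cut `slow.next` before reversing, or overwriting `first.next` before saving it, silently corrupts the list.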
- Remove Nth Node From End of List
Interview questions to prep
- Walk through your pointer hazards — what breaks if you lose track of the head or a prev pointer?
- Can you do this in-place (O(1) extra space)? What's the trick? (Two-pointer gap, sketched after this list.)
- How would you detect / handle a cycle, and prove your method's correctness? (Floyd's tortoise and hare, also sketched below.)
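Two short sketches under the same `ListNode` assumption: the two-pointer gap for the deletion (a dummy head guards against removing the first node), and Floyd's tortoise-and-hare for cycle detection. Floyd's correctness argument: once both pointers are inside the cycle, the hare closes the gap by exactly one node per step, so a meeting is inevitable if and only if a cycle exists.

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val, self.next = val, next

def remove_nth_from_end(head: ListNode, n: int) -> ListNode:
    dummy = ListNode(0, head)        # guards deletion of the head itself
    lead = trail = dummy
    for _ in range(n + 1):           # open a gap of n+1 nodes
        lead = lead.next
    while lead:                      # when lead falls off the end,
        lead, trail = lead.next, trail.next
    trail.next = trail.next.next     # ...trail sits just before the target
    return dummy.next

def has_cycle(head: ListNode) -> bool:
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:             # hare laps tortoise iff a cycle exists
            return True
    return False
```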
ML · Gradient boosting (XGBoost, LightGBM, CatBoost)
Interview questions to prep
- Walk me through how GBM fits each new tree on the negative gradient of the loss. (A toy fitting loop follows this list.)
- Why is GBM more sensitive to learning rate than Random Forest?
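A toy sketch of the fitting loop, assuming squared-error loss so the negative gradient is just the residual y − F(x); the function and parameter names are illustrative. It also shows why the learning rate bites harder than in a Random Forest: every stage is fit to the residuals left by the shrunken previous stages, whereas RF trees are independent and merely averaged.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm(X, y, n_trees=100, lr=0.1, max_depth=3):
    """Each tree fits the negative gradient of L(y, F) = (y - F)^2 / 2,
    which is simply the residual y - F."""
    init = y.mean()                       # F_0: best constant under squared loss
    F = np.full(len(y), init)
    trees = []
    for _ in range(n_trees):
        residual = y - F                  # negative gradient at the current F
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        F = F + lr * tree.predict(X)      # shrunken additive update
        trees.append(tree)
    return init, trees

def predict_gbm(init, trees, X, lr=0.1):
    return init + lr * sum(tree.predict(X) for tree in trees)
```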
Interview questions to prep
- What does XGBoost do differently from vanilla GBM that made it dominate Kaggle?
- How does XGBoost handle missing values automatically? (Illustrated in the sketch below.)
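A usage sketch with the scikit-learn wrapper; the hyperparameter values are illustrative, not tuned recommendations. The NaNs are the point: XGBoost's sparsity-aware splits learn a default direction for missing values at each node, so no imputation is needed.

```python
import numpy as np
import xgboost as xgb

# NaNs are legal inputs: each split learns a default branch for missing values.
X = np.array([[1.0, np.nan],
              [2.0, 3.0],
              [np.nan, 1.0],
              [4.0, 2.0],
              [3.0, np.nan],
              [0.5, 4.0]])
y = np.array([0, 1, 0, 1, 1, 0])

model = xgb.XGBClassifier(
    n_estimators=200,
    learning_rate=0.05,
    max_depth=4,
    subsample=0.8,          # row subsampling per tree
    colsample_bytree=0.8,   # column subsampling per tree
    reg_lambda=1.0,         # L2 term from the regularized objective
)
model.fit(X, y)
print(model.predict(X))
```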
Interview questions to prep
- When would you pick LightGBM over XGBoost? (See the sketch after this list.)
- What problem does CatBoost's ordered boosting solve?
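A sketch of the usual decision points, on toy data: LightGBM grows trees leaf-wise with histogram binning (fast on large or wide data) and consumes pandas `category` columns natively, while CatBoost's ordered boosting targets the target leakage that naive target encoding of categoricals introduces. Values here are illustrative.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb

# Columns with pandas `category` dtype are treated as categoricals natively,
# so no one-hot encoding is needed.
df = pd.DataFrame({
    "city": pd.Categorical(["a", "b", "a", "c", "b", "a", "c", "b"]),
    "x":    [1.0, 2.0, 0.5, 3.0, 2.5, 1.5, 2.8, 0.9],
})
y = np.array([0, 1, 0, 1, 1, 0, 1, 0])

model = lgb.LGBMClassifier(
    n_estimators=50,
    learning_rate=0.1,
    min_child_samples=1,   # only for toy-sized data; keep the default on real data
)
model.fit(df, y)
print(model.predict(df))
```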
References & further reading
- XGBoost docs: parameters + tuning (XGBoost)
- LightGBM docs (Microsoft)
- StatQuest: Statistics & ML playlists (YouTube)