Day 27 of 133

Feature selection (filter / wrapper / embedded / SHAP)

Mutual information, L1, permutation importance, and SHAP for fair attributions.

DSA · NeetCode Tries

  • Interview questions to prep

    1. Why a trie over a hash map — what queries does the trie make cheaper?
    2. What's the time/space trade-off vs storing all suffixes?
    3. How would you support deletion or wildcard matching efficiently?
  • Word Search II

    Interview questions to prep

    1. Why a trie over a hash map — what queries does the trie make cheaper?
    2. What's the time/space trade-off vs storing all suffixes?
    3. How would you support deletion or wildcard matching efficiently?
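The trie questions above can be answered concretely with a minimal sketch. The class below (names are illustrative, not from any library) supports the prefix query a hash map can't do cheaply, `.` single-character wildcard matching via DFS, and deletion with pruning of dead-end nodes:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def starts_with(self, prefix):
        # The query a hash map of whole words can't do in O(len(prefix)).
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

    def search(self, word):
        # '.' matches any single character; wildcards force a DFS branch.
        def dfs(node, i):
            if i == len(word):
                return node.is_word
            ch = word[i]
            if ch == '.':
                return any(dfs(child, i + 1) for child in node.children.values())
            return ch in node.children and dfs(node.children[ch], i + 1)
        return dfs(self.root, 0)

    def delete(self, word):
        # Unmark the word, then prune nodes that no longer lead anywhere.
        def dfs(node, i):
            if i == len(word):
                node.is_word = False
            else:
                ch = word[i]
                if ch in node.children and dfs(node.children[ch], i + 1):
                    del node.children[ch]
            # True tells the parent this node is now a dead end.
            return not node.is_word and not node.children
        dfs(self.root, 0)
```

Note the deletion subtlety: deleting `"car"` while `"card"` exists must only unmark the node, not prune it, which is what the `not node.children` check guards.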

ML · Feature selection

  • Interview questions to prep

    1. Compare filter, wrapper, and embedded feature selection.
    2. When would mutual information beat chi-squared as a filter, and vice versa?
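A filter-method sketch for question 2, using scikit-learn (the dataset here is synthetic, for illustration only): score each feature independently against the target, then keep the top k. Mutual information can pick up nonlinear dependence, while chi-squared requires nonnegative count-like features and tests frequency association.

```python
# Filter selection: rank features by a univariate score, keep the top k.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
selector = SelectKBest(mutual_info_classif, k=3).fit(X, y)
print(np.argsort(selector.scores_)[-3:])  # indices of the top-3 scored features
```

The filter runs before (and independently of) any model, which is what makes it cheap and also what makes it blind to feature interactions.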
  • Embedded: L1, tree-based importance

    Interview questions to prep

    1. Why is L1 effectively a feature selector?
    2. When would you trust tree-based importance over L1 selection?
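Why L1 is effectively a feature selector can be shown in a few lines: the L1 penalty drives some coefficients exactly to zero, so the fitted model doubles as a selection mask. A hedged sketch on synthetic data (the `alpha` value is arbitrary, not a recommendation):

```python
# Embedded selection: fit once, read the surviving coefficients.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=1.0, random_state=0)
model = Lasso(alpha=1.0).fit(X, y)
selected = np.flatnonzero(model.coef_)  # features with nonzero coefficients
print(len(selected), "of", X.shape[1], "features kept")
```

Tree-based importance, by contrast, captures nonlinear and interaction effects but can inflate the importance of high-cardinality features, which is one answer to question 2.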
  • Interview questions to prep

    1. How does SHAP compute fair attributions, and how does it relate to game theory?
    2. Where does SHAP mislead — what's a worked example of correlated features confusing SHAP?
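Both SHAP questions can be grounded in the underlying game theory without the library: a player's Shapley value is its marginal contribution averaged over all orderings. The toy coalition game below (my own illustration, not a real model) has two "perfectly correlated" features A and B, either of which alone yields the full payoff:

```python
# Exact Shapley values by enumerating orderings (fine for tiny player sets).
from itertools import permutations

def shapley_values(players, value):
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            phi[p] += value(coalition) - before  # marginal contribution
    return {p: phi[p] / len(perms) for p in players}

# Toy game: A and B are redundant copies of the signal; C is noise.
def value(S):
    return 1.0 if S & {"A", "B"} else 0.0

print(shapley_values(["A", "B", "C"], value))
# A and B each get 0.5, C gets 0.0
```

This is also a worked example for question 2: each correlated feature gets only half the credit, so either can look "half as important" as the signal it fully carries, and dropping one based on that reading would be a mistake.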
