Assume we train a linear model to predict a numeric outcome.
A feature’s model weight essentially quantifies how much the outcome variable increases for each unit increase in the predictor’s value.
I very recently stumbled upon partial dependence. If I understand it correctly, the partial dependence of a variable quantifies the influence of this variable on the dependent variable with all other variables marginalized out, i.e., all other things held constant.
Am I right in thinking that partial dependence of an independent variable measures something very similar to a feature weight?
For the special case of an additive linear model this is true.
However, partial dependence profiles can also be used with more complex model structures, such as boosted trees or linear models with interactions. As you correctly point out, their usefulness is sometimes diminished by the ceteris paribus assumption: the profile averages predictions over configurations of the other features that may be unrealistic.
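To make the special case concrete, here is a minimal sketch (using a hypothetical simulated dataset) showing that for an additive linear model, the partial dependence curve of a feature is a straight line whose slope equals that feature's fitted coefficient:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Simulated outcome with known coefficients; feature 0 has weight 3.0.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)

def partial_dependence(model, X, feature, grid):
    """Average prediction with `feature` forced to each grid value,
    marginalizing over the observed values of the other features."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        pd_values.append(model.predict(X_mod).mean())
    return np.array(pd_values)

grid = np.linspace(-2.0, 2.0, 5)
pd_curve = partial_dependence(model, X, feature=0, grid=grid)

# For an additive linear model the PD curve is exactly linear,
# and its slope matches the fitted weight of the feature.
slope = (pd_curve[-1] - pd_curve[0]) / (grid[-1] - grid[0])
print(slope, model.coef_[0])
```

With a nonlinear model (e.g. a boosted tree) the same function would trace out a curve rather than a line, which is where partial dependence adds information beyond a single weight.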
Answered by Michael M on November 12, 2021