What are decision trees?

A decision tree is a supervised learning algorithm that is mostly used for classification problems; notably, it works for both categorical and continuous dependent variables.

In this algorithm, we split the population into two or more homogeneous sets based on the most significant attributes (independent variables), so that the resulting groups are as distinct as possible.

A decision tree is a flowchart-like tree structure, where each internal node (non-leaf node) denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node (or terminal node) holds a value for the target variable.
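This structure is easy to see in code. Below is a minimal sketch using scikit-learn (an assumed choice of library; the dataset and parameter values are purely illustrative) that fits a shallow tree and prints its node tests and leaf values:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative only: any small labeled dataset would do.
data = load_iris()

# Keep the tree shallow so the printed structure stays readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Each internal node is printed as a test on an attribute;
# each leaf line shows the value assigned to the target variable.
print(export_text(tree, feature_names=data.feature_names))
```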

Various splitting criteria are used to choose these tests, such as Gini impurity, information gain (computed from entropy), and chi-square.
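To make the first two concrete, the toy computation below (the definitions are standard; the example split itself is made up) shows how Gini impurity and entropy score a candidate split, and how information gain falls out of the entropy:

```python
import math
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Entropy in bits: -sum of p * log2(p) over the classes."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

parent = ["yes"] * 5 + ["no"] * 5              # perfectly mixed node
left, right = ["yes"] * 4 + ["no"], ["yes"] + ["no"] * 4

print(gini(parent))                            # 0.5 (maximum for two classes)
print(gini(left), gini(right))                 # 0.32 each: purer children

# Information gain = parent entropy - weighted average child entropy
gain = entropy(parent) - 0.5 * entropy(left) - 0.5 * entropy(right)
print(gain)                                    # ~0.278 bits
```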

How do we train decision trees?

  1. Start at the root node.
  2. For each variable X, find the split set S that minimizes the sum of the node impurities in the two child nodes, and choose the split {X, S} that gives the minimum over all X and S (see the sketch after this list).
  3. If a stopping criterion is reached, exit. Otherwise, apply step 2 to each child node in turn.
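Here is a stripped-down sketch of this greedy recursion, under simplifying assumptions (numeric features, binary splits, Gini impurity, and a plain depth limit as the stopping criterion; real implementations add pruning, categorical handling, and more):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y):
    """Step 2: over every variable and threshold, return the split
    minimizing the weighted sum of the two child-node impurities."""
    best = None  # (impurity_sum, feature_index, threshold)
    n = len(y)
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [y[i] for i in range(n) if X[i][f] <= t]
            right = [y[i] for i in range(n) if X[i][f] > t]
            if not left or not right:
                continue  # degenerate split, skip
            score = len(left) / n * gini(left) + len(right) / n * gini(right)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build(X, y, depth=0, max_depth=3):
    """Steps 1-3: split recursively until a stopping criterion is hit."""
    if len(set(y)) == 1 or depth == max_depth:
        return Counter(y).most_common(1)[0][0]  # leaf: majority class
    split = best_split(X, y)
    if split is None:
        return Counter(y).most_common(1)[0][0]
    _, f, t = split
    li = [i for i in range(len(y)) if X[i][f] <= t]
    ri = [i for i in range(len(y)) if X[i][f] > t]
    return (f, t,  # internal node: (feature, threshold, left, right)
            build([X[i] for i in li], [y[i] for i in li], depth + 1, max_depth),
            build([X[i] for i in ri], [y[i] for i in ri], depth + 1, max_depth))

X = [[2.0], [3.0], [10.0], [11.0]]
y = ["a", "a", "b", "b"]
print(build(X, y))  # (0, 3.0, 'a', 'b'): split feature 0 at 3.0
```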

What are the main parameters of the decision tree model?

  • maximum tree depth
  • minimum samples per leaf node
  • impurity criterion
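In a library such as scikit-learn (used here as one assumed, concrete implementation), these correspond directly to constructor arguments:

```python
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier(
    max_depth=5,          # maximum tree depth
    min_samples_leaf=10,  # minimum samples per leaf node
    criterion="gini",     # impurity criterion ("gini" or "entropy")
)
```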

The impurity criterion deserves a closer look: at each step we want the split that minimizes the sum of the node impurities in the two children, and popular ways to measure impurity are the Gini impurity and the entropy that underlies information gain.

What are the benefits of a single decision tree compared to more complex models?

  • it is easy to implement and to interpret: the whole model can be visualized, and each prediction traced along a single path of simple tests
  • training and inference are fast
  • it needs little data preparation (no feature scaling, and it handles both categorical and continuous variables)
