.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/ensemble/plot_forest_importances.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_ensemble_plot_forest_importances.py>`
        to download the full example code, or to run this example in your
        browser via Binder.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_ensemble_plot_forest_importances.py:

==========================================
Feature importances with a forest of trees
==========================================

This example shows the use of a forest of trees to evaluate the importance of
features on an artificial classification task. The blue bars are the feature
importances of the forest, along with their inter-tree variability represented
by the error bars.

As expected, the plot suggests that 3 features are informative, while the
remaining features are not.

.. GENERATED FROM PYTHON SOURCE LINES 15-21

.. code-block:: Python

    # Authors: The scikit-learn developers
    # SPDX-License-Identifier: BSD-3-Clause

    import matplotlib.pyplot as plt

.. GENERATED FROM PYTHON SOURCE LINES 22-28

Data generation and model fitting
---------------------------------

We generate a synthetic dataset with only 3 informative features. We
explicitly do not shuffle the dataset, to ensure that the informative features
correspond to the first three columns of X. In addition, we split our dataset
into training and testing subsets.

.. GENERATED FROM PYTHON SOURCE LINES 28-43

.. code-block:: Python

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(
        n_samples=1000,
        n_features=10,
        n_informative=3,
        n_redundant=0,
        n_repeated=0,
        n_classes=2,
        random_state=0,
        shuffle=False,
    )
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
.. GENERATED FROM PYTHON SOURCE LINES 44-45

A random forest classifier will be fitted to compute the feature importances.

.. GENERATED FROM PYTHON SOURCE LINES 45-51

.. code-block:: Python

    from sklearn.ensemble import RandomForestClassifier

    feature_names = [f"feature {i}" for i in range(X.shape[1])]
    forest = RandomForestClassifier(random_state=0)
    forest.fit(X_train, y_train)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    RandomForestClassifier(random_state=0)
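As a quick sanity check (not part of the original example), we can verify that
the fitted forest actually generalizes before interpreting its importances.
The sketch below reproduces the dataset, split, and model from above; the
variable names mirror the example, and the exact score depends on the seeds.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Reproduce the dataset and the stratified split used in the example.
X, y = make_classification(
    n_samples=1000,
    n_features=10,
    n_informative=3,
    n_redundant=0,
    n_repeated=0,
    n_classes=2,
    random_state=0,
    shuffle=False,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

forest = RandomForestClassifier(random_state=0)
forest.fit(X_train, y_train)

# Accuracy on the held-out test set; with 3 informative features this
# should be well above the 0.5 chance level for a balanced binary task.
print(f"Test accuracy: {forest.score(X_test, y_test):.3f}")
```

A model that barely beats chance would make any importance ranking, MDI or
permutation-based, hard to interpret.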
.. GENERATED FROM PYTHON SOURCE LINES 52-62

Feature importance based on mean decrease in impurity
-----------------------------------------------------

Feature importances are provided by the fitted attribute
`feature_importances_`. For each feature, the importance is the impurity
decrease accumulated within each tree, averaged over the trees of the forest;
the error bars below show the standard deviation across trees.

.. warning::
    Impurity-based feature importances can be misleading for **high
    cardinality** features (many unique values). See
    :ref:`permutation_importance` as an alternative below.

.. GENERATED FROM PYTHON SOURCE LINES 62-73

.. code-block:: Python

    import time

    import numpy as np

    start_time = time.time()
    importances = forest.feature_importances_
    std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
    elapsed_time = time.time() - start_time

    print(f"Elapsed time to compute the importances: {elapsed_time:.3f} seconds")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Elapsed time to compute the importances: 0.005 seconds

.. GENERATED FROM PYTHON SOURCE LINES 74-75

Let's plot the impurity-based importance.

.. GENERATED FROM PYTHON SOURCE LINES 75-85

.. code-block:: Python

    import pandas as pd

    forest_importances = pd.Series(importances, index=feature_names)

    fig, ax = plt.subplots()
    forest_importances.plot.bar(yerr=std, ax=ax)
    ax.set_title("Feature importances using MDI")
    ax.set_ylabel("Mean decrease in impurity")
    fig.tight_layout()

.. image-sg:: /auto_examples/ensemble/images/sphx_glr_plot_forest_importances_001.png
    :alt: Feature importances using MDI
    :srcset: /auto_examples/ensemble/images/sphx_glr_plot_forest_importances_001.png
    :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 86-93

We observe that, as expected, the first three features are found important.
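To make the high-cardinality warning concrete, here is a small sketch that is
not part of the original example (the dataset shape and variable names are
ours): we append a purely random continuous column, so every value is unique,
and compare its MDI score with its permutation importance on a test set. MDI
tends to assign it some importance simply because it offers many candidate
split points, while permutation importance stays near zero.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Three genuinely informative features...
X, y = make_classification(
    n_samples=1000, n_features=3, n_informative=3, n_redundant=0,
    random_state=0, shuffle=False,
)
# ...plus one purely random, high-cardinality column (every value unique).
X = np.hstack([X, rng.rand(X.shape[0], 1)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# MDI is computed on the training data; permutation importance on the test set.
mdi_random = forest.feature_importances_[-1]
perm_random = permutation_importance(
    forest, X_test, y_test, n_repeats=10, random_state=42
).importances_mean[-1]

print(f"MDI importance of the random column:         {mdi_random:.3f}")
print(f"Permutation importance of the random column: {perm_random:.3f}")
```

The random column carries no signal, yet its MDI score is visibly above its
test-set permutation importance, which is the bias the warning refers to.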
Feature importance based on feature permutation
-----------------------------------------------

Permutation feature importance overcomes limitations of the impurity-based
feature importance: it has no bias toward high-cardinality features and can be
computed on a left-out test set.

.. GENERATED FROM PYTHON SOURCE LINES 93-104

.. code-block:: Python

    from sklearn.inspection import permutation_importance

    start_time = time.time()
    result = permutation_importance(
        forest, X_test, y_test, n_repeats=10, random_state=42, n_jobs=2
    )
    elapsed_time = time.time() - start_time
    print(f"Elapsed time to compute the importances: {elapsed_time:.3f} seconds")

    forest_importances = pd.Series(result.importances_mean, index=feature_names)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Elapsed time to compute the importances: 1.479 seconds

.. GENERATED FROM PYTHON SOURCE LINES 105-109

The computation of full permutation importance is more costly: each feature is
shuffled n times and the model is re-evaluated on the permuted data to
estimate its importance. Please see :ref:`permutation_importance` for more
details. We can now plot the importance ranking.

.. GENERATED FROM PYTHON SOURCE LINES 109-117

.. code-block:: Python

    fig, ax = plt.subplots()
    forest_importances.plot.bar(yerr=result.importances_std, ax=ax)
    ax.set_title("Feature importances using permutation on full model")
    ax.set_ylabel("Mean accuracy decrease")
    fig.tight_layout()
    plt.show()

.. image-sg:: /auto_examples/ensemble/images/sphx_glr_plot_forest_importances_002.png
    :alt: Feature importances using permutation on full model
    :srcset: /auto_examples/ensemble/images/sphx_glr_plot_forest_importances_002.png
    :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 118-121

The same features are detected as most important by both methods, although
their relative importances vary. As seen on the plots, MDI is less likely than
permutation importance to fully omit a feature.
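The agreement between the two rankings can also be checked numerically rather
than by eye. The sketch below is our addition, not part of the original
example: it reproduces the example's setup, puts both importance measures in
one DataFrame, and prints the top-3 features under each method, which should
both recover the three informative columns.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Reproduce the example's dataset, split, and fitted forest.
X, y = make_classification(
    n_samples=1000, n_features=10, n_informative=3, n_redundant=0,
    n_repeated=0, n_classes=2, random_state=0, shuffle=False,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(
    forest, X_test, y_test, n_repeats=10, random_state=42
)

# One row per feature, one column per importance measure.
ranking = pd.DataFrame(
    {"MDI": forest.feature_importances_, "permutation": result.importances_mean},
    index=[f"feature {i}" for i in range(X.shape[1])],
)
print(ranking.sort_values("MDI", ascending=False).head(3).index.tolist())
print(ranking.sort_values("permutation", ascending=False).head(3).index.tolist())
```

Sorting the same DataFrame by either column makes the shared top-3 ranking,
and any disagreement further down, easy to inspect.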
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 1.929 seconds)

.. _sphx_glr_download_auto_examples_ensemble_plot_forest_importances.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/scikit-learn/scikit-learn/main?urlpath=lab/tree/notebooks/auto_examples/ensemble/plot_forest_importances.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_forest_importances.ipynb <plot_forest_importances.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_forest_importances.py <plot_forest_importances.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_forest_importances.zip <plot_forest_importances.zip>`

.. include:: plot_forest_importances.recommendations

.. only:: html

  .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_