Version 1.7#

For a short description of the main highlights of the release, please refer to Release Highlights for scikit-learn 1.7.

Legend for changelogs

  • Major Feature something big that you couldn’t do before.

  • Feature something that you couldn’t do before.

  • Efficiency an existing feature now may not require as much computation or memory.

  • Enhancement a miscellaneous minor improvement.

  • Fix something that previously didn’t work as documented – or according to reasonable expectations – should now work.

  • API Change you will need to change your code to have the same effect in the future; or a feature will be removed in the future.

Version 1.7.0#

June 2025

Changed models#

  • Fix Change the ConvergenceWarning message of estimators that rely on the "lbfgs" optimizer internally to be more informative and to avoid suggesting an increase of the maximum number of iterations when it is not user-settable or when the convergence problem occurs before reaching it (see the example below). By Olivier Grisel. #31316
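
A minimal sketch of where this message appears (assuming LogisticRegression, which uses "lbfgs" by default, and a deliberately small max_iter; the data set is an illustrative assumption):

    import warnings

    from sklearn.datasets import make_classification
    from sklearn.exceptions import ConvergenceWarning
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always", category=ConvergenceWarning)
        # One iteration is not enough for lbfgs to converge, so a
        # ConvergenceWarning with the reworded message is emitted.
        LogisticRegression(solver="lbfgs", max_iter=1).fit(X, y)

    print([str(w.message) for w in caught])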

Changes impacting many modules#

  • Sparse update: As part of SciPy's migration from spmatrix to sparray, all internal uses of sparse data structures now support both sparray and spmatrix; all manipulations of sparse objects should work for either container type (as illustrated below). This is pass 1 of a migration toward sparray (see SciPy migration to sparray). By Dan Schult #30858
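
A minimal sketch of what this enables, assuming a SciPy version recent enough to provide the sparray containers (csr_array was added in SciPy 1.8); the estimator and data are illustrative assumptions:

    import numpy as np
    from scipy import sparse

    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_dense = rng.random((100, 10))
    y = (X_dense[:, 0] > 0.5).astype(int)

    X_array = sparse.csr_array(X_dense)    # new-style sparray container
    X_matrix = sparse.csr_matrix(X_dense)  # legacy spmatrix container

    # Estimators that accept sparse input accept either container type.
    LogisticRegression().fit(X_array, y)
    LogisticRegression().fit(X_matrix, y)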

Support for Array API#

Additional estimators and functions have been updated to include support for all Array API compliant inputs.

See Array API support (experimental) for more details.
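
A minimal sketch of how the experimental dispatch is enabled (this assumes the optional array-api-compat package is installed; which estimators and functions participate depends on the version, see the page above):

    import numpy as np

    import sklearn
    from sklearn.preprocessing import MinMaxScaler

    X = np.asarray([[0.0, 1.0], [2.0, 3.0]])

    with sklearn.config_context(array_api_dispatch=True):
        # With a NumPy input this behaves as usual; with an Array API input
        # (e.g. an array_api_strict or torch array), supported estimators
        # return outputs in the same array namespace as the input.
        X_scaled = MinMaxScaler().fit_transform(X)

    print(X_scaled)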

Metadata routing#

Refer to the Metadata Routing User Guide for more details.

sklearn.base#

sklearn.calibration#

sklearn.compose#

sklearn.covariance#

  • Fix Support for n_samples == n_features in sklearn.covariance.MinCovDet has been restored (see the example below). By Antony Lee. #30483
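
A minimal sketch of the restored behavior (the random square data set is an illustrative assumption; the estimate in this degenerate regime is of limited statistical value, the point is only that fitting no longer fails):

    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 10))  # n_samples == n_features

    cov = MinCovDet(random_state=0).fit(X)
    print(cov.covariance_.shape)  # (10, 10)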

sklearn.datasets#

sklearn.decomposition#

sklearn.ensemble#

sklearn.feature_selection#

sklearn.gaussian_process#

sklearn.inspection#

sklearn.linear_model#

sklearn.manifold#

  • Enhancement manifold.MDS will switch to use n_init=1 by default, starting from version 1.9. By Dmitry Kobak #31117

  • Fix manifold.MDS now correctly handles non-metric MDS. Furthermore, the returned stress value now corresponds to the returned embedding and normalized stress is now allowed for metric MDS. By Dmitry Kobak #30514

  • Fix manifold.MDS now uses eps=1e-6 by default, and the convergence criterion has been adjusted so that it makes sense for both metric and non-metric MDS and follows the reference R implementation. The formula for normalized stress has been adjusted to follow Kruskal's original definition (see the sketch below). By Dmitry Kobak #31117
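
A minimal sketch touching the parameters affected by these changes (the data and parameter values are illustrative assumptions):

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    X = rng.random((30, 5))

    mds = MDS(
        n_components=2,
        metric=True,
        n_init=1,                # explicit value; becomes the default in 1.9
        eps=1e-6,                # new default convergence tolerance
        normalized_stress=True,  # now also allowed for metric MDS
        random_state=0,
    )
    embedding = mds.fit_transform(X)

    # The reported stress now corresponds to the returned embedding.
    print(embedding.shape, mds.stress_)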

sklearn.metrics#

sklearn.mixture#

  • Feature Added a lower_bounds_ attribute to the mixture.BaseMixture class that stores the lower bound reached at each iteration, providing insight into the convergence behavior of mixture models such as mixture.GaussianMixture (see the example after this list). By Manideep Yenugula #28559

  • Efficiency Simplified redundant computation when estimating covariances in GaussianMixture with covariance_type="spherical" or covariance_type="diag". By Leonce Mekinda and Olivier Grisel #30414

  • Efficiency GaussianMixture now consistently operates at float32 precision when fitted with float32 data to improve training speed and memory efficiency. Previously, part of the computation would be implicitly cast to float64. By Olivier Grisel and Omar Salman. #30415
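
A minimal sketch combining these changes (the data set is an illustrative assumption, and it is assumed that the fitted parameters follow the float32 input dtype, as described above):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 3)).astype(np.float32)

    gm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
    gm.fit(X)

    # lower_bounds_ records the lower bound reached at each EM iteration.
    print(gm.lower_bounds_)
    # Computation stays in float32 when the input is float32.
    print(gm.means_.dtype)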

sklearn.model_selection#

sklearn.multiclass#

sklearn.multioutput#

sklearn.neural_network#

sklearn.pipeline#

sklearn.preprocessing#

sklearn.svm#

sklearn.utils#

Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.6, including:

4hm3d, Aaron Schumacher, Abhijeetsingh Meena, Acciaro Gennaro Daniele, Achraf Tasfaout, Adrien Linares, Adrin Jalali, Agriya Khetarpal, Aiden Frank, Aitsaid Azzedine Idir, ajay-sentry, Akanksha Mhadolkar, Alfredo Saucedo, Anderson Chaves, Andres Guzman-Ballen, Aniruddha Saha, antoinebaker, Antony Lee, Arjun S, ArthurDbrn, Arturo, Arturo Amor, ash, Ashton Powell, ayoub.agouzoul, Bagus Tris Atmaja, Benjamin Danek, Boney Patel, Camille Troillard, Chems Ben, Christian Lorentzen, Christian Veenhuis, Christine P. Chai, claudio, Code_Blooded, Colas, Colin Coe, Connor Lane, Corey Farwell, Daniel Agyapong, Dan Schult, Dea María Léon, Deepak Saldanha, dependabot[bot], Dimitri Papadopoulos Orfanos, Dmitry Kobak, Domenico, Elham Babaei, emelia-hdz, EmilyXinyi, Emma Carballal, Eric Larson, fabianhenning, Gael Varoquaux, Gil Ramot, Gordon Grey, Goutam, G Sreeja, Guillaume Lemaitre, Haesun Park, Hanjun Kim, Helder Geovane Gomes de Lima, Henri Bonamy, Hleb Levitski, Hugo Boulenger, IlyaSolomatin, Irene, Jérémie du Boisberranger, Jérôme Dockès, JoaoRodriguesIST, Joel Nothman, Josh, Kevin Klein, Loic Esteve, Lucas Colley, Luc Rocher, Lucy Liu, Luis M. B. Varona, lunovian, Mamduh Zabidi, Marc Bresson, Marco Edward Gorelli, Marco Maggi, Maren Westermann, Marie Sacksick, Martin Jurča, Miguel González Duque, Mihir Waknis, Mohamed Ali SRIR, Mohamed DHIFALLAH, mohammed benyamna, Mohit Singh Thakur, Mounir Lbath, myenugula, Natalia Mokeeva, Olivier Grisel, omahs, Omar Salman, Pedro Lopes, Pedro Olivares, Preyas Shah, Radovenchyk, Rahil Parikh, Rémi Flamary, Reshama Shaikh, Rishab Saini, rolandrmgservices, SanchitD, Santiago Castro, Santiago Víquez, scikit-learn-bot, Scott Huberty, Shruti Nath, Siddharth Bansal, Simarjot Sidhu, Sortofamudkip, sotagg, Sourabh Kumar, Stefan, Stefanie Senger, Stefano Gaspari, Stephen Pardy, Success Moses, Sylvain Combettes, Tahar Allouche, Thomas J. Fan, Thomas Li, ThorbenMaa, Tim Head, Umberto Fasci, UV, Vasco Pereira, Vassilis Margonis, Velislav Babatchev, Victoria Shevchenko, viktor765, Vipsa Kamani, Virgil Chan, vpz, Xiao Yuan, Yaich Mohamed, Yair Shimony, Yao Xiao, Yaroslav Halchenko, Yulia Vilensky, Yuvi Panda