000 03759cam a22003855i 4500
003 RNL
005 20260330053635.0
008 161027s2020 sz |||| o |||| 0|eng
020 _a9783030429232
040 _aRCL
082 0 4 _aR 519.536 B48S
100 1 _aBerk, Richard A.
_930943
245 1 0 _aStatistical Learning from a Regression Perspective /
_cRichard A. Berk.
250 _a3rd ed.
260 _aCham :
_bSpringer Cham,
_c2020.
300 _axxvi, 432 p. :
_b36 b/w illustrations, 107 illustrations in colour ;
_c23 cm.
490 1 _aSpringer Texts in Statistics
505 0 _aStatistical Learning as a Regression Problem -- Splines, Smoothers, and Kernels -- Classification and Regression Trees (CART) -- Bagging -- Random Forests -- Boosting -- Support Vector Machines -- Some Other Procedures Briefly -- Broader Implications and a Bit of Craft Lore.
520 _aThis textbook considers statistical learning applications when interest centers on the conditional distribution of the response variable, given a set of predictors, and when it is important to characterize how the predictors are related to the response. As a first approximation, this can be seen as an extension of nonparametric regression. This fully revised new edition includes important developments over the past 8 years. Consistent with modern data analytics, it emphasizes that a proper statistical learning data analysis derives from sound data collection, intelligent data management, appropriate statistical procedures, and an accessible interpretation of results. A continued emphasis on the implications for practice runs through the text. Among the statistical learning procedures examined are bagging, random forests, boosting, support vector machines, and neural networks. Response variables may be quantitative or categorical. As in the first edition, a unifying theme is supervised learning that can be treated as a form of regression analysis. Key concepts and procedures are illustrated with real applications, especially those with practical implications. A principal instance is the need to take explicit account of asymmetric costs in the fitting process. For example, in some situations false positives may be far less costly than false negatives. Also provided is helpful craft lore, such as not automatically ceding data analysis decisions to a fitting algorithm. In many settings, subject-matter knowledge should trump formal fitting criteria. Yet another important message is to appreciate the limitations of one's data and not apply statistical learning procedures that require more than the data can provide. The material is written for upper undergraduate and graduate students in the social and life sciences and for researchers who want to apply statistical learning procedures to scientific and policy problems.
The author uses this book in a course on modern regression for the social, behavioral, and biological sciences. Intuitive explanations and visual representations are prominent. All of the analyses included are done in R, with code routinely provided.
546 _aEnglish
650 0 _aProbabilities.
_930944
650 0 _aPsychological measurement.
_930945
650 0 _aPsychology
_xMethodology.
_930946
650 0 _aPublic health.
650 0 _aSocial sciences.
650 0 _aStatistics.
650 1 4 _aStatistical Theory and Methods.
_930939
650 2 4 _aMethodology of the Social Sciences.
_930948
650 2 4 _aProbability Theory and Stochastic Processes.
_930938
650 2 4 _aPsychological Methods/Evaluation.
_930949
650 2 4 _aPublic Health.
650 2 4 _aStatistics for Social Sciences, Humanities, Law.
_930951
856 4 0 _uhttps://link.springer.com/book/10.1007/978-3-030-40189-4
942 _cBK
999 _c48032
_d48032