
Transcript

Springer Texts in Statistics

Gareth James · Daniela Witten · Trevor Hastie · Robert Tibshirani

An Introduction to Statistical Learning: with Applications in R

An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform.

Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.

Gareth James is a professor of statistics at University of Southern California. He has published an extensive body of methodological work in the domain of statistical learning with particular emphasis on high-dimensional and functional data. The conceptual framework for this book grew out of his MBA elective courses in this area.

Daniela Witten is an assistant professor of biostatistics at University of Washington. Her research focuses largely on high-dimensional statistical machine learning. She has contributed to the translation of statistical learning techniques to the field of genomics, through collaborations and as a member of the Institute of Medicine committee that led to the report Evolution of Translational Omics.

Trevor Hastie and Robert Tibshirani are professors of statistics at Stanford University, and are co-authors of the successful textbook Elements of Statistical Learning. Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap.

Statistics
ISBN 978-1-4614-7137-0

Springer Texts in Statistics, Volume 103
Series Editors: G. Casella, S. Fienberg, I. Olkin
For further volumes: http://www.springer.com/series/417


Gareth James · Daniela Witten · Trevor Hastie · Robert Tibshirani

An Introduction to Statistical Learning: with Applications in R

Gareth James, Department of Information and Operations Management, University of Southern California, Los Angeles, CA, USA
Daniela Witten, Department of Biostatistics, University of Washington, Seattle, WA, USA
Trevor Hastie, Department of Statistics, Stanford University, Stanford, CA, USA
Robert Tibshirani, Department of Statistics, Stanford University, Stanford, CA, USA

ISSN 1431-875X
ISBN 978-1-4614-7137-0
ISBN 978-1-4614-7138-7 (eBook)
DOI 10.1007/978-1-4614-7138-7
Springer New York Heidelberg Dordrecht London
Library of Congress Control Number: 2013936251

© Springer Science+Business Media New York 2013

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper.

Springer is part of Springer Science+Business Media (www.springer.com)

To our parents:
Alison and Michael James
Chiara Nappi and Edward Witten
Valerie and Patrick Hastie
Vera and Sami Tibshirani

and to our families:
Michael, Daniel, and Catherine
Ari
Samantha, Timothy, and Lynda
Charlie, Ryan, Julie, and Cheryl


Preface

Statistical learning refers to a set of tools for modeling and understanding complex datasets. It is a recently developed area in statistics and blends with parallel developments in computer science and, in particular, machine learning. The field encompasses many methods such as the lasso and sparse regression, classification and regression trees, and boosting and support vector machines.

With the explosion of "Big Data" problems, statistical learning has become a very hot field in many scientific areas as well as marketing, finance, and other business disciplines. People with statistical learning skills are in high demand.

One of the first books in this area—The Elements of Statistical Learning (ESL) (Hastie, Tibshirani, and Friedman)—was published in 2001, with a second edition in 2009. ESL has become a popular text not only in statistics but also in related fields. One of the reasons for ESL's popularity is its relatively accessible style. But ESL is intended for individuals with advanced training in the mathematical sciences. An Introduction to Statistical Learning (ISL) arose from the perceived need for a broader and less technical treatment of these topics. In this new book, we cover many of the same topics as ESL, but we concentrate more on the applications of the methods and less on the mathematical details. We have created labs illustrating how to implement each of the statistical learning methods using the popular statistical software package R. These labs provide the reader with valuable hands-on experience.

This book is appropriate for advanced undergraduates or master's students in statistics or related quantitative fields or for individuals in other

disciplines who wish to use statistical learning tools to analyze their data. It can be used as a textbook for a course spanning one or two semesters.

We would like to thank several readers for valuable comments on preliminary drafts of this book: Pallavi Basu, Alexandra Chouldechova, Patrick Danaher, Will Fithian, Luella Fu, Sam Gross, Max Grazier G'Sell, Courtney Paulson, Xinghao Qiao, Elisa Sheng, Noah Simon, Kean Ming Tan, and Xin Lu Tan.

It's tough to make predictions, especially about the future.
-Yogi Berra

Gareth James, Los Angeles, USA
Daniela Witten, Seattle, USA
Trevor Hastie, Palo Alto, USA
Robert Tibshirani, Palo Alto, USA

Contents

Preface  vii
1 Introduction  1
2 Statistical Learning  15
  2.1 What Is Statistical Learning?  15
    2.1.1 Why Estimate f?  17
    2.1.2 How Do We Estimate f?  21
    2.1.3 The Trade-Off Between Prediction Accuracy and Model Interpretability  24
    2.1.4 Supervised Versus Unsupervised Learning  26
    2.1.5 Regression Versus Classification Problems  28
  2.2 Assessing Model Accuracy  29
    2.2.1 Measuring the Quality of Fit  29
    2.2.2 The Bias-Variance Trade-Off  33
    2.2.3 The Classification Setting  37
  2.3 Lab: Introduction to R  42
    2.3.1 Basic Commands  42
    2.3.2 Graphics  45
    2.3.3 Indexing Data  47
    2.3.4 Loading Data  48
    2.3.5 Additional Graphical and Numerical Summaries  49
  2.4 Exercises  52

3 Linear Regression  59
  3.1 Simple Linear Regression  61
    3.1.1 Estimating the Coefficients  61
    3.1.2 Assessing the Accuracy of the Coefficient Estimates  63
    3.1.3 Assessing the Accuracy of the Model  68
  3.2 Multiple Linear Regression  71
    3.2.1 Estimating the Regression Coefficients  72
    3.2.2 Some Important Questions  75
  3.3 Other Considerations in the Regression Model  82
    3.3.1 Qualitative Predictors  82
    3.3.2 Extensions of the Linear Model  86
    3.3.3 Potential Problems  92
  3.4 The Marketing Plan  102
  3.5 Comparison of Linear Regression with K-Nearest Neighbors  104
  3.6 Lab: Linear Regression  109
    3.6.1 Libraries  109
    3.6.2 Simple Linear Regression  110
    3.6.3 Multiple Linear Regression  113
    3.6.4 Interaction Terms  115
    3.6.5 Non-linear Transformations of the Predictors  115
    3.6.6 Qualitative Predictors  117
    3.6.7 Writing Functions  119
  3.7 Exercises  120
4 Classification  127
  4.1 An Overview of Classification  128
  4.2 Why Not Linear Regression?  129
  4.3 Logistic Regression  130
    4.3.1 The Logistic Model  131
    4.3.2 Estimating the Regression Coefficients  133
    4.3.3 Making Predictions  134
    4.3.4 Multiple Logistic Regression  135
    4.3.5 Logistic Regression for >2 Response Classes  137
  4.4 Linear Discriminant Analysis  138
    4.4.1 Using Bayes' Theorem for Classification  138
    4.4.2 Linear Discriminant Analysis for p = 1  139
    4.4.3 Linear Discriminant Analysis for p > 1  142
    4.4.4 Quadratic Discriminant Analysis  149
  4.5 A Comparison of Classification Methods  151
  4.6 Lab: Logistic Regression, LDA, QDA, and KNN  154
    4.6.1 The Stock Market Data  154
    4.6.2 Logistic Regression  156
    4.6.3 Linear Discriminant Analysis  161

    4.6.4 Quadratic Discriminant Analysis  162
    4.6.5 K-Nearest Neighbors  163
    4.6.6 An Application to Caravan Insurance Data  164
  4.7 Exercises  168
5 Resampling Methods  175
  5.1 Cross-Validation  176
    5.1.1 The Validation Set Approach  176
    5.1.2 Leave-One-Out Cross-Validation  178
    5.1.3 k-Fold Cross-Validation  181
    5.1.4 Bias-Variance Trade-Off for k-Fold Cross-Validation  183
    5.1.5 Cross-Validation on Classification Problems  184
  5.2 The Bootstrap  187
  5.3 Lab: Cross-Validation and the Bootstrap  190
    5.3.1 The Validation Set Approach  191
    5.3.2 Leave-One-Out Cross-Validation  192
    5.3.3 k-Fold Cross-Validation  193
    5.3.4 The Bootstrap  194
  5.4 Exercises  197
6 Linear Model Selection and Regularization  203
  6.1 Subset Selection  205
    6.1.1 Best Subset Selection  205
    6.1.2 Stepwise Selection  207
    6.1.3 Choosing the Optimal Model  210
  6.2 Shrinkage Methods  214
    6.2.1 Ridge Regression  215
    6.2.2 The Lasso  219
    6.2.3 Selecting the Tuning Parameter  227
  6.3 Dimension Reduction Methods  228
    6.3.1 Principal Components Regression  230
    6.3.2 Partial Least Squares  237
  6.4 Considerations in High Dimensions  238
    6.4.1 High-Dimensional Data  238
    6.4.2 What Goes Wrong in High Dimensions?  239
    6.4.3 Regression in High Dimensions  241
    6.4.4 Interpreting Results in High Dimensions  243
  6.5 Lab 1: Subset Selection Methods  244
    6.5.1 Best Subset Selection  244
    6.5.2 Forward and Backward Stepwise Selection  247
    6.5.3 Choosing Among Models Using the Validation Set Approach and Cross-Validation  248

  6.6 Lab 2: Ridge Regression and the Lasso  251
    6.6.1 Ridge Regression  251
    6.6.2 The Lasso  255
  6.7 Lab 3: PCR and PLS Regression  256
    6.7.1 Principal Components Regression  256
    6.7.2 Partial Least Squares  258
  6.8 Exercises  259
7 Moving Beyond Linearity  265
  7.1 Polynomial Regression  266
  7.2 Step Functions  268
  7.3 Basis Functions  270
  7.4 Regression Splines  271
    7.4.1 Piecewise Polynomials  271
    7.4.2 Constraints and Splines  271
    7.4.3 The Spline Basis Representation  273
    7.4.4 Choosing the Number and Locations of the Knots  274
    7.4.5 Comparison to Polynomial Regression  276
  7.5 Smoothing Splines  277
    7.5.1 An Overview of Smoothing Splines  277
    7.5.2 Choosing the Smoothing Parameter λ  278
  7.6 Local Regression  280
  7.7 Generalized Additive Models  282
    7.7.1 GAMs for Regression Problems  283
    7.7.2 GAMs for Classification Problems  286
  7.8 Lab: Non-linear Modeling  287
    7.8.1 Polynomial Regression and Step Functions  288
    7.8.2 Splines  293
    7.8.3 GAMs  294
  7.9 Exercises  297
8 Tree-Based Methods  303
  8.1 The Basics of Decision Trees  303
    8.1.1 Regression Trees  304
    8.1.2 Classification Trees  311
    8.1.3 Trees Versus Linear Models  314
    8.1.4 Advantages and Disadvantages of Trees  315
  8.2 Bagging, Random Forests, Boosting  316
    8.2.1 Bagging  316
    8.2.2 Random Forests  320
    8.2.3 Boosting  321
  8.3 Lab: Decision Trees  324
    8.3.1 Fitting Classification Trees  324
    8.3.2 Fitting Regression Trees  327

    8.3.3 Bagging and Random Forests  328
    8.3.4 Boosting  330
  8.4 Exercises  332
9 Support Vector Machines  337
  9.1 Maximal Margin Classifier  338
    9.1.1 What Is a Hyperplane?  338
    9.1.2 Classification Using a Separating Hyperplane  339
    9.1.3 The Maximal Margin Classifier  341
    9.1.4 Construction of the Maximal Margin Classifier  342
    9.1.5 The Non-separable Case  343
  9.2 Support Vector Classifiers  344
    9.2.1 Overview of the Support Vector Classifier  344
    9.2.2 Details of the Support Vector Classifier  345
  9.3 Support Vector Machines  349
    9.3.1 Classification with Non-linear Decision Boundaries  349
    9.3.2 The Support Vector Machine  350
    9.3.3 An Application to the Heart Disease Data  354
  9.4 SVMs with More than Two Classes  355
    9.4.1 One-Versus-One Classification  355
    9.4.2 One-Versus-All Classification  356
  9.5 Relationship to Logistic Regression  356
  9.6 Lab: Support Vector Machines  359
    9.6.1 Support Vector Classifier  359
    9.6.2 Support Vector Machine  363
    9.6.3 ROC Curves  365
    9.6.4 SVM with Multiple Classes  366
    9.6.5 Application to Gene Expression Data  366
  9.7 Exercises  368
10 Unsupervised Learning  373
  10.1 The Challenge of Unsupervised Learning  373
  10.2 Principal Components Analysis  374
    10.2.1 What Are Principal Components?  375
    10.2.2 Another Interpretation of Principal Components  379
    10.2.3 More on PCA  380
    10.2.4 Other Uses for Principal Components  385
  10.3 Clustering Methods  385
    10.3.1 K-Means Clustering  386
    10.3.2 Hierarchical Clustering  390
    10.3.3 Practical Issues in Clustering  399
  10.4 Lab 1: Principal Components Analysis  401

  10.5 Lab 2: Clustering  404
    10.5.1 K-Means Clustering  404
    10.5.2 Hierarchical Clustering  406
  10.6 Lab 3: NCI60 Data Example  407
    10.6.1 PCA on the NCI60 Data  408
    10.6.2 Clustering the Observations of the NCI60 Data  410
  10.7 Exercises  413
Index  419

1 Introduction

An Overview of Statistical Learning

Statistical learning refers to a vast set of tools for understanding data. These tools can be classified as supervised or unsupervised. Broadly speaking, supervised statistical learning involves building a statistical model for predicting, or estimating, an output based on one or more inputs. Problems of this nature occur in fields as diverse as business, medicine, astrophysics, and public policy. With unsupervised statistical learning, there are inputs but no supervising output; nevertheless we can learn relationships and structure from such data. To provide an illustration of some applications of statistical learning, we briefly discuss three real-world data sets that are considered in this book.

Wage Data

In this application (which we refer to as the Wage data set throughout this book), we examine a number of factors that relate to wages for a group of males from the Atlantic region of the United States. In particular, we wish to understand the association between an employee's age and education, as well as the calendar year, on his wage. Consider, for example, the left-hand panel of Figure 1.1, which displays wage versus age for each of the individuals in the data set. There is evidence that wage increases with age but then decreases again after approximately age 60. The blue line, which provides an estimate of the average wage for a given age, makes this trend clearer.

FIGURE 1.1. Wage data, which contains income survey information for males from the central Atlantic region of the United States. Left: wage as a function of age. On average, wage increases with age until about 60 years of age, at which point it begins to decline. Center: wage as a function of year. There is a slow but steady increase of approximately $10,000 in the average wage between 2003 and 2009. Right: Boxplots displaying wage as a function of education, with 1 indicating the lowest level (no high school diploma) and 5 the highest level (an advanced graduate degree). On average, wage increases with the level of education.

Given an employee's age, we can use this curve to predict his wage. However, it is also clear from Figure 1.1 that there is a significant amount of variability associated with this average value, and so age alone is unlikely to provide an accurate prediction of a particular man's wage.

We also have information regarding each employee's education level and the year in which the wage was earned. The center and right-hand panels of Figure 1.1, which display wage as a function of both year and education, indicate that both of these factors are associated with wage. Wages increase by approximately $10,000, in a roughly linear (or straight-line) fashion, between 2003 and 2009, though this rise is very slight relative to the variability in the data. Wages are also typically greater for individuals with higher education levels: men with the lowest education level (1) tend to have substantially lower wages than those with the highest education level (5). Clearly, the most accurate prediction of a given man's wage will be obtained by combining his age, his education, and the year. In Chapter 3, we discuss linear regression, which can be used to predict wage from this data set. Ideally, we should predict wage in a way that accounts for the non-linear relationship between wage and age. In Chapter 7, we discuss a class of approaches for addressing this problem.
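The Wage data ships with the ISLR R package that accompanies this book, so the exploration just described can be reproduced in a few lines. The following is a minimal sketch, assuming ISLR is installed; the smoothing spline used for the trend line is an illustrative choice, not necessarily the method used to draw Figure 1.1.

```r
# Explore the Wage data: how does wage vary with age and education?
# Assumes install.packages("ISLR") has already been run.
library(ISLR)

data(Wage)
dim(Wage)                  # roughly 3,000 workers and their attributes
summary(Wage$education)    # the five education levels

# Wage versus age, with a smooth trend line (an illustrative smoother choice)
plot(Wage$age, Wage$wage, col = "grey", xlab = "Age", ylab = "Wage")
lines(smooth.spline(Wage$age, Wage$wage), col = "blue", lwd = 2)

# Wage by education level, in the spirit of the right-hand panel of Figure 1.1
boxplot(wage ~ education, data = Wage, xlab = "Education", ylab = "Wage")
```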

Stock Market Data

The Wage data involves predicting a continuous or quantitative output value. This is often referred to as a regression problem. However, in certain cases we may instead wish to predict a non-numerical value—that is, a categorical or qualitative output. For example, in Chapter 4 we examine a stock market data set that contains the daily movements in the Standard & Poor's 500 (S&P) stock index over a 5-year period between 2001 and 2005. We refer to this as the Smarket data. The goal is to predict whether the index will increase or decrease on a given day using the past 5 days' percentage changes in the index. Here the statistical learning problem does not involve predicting a numerical value. Instead it involves predicting whether a given day's stock market performance will fall into the Up bucket or the Down bucket. This is known as a classification problem. A model that could accurately predict the direction in which the market will move would be very useful!

FIGURE 1.2. Left: Boxplots of the previous day's percentage change in the S&P index for the days for which the market increased or decreased, obtained from the Smarket data. Center and Right: Same as left panel, but the percentage changes for 2 and 3 days previous are shown.

The left-hand panel of Figure 1.2 displays two boxplots of the previous day's percentage changes in the stock index: one for the 648 days for which the market increased on the subsequent day, and one for the 602 days for which the market decreased. The two plots look almost identical, suggesting that there is no simple strategy for using yesterday's movement in the S&P to predict today's returns. The remaining panels, which display boxplots for the percentage changes 2 and 3 days previous to today, similarly indicate little association between past and present returns. Of course, this lack of pattern is to be expected: in the presence of strong correlations between successive days' returns, one could adopt a simple trading strategy to generate profits from the market. Nevertheless, in Chapter 4, we explore these data using several different statistical learning methods. Interestingly, there are hints of some weak trends in the data that suggest that, at least for this 5-year period, it is possible to correctly predict the direction of movement in the market approximately 60% of the time (Figure 1.3).
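A quick look at this classification problem, again assuming the ISLR package (which contains the Smarket data) is installed; the sketch below reproduces the spirit of the left-hand panel of Figure 1.2.

```r
# Does yesterday's return (Lag1) say anything about today's Direction?
library(ISLR)

data(Smarket)
table(Smarket$Direction)    # counts of Up and Down days over the 5-year period

# Boxplots of the previous day's percentage change, split by today's direction
boxplot(Lag1 ~ Direction, data = Smarket,
        xlab = "Today's Direction",
        ylab = "Previous day's % change in S&P")
```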

FIGURE 1.3. We fit a quadratic discriminant analysis model to the subset of the Smarket data corresponding to the 2001–2004 time period, and predicted the probability of a stock market decrease using the 2005 data. On average, the predicted probability of decrease is higher for the days in which the market does decrease. Based on these results, we are able to correctly predict the direction of movement in the market 60% of the time.

Gene Expression Data

The previous two applications illustrate data sets with both input and output variables. However, another important class of problems involves situations in which we only observe input variables, with no corresponding output. For example, in a marketing setting, we might have demographic information for a number of current or potential customers. We may wish to understand which types of customers are similar to each other by grouping individuals according to their observed characteristics. This is known as a clustering problem. Unlike in the previous examples, here we are not trying to predict an output variable.

We devote Chapter 10 to a discussion of statistical learning methods for problems in which no natural output variable is available. We consider the NCI60 data set, which consists of 6,830 gene expression measurements for each of 64 cancer cell lines. Instead of predicting a particular output variable, we are interested in determining whether there are groups, or clusters, among the cell lines based on their gene expression measurements. This is a difficult question to address, in part because there are thousands of gene expression measurements per cell line, making it hard to visualize the data.

The left-hand panel of Figure 1.4 addresses this problem by representing each of the 64 cell lines using just two numbers, Z1 and Z2. These are the first two principal components of the data, which summarize the 6,830 expression measurements for each cell line down to two numbers or dimensions. While it is likely that this dimension reduction has resulted in

some loss of information, it is now possible to visually examine the data for evidence of clustering. Deciding on the number of clusters is often a difficult problem. But the left-hand panel of Figure 1.4 suggests at least four groups of cell lines, which we have represented using separate colors. We can now examine the cell lines within each cluster for similarities in their types of cancer, in order to better understand the relationship between gene expression levels and cancer.

FIGURE 1.4. Left: Representation of the NCI60 gene expression data set in a two-dimensional space, Z1 and Z2. Each point corresponds to one of the 64 cell lines. There appear to be four groups of cell lines, which we have represented using different colors. Right: Same as left panel except that we have represented each of the 14 different types of cancer using a different colored symbol. Cell lines corresponding to the same cancer type tend to be nearby in the two-dimensional space.

In this particular data set, it turns out that the cell lines correspond to 14 different types of cancer. (However, this information was not used to create the left-hand panel of Figure 1.4.) The right-hand panel of Figure 1.4 is identical to the left-hand panel, except that the 14 cancer types are shown using distinct colored symbols. There is clear evidence that cell lines with the same cancer type tend to be located near each other in this two-dimensional representation. In addition, even though the cancer information was not used to produce the left-hand panel, the clustering obtained does bear some resemblance to some of the actual cancer types observed in the right-hand panel. This provides some independent verification of the accuracy of our clustering analysis.
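The dimension reduction behind Figure 1.4 can be sketched in a few lines of R, assuming the ISLR package is installed; the color coding below is arbitrary, so the resulting plot will not match the book's figure exactly.

```r
# Summarize the 6,830 expression measurements per cell line by their first
# two principal components. NCI60 ships with the ISLR package as a list
# with $data (expression matrix) and $labs (cancer type of each cell line).
library(ISLR)

nci.data <- NCI60$data      # 64 cell lines x 6,830 genes
nci.labs <- NCI60$labs      # cancer type of each cell line

pr.out <- prcomp(nci.data, scale = TRUE)

# Plot the cell lines in the space of the first two principal components,
# colored by cancer type (information not used to compute the components)
plot(pr.out$x[, 1:2], col = as.numeric(as.factor(nci.labs)),
     pch = 19, xlab = "Z1", ylab = "Z2")
```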

A Brief History of Statistical Learning

Though the term statistical learning is fairly new, many of the concepts that underlie the field were developed long ago. At the beginning of the nineteenth century, Legendre and Gauss published papers on the method of least squares, which implemented the earliest form of what is now known as linear regression. The approach was first successfully applied to problems in astronomy. Linear regression is used for predicting quantitative values, such as an individual's salary. In order to predict qualitative values, such as whether a patient survives or dies, or whether the stock market increases or decreases, Fisher proposed linear discriminant analysis in 1936. In the 1940s, various authors put forth an alternative approach, logistic regression. In the early 1970s, Nelder and Wedderburn coined the term generalized linear models for an entire class of statistical learning methods that include both linear and logistic regression as special cases.

By the end of the 1970s, many more techniques for learning from data were available. However, they were almost exclusively linear methods, because fitting non-linear relationships was computationally infeasible at the time. By the 1980s, computing technology had finally improved sufficiently that non-linear methods were no longer computationally prohibitive. In the mid 1980s, Breiman, Friedman, Olshen and Stone introduced classification and regression trees, and were among the first to demonstrate the power of a detailed practical implementation of a method, including cross-validation for model selection. Hastie and Tibshirani coined the term generalized additive models in 1986 for a class of non-linear extensions to generalized linear models, and also provided a practical software implementation.

Since that time, inspired by the advent of machine learning and other disciplines, statistical learning has emerged as a new subfield in statistics, focused on supervised and unsupervised modeling and prediction. In recent years, progress in statistical learning has been marked by the increasing availability of powerful and relatively user-friendly software, such as the popular and freely available R system. This has the potential to continue the transformation of the field from a set of techniques used and developed by statisticians and computer scientists to an essential toolkit for a much broader community.

This Book

The Elements of Statistical Learning (ESL) by Hastie, Tibshirani, and Friedman was first published in 2001. Since that time, it has become an important reference on the fundamentals of statistical machine learning. Its success derives from its comprehensive and detailed treatment of many important topics in statistical learning, as well as the fact that (relative to many upper-level statistics textbooks) it is accessible to a wide audience. However, the greatest factor behind the success of ESL has been its topical nature. At the time of its publication, interest in the field of statistical

learning was starting to explode. ESL provided one of the first accessible and comprehensive introductions to the topic.

Since ESL was first published, the field of statistical learning has continued to flourish. The field's expansion has taken two forms. The most obvious growth has involved the development of new and improved statistical learning approaches aimed at answering a range of scientific questions across a number of fields. However, the field of statistical learning has also expanded its audience. In the 1990s, increases in computational power generated a surge of interest in the field from non-statisticians who were eager to use cutting-edge statistical tools to analyze their data. Unfortunately, the highly technical nature of these approaches meant that the user community remained primarily restricted to experts in statistics, computer science, and related fields with the training (and time) to understand and implement them.

In recent years, new and improved software packages have significantly eased the implementation burden for many statistical learning methods. At the same time, there has been growing recognition across a number of fields, from business to health care to genetics to the social sciences and beyond, that statistical learning is a powerful tool with important practical applications. As a result, the field has moved from one of primarily academic interest to a mainstream discipline, with an enormous potential audience. This trend will surely continue with the increasing availability of enormous quantities of data and the software to analyze it.

The purpose of An Introduction to Statistical Learning (ISL) is to facilitate the transition of statistical learning from an academic to a mainstream field. ISL is not intended to replace ESL, which is a far more comprehensive text both in terms of the number of approaches considered and the depth to which they are explored. We consider ESL to be an important companion for professionals (with graduate degrees in statistics, machine learning, or related fields) who need to understand the technical details behind statistical learning approaches. However, the community of users of statistical learning techniques has expanded to include individuals with a wider range of interests and backgrounds. Therefore, we believe that there is now a place for a less technical and more accessible version of ESL.

In teaching these topics over the years, we have discovered that they are of interest to master's and PhD students in fields as disparate as business administration, biology, and computer science, as well as to quantitatively-oriented upper-division undergraduates. It is important for this diverse group to be able to understand the models, intuitions, and strengths and weaknesses of the various approaches. But for this audience, many of the technical details behind statistical learning methods, such as optimization algorithms and theoretical properties, are not of primary interest. We believe that these students do not need a deep understanding of these aspects in order to become informed users of the various methodologies, and

in order to contribute to their chosen fields through the use of statistical learning tools.

ISL is based on the following four premises.

1. Many statistical learning methods are relevant and useful in a wide range of academic and non-academic disciplines, beyond just the statistical sciences. We believe that many contemporary statistical learning procedures should, and will, become as widely available and used as is currently the case for classical methods such as linear regression. As a result, rather than attempting to consider every possible approach (an impossible task), we have concentrated on presenting the methods that we believe are most widely applicable.

2. Statistical learning should not be viewed as a series of black boxes. No single approach will perform well in all possible applications. Without understanding all of the cogs inside the box, or the interaction between those cogs, it is impossible to select the best box. Hence, we have attempted to carefully describe the model, intuition, assumptions, and trade-offs behind each of the methods that we consider.

3. While it is important to know what job is performed by each cog, it is not necessary to have the skills to construct the machine inside the box! Thus, we have minimized discussion of technical details related to fitting procedures and theoretical properties. We assume that the reader is comfortable with basic mathematical concepts, but we do not assume a graduate degree in the mathematical sciences. For instance, we have almost completely avoided the use of matrix algebra, and it is possible to understand the entire book without a detailed knowledge of matrices and vectors.

4. We presume that the reader is interested in applying statistical learning methods to real-world problems. In order to facilitate this, as well as to motivate the techniques discussed, we have devoted a section within each chapter to R computer labs. In each lab, we walk the reader through a realistic application of the methods considered in that chapter. When we have taught this material in our courses, we have allocated roughly one-third of classroom time to working through the labs, and we have found them to be extremely useful. Many of the less computationally-oriented students who were initially intimidated by R's command level interface got the hang of things over the course of the quarter or semester. We have used R because it is freely available and is powerful enough to implement all of the methods discussed in the book. It also has optional packages that can be downloaded to implement literally thousands of additional methods. Most importantly, R is the language of choice for academic statisticians, and new approaches often become available in

R years before they are implemented in commercial packages. However, the labs in ISL are self-contained, and can be skipped if the reader wishes to use a different software package or does not wish to apply the methods discussed to real-world problems.

Who Should Read This Book?

This book is intended for anyone who is interested in using modern statistical methods for modeling and prediction from data. This group includes scientists, engineers, data analysts, or quants, but also less technical individuals with degrees in non-quantitative fields such as the social sciences or business. We expect that the reader will have had at least one elementary course in statistics. Background in linear regression is also useful, though not required, since we review the key concepts behind linear regression in Chapter 3. The mathematical level of this book is modest, and a detailed knowledge of matrix operations is not required. This book provides an introduction to the statistical programming language R. Previous exposure to a programming language, such as MATLAB or Python, is useful but not required. We have successfully taught material at this level to master's and PhD students in business, computer science, biology, earth sciences, psychology, and many other areas of the physical and social sciences. This book could also be appropriate for advanced undergraduates who have already taken a course on linear regression. In the context of a more mathematically rigorous course in which ESL serves as the primary textbook, ISL could be used as a supplementary text for teaching computational aspects of the various approaches.

Notation and Simple Matrix Algebra

Choosing notation for a textbook is always a difficult task. For the most part we adopt the same notational conventions as ESL.

We will use n to represent the number of distinct data points, or observations, in our sample. We will let p denote the number of variables that are available for use in making predictions. For example, the Wage data set consists of 12 variables for 3,000 people, so we have n = 3,000 observations and p = 12 variables (such as year, age, wage, and more). Note that throughout this book, we indicate variable names using colored font: Variable Name.

In some examples, p might be quite large, such as on the order of thousands or even millions; this situation arises quite often, for example, in the analysis of modern biological data or web-based advertising data.

In general, we will let x_{ij} represent the value of the jth variable for the ith observation, where i = 1, 2, ..., n and j = 1, 2, ..., p. Throughout this book, i will be used to index the samples or observations (from 1 to n) and j will be used to index the variables (from 1 to p). We let X denote an n × p matrix whose (i, j)th element is x_{ij}. That is,

X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}.

For readers who are unfamiliar with matrices, it is useful to visualize X as a spreadsheet of numbers with n rows and p columns.

At times we will be interested in the rows of X, which we write as x_1, x_2, ..., x_n. Here x_i is a vector of length p, containing the p variable measurements for the ith observation. That is,

x_i = \begin{pmatrix} x_{i1} \\ x_{i2} \\ \vdots \\ x_{ip} \end{pmatrix}.    (1.1)

(Vectors are by default represented as columns.) For example, for the Wage data, x_i is a vector of length 12, consisting of year, age, wage, and other values for the ith individual. At other times we will instead be interested in the columns of X, which we write as x_1, x_2, ..., x_p. Each is a vector of length n. That is,

x_j = \begin{pmatrix} x_{1j} \\ x_{2j} \\ \vdots \\ x_{nj} \end{pmatrix}.

For example, for the Wage data, x_1 contains the n = 3,000 values for year.

Using this notation, the matrix X can be written as

X = \begin{pmatrix} x_1 & x_2 & \cdots & x_p \end{pmatrix},

or

X = \begin{pmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_n^T \end{pmatrix}.
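As a small illustration of this notation, and of the matrix product worked out at the end of this section, here is a minimal R sketch; the numbers are invented purely for illustration.

```r
# An n x p data matrix X: rows are observations, columns are variables.
# The values are made up simply to illustrate the indexing notation.
n <- 4
p <- 3
X <- matrix(1:(n * p), nrow = n, ncol = p, byrow = TRUE)

X[2, 3]   # x_{23}: the 3rd variable for the 2nd observation
X[2, ]    # x_2: all p measurements for the 2nd observation
X[, 1]    # the first column of X: the n values of the 1st variable
t(X)      # the transpose X^T, a p x n matrix

# The worked matrix product from this section: %*% is R's
# matrix multiplication operator.
A <- matrix(c(1, 2, 3, 4), nrow = 2, byrow = TRUE)
B <- matrix(c(5, 6, 7, 8), nrow = 2, byrow = TRUE)
A %*% B   # rows (19, 22) and (43, 50), matching the example in the text
```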

The ^T notation denotes the transpose of a matrix or vector. So, for example,

X^T = \begin{pmatrix} x_{11} & x_{21} & \cdots & x_{n1} \\ x_{12} & x_{22} & \cdots & x_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{1p} & x_{2p} & \cdots & x_{np} \end{pmatrix},

while

x_i^T = \begin{pmatrix} x_{i1} & x_{i2} & \cdots & x_{ip} \end{pmatrix}.

We use y_i to denote the ith observation of the variable on which we wish to make predictions, such as wage. Hence, we write the set of all n observations in vector form as

y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}.

Then our observed data consists of {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where each x_i is a vector of length p. (If p = 1, then x_i is simply a scalar.)

In this text, a vector of length n will always be denoted in lower case bold; e.g.

a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.

However, vectors that are not of length n (such as feature vectors of length p, as in (1.1)) will be denoted in lower case normal font, e.g. a. Scalars will also be denoted in lower case normal font, e.g. a. In the rare cases in which these two uses for lower case normal font lead to ambiguity, we will clarify which use is intended. Matrices will be denoted using bold capitals, such as A. Random variables will be denoted using capital normal font, e.g. A, regardless of their dimensions.

Occasionally we will want to indicate the dimension of a particular object. To indicate that an object is a scalar, we will use the notation a ∈ R. To indicate that it is a vector of length k, we will use a ∈ R^k (or a ∈ R^n if it is of length n). We will indicate that an object is an r × s matrix using A ∈ R^{r×s}.

We have avoided using matrix algebra whenever possible. However, in a few instances it becomes too cumbersome to avoid it entirely. In these rare instances it is important to understand the concept of multiplying two matrices. Suppose that A ∈ R^{r×d} and B ∈ R^{d×s}. Then the product

of A and B is denoted AB. The (i, j)th element of AB is computed by multiplying each element of the ith row of A by the corresponding element of the jth column of B. That is, (AB)_{ij} = \sum_{k=1}^{d} a_{ik} b_{kj}. As an example, consider

A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}.

Then

AB = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 1\times 5 + 2\times 7 & 1\times 6 + 2\times 8 \\ 3\times 5 + 4\times 7 & 3\times 6 + 4\times 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}.

Note that this operation produces an r × s matrix. It is only possible to compute AB if the number of columns of A is the same as the number of rows of B.

Organization of This Book

Chapter 2 introduces the basic terminology and concepts behind statistical learning. This chapter also presents the K-nearest neighbor classifier, a very simple method that works surprisingly well on many problems. Chapters 3 and 4 cover classical linear methods for regression and classification. In particular, Chapter 3 reviews linear regression, the fundamental starting point for all regression methods. In Chapter 4 we discuss two of the most important classical classification methods, logistic regression and linear discriminant analysis.

A central problem in all statistical learning situations involves choosing the best method for a given application. Hence, in Chapter 5 we introduce cross-validation and the bootstrap, which can be used to estimate the accuracy of a number of different methods in order to choose the best one.

Much of the recent research in statistical learning has concentrated on non-linear methods. However, linear methods often have advantages over their non-linear competitors in terms of interpretability and sometimes also accuracy. Hence, in Chapter 6 we consider a host of linear methods, both classical and more modern, which offer potential improvements over standard linear regression. These include stepwise selection, ridge regression, principal components regression, partial least squares, and the lasso.

The remaining chapters move into the world of non-linear statistical learning. We first introduce in Chapter 7 a number of non-linear methods that work well for problems with a single input variable. We then show how these methods can be used to fit non-linear additive models for which there is more than one input. In Chapter 8, we investigate tree-based methods, including bagging, boosting, and random forests. Support vector machines, a set of approaches for performing both linear and non-linear classification,

are discussed in Chapter 9. Finally, in Chapter 10, we consider a setting in which we have input variables but no output variable. In particular, we present principal components analysis, K-means clustering, and hierarchical clustering.

At the end of each chapter, we present one or more R lab sections in which we systematically work through applications of the various methods discussed in that chapter. These labs demonstrate the strengths and weaknesses of the various approaches, and also provide a useful reference for the syntax required to implement the various methods. The reader may choose to work through the labs at his or her own pace, or the labs may be the focus of group sessions as part of a classroom environment. Within each R lab, we present the results that we obtained when we performed the lab at the time of writing this book. However, new versions of R are continuously released, and over time, the packages called in the labs will be updated. Therefore, in the future, it is possible that the results shown in the lab sections may no longer correspond precisely to the results obtained by the reader who performs the labs. As necessary, we will post updates to the labs on the book website.

We use a special symbol to denote sections or exercises that contain more challenging concepts. These can be easily skipped by readers who do not wish to delve as deeply into the material, or who lack the mathematical background.

Data Sets Used in Labs and Exercises

In this textbook, we illustrate statistical learning methods using applications from marketing, finance, biology, and other areas. The ISLR package available on the book website contains a number of data sets that are required in order to perform the labs and exercises associated with this book. One other data set is contained in the MASS library, and yet another is part of the base R distribution. Table 1.1 contains a summary of the data sets required to perform the labs and exercises. A couple of these data sets are also available as text files on the book website, for use in Chapter 2.

Book Website

The website for this book is located at

www.StatLearning.com

It contains a number of resources, including the R package associated with this book, and some additional data sets.

Name        Description
Auto        Gas mileage, horsepower, and other information for cars.
Boston      Housing values and other information about Boston suburbs.
Caravan     Information about individuals offered caravan insurance.
Carseats    Information about car seat sales in 400 stores.
College     Demographic characteristics, tuition, and more for USA colleges.
Default     Customer default records for a credit card company.
Hitters     Records and salaries for baseball players.
Khan        Gene expression measurements for four cancer types.
NCI60       Gene expression measurements for 64 cancer cell lines.
OJ          Sales information for Citrus Hill and Minute Maid orange juice.
Portfolio   Past values of financial assets, for use in portfolio allocation.
Smarket     Daily percentage returns for S&P 500 over a 5-year period.
USArrests   Crime statistics per 100,000 residents in 50 states of USA.
Wage        Income survey data for males in central Atlantic region of USA.
Weekly      1,089 weekly stock market returns for 21 years.

TABLE 1.1. A list of data sets needed to perform the labs and exercises in this textbook. All data sets are available in the ISLR library, with the exception of Boston (part of MASS) and USArrests (part of the base R distribution).

Acknowledgements

A few of the plots in this book were taken from ESL: Figures 6.7, 8.3, and 10.12. All other plots are new to this book.
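A quick way to confirm that the data sets in Table 1.1 are available on a given machine is sketched below; it assumes the ISLR and MASS packages have already been installed.

```r
# Check that the data sets listed in Table 1.1 can be loaded.
library(ISLR)    # most of the data sets in Table 1.1
library(MASS)    # the Boston data

dim(Wage)        # income survey data
dim(Smarket)     # daily S&P 500 returns
dim(Boston)      # Boston housing data
dim(USArrests)   # ships with the base R distribution
str(NCI60)       # a list: $data (expression matrix) and $labs (cancer types)
```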

2 Statistical Learning

2.1 What Is Statistical Learning?

In order to motivate our study of statistical learning, we begin with a simple example. Suppose that we are statistical consultants hired by a client to provide advice on how to improve sales of a particular product. The Advertising data set consists of the sales of that product in 200 different markets, along with advertising budgets for the product in each of those markets for three different media: TV, radio, and newspaper. The data are displayed in Figure 2.1. It is not possible for our client to directly increase sales of the product. On the other hand, they can control the advertising expenditure in each of the three media. Therefore, if we determine that there is an association between advertising and sales, then we can instruct our client to adjust advertising budgets, thereby indirectly increasing sales. In other words, our goal is to develop an accurate model that can be used to predict sales on the basis of the three media budgets.

In this setting, the advertising budgets are input variables while sales is an output variable. The input variables are typically denoted using the symbol X, with a subscript to distinguish them. So X_1 might be the TV budget, X_2 the radio budget, and X_3 the newspaper budget. The inputs go by different names, such as predictors, independent variables, features, or sometimes just variables. The output variable—in this case, sales—is often called the response or dependent variable, and is typically denoted using the symbol Y. Throughout this book, we will use all of these terms interchangeably.
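A first pass at this consulting problem might look like the sketch below. The Advertising data is not included in the ISLR package; it is distributed as a text file on the book website, so the file name Advertising.csv and its column names are assumptions here.

```r
# Predict sales from the three media budgets. The file name and column
# names below are assumptions about the CSV posted on the book website.
Advertising <- read.csv("Advertising.csv")
head(Advertising)

# A simple multiple linear regression of sales on the three budgets
# (linear regression itself is the subject of Chapter 3).
fit <- lm(Sales ~ TV + Radio + Newspaper, data = Advertising)
summary(fit)
```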

FIGURE 2.1. The Advertising data set. The plot displays sales, in thousands of units, as a function of TV, radio, and newspaper budgets, in thousands of dollars, for 200 different markets. In each plot we show the simple least squares fit of sales to that variable, as described in Chapter 3. In other words, each blue line represents a simple model that can be used to predict sales using TV, radio, and newspaper, respectively.

More generally, suppose that we observe a quantitative response Y and p different predictors, X_1, X_2, ..., X_p. We assume that there is some relationship between Y and X = (X_1, X_2, ..., X_p), which can be written in the very general form

Y = f(X) + ε.    (2.1)

Here f is some fixed but unknown function of X_1, ..., X_p, and ε is a random error term, which is independent of X and has mean zero. In this formulation, f represents the systematic information that X provides about Y.

As another example, consider the left-hand panel of Figure 2.2, a plot of income versus years of education for 30 individuals in the Income data set. The plot suggests that one might be able to predict income using years of education. However, the function f that connects the input variable to the output variable is in general unknown. In this situation one must estimate f based on the observed points. Since Income is a simulated data set, f is known and is shown by the blue curve in the right-hand panel of Figure 2.2. The vertical lines represent the error terms ε. We note that some of the 30 observations lie above the blue curve and some lie below it; overall, the errors have approximately mean zero.

In general, the function f may involve more than one input variable. In Figure 2.3 we plot income as a function of years of education and seniority. Here f is a two-dimensional surface that must be estimated based on the observed data.
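Equation (2.1) is easy to mimic in a small simulation, in the spirit of the simulated Income data: pick a known f, add mean-zero noise, and plot both. The particular f and noise level below are invented for illustration; they are not the function used to generate the book's Income data.

```r
# A tiny simulation of Y = f(X) + epsilon with a known f.
set.seed(1)

n <- 30
years_of_education <- runif(n, 10, 22)

f <- function(x) 20 + 40 / (1 + exp(-(x - 16)))   # a smooth, increasing "truth"
epsilon <- rnorm(n, mean = 0, sd = 4)             # random error with mean zero

income <- f(years_of_education) + epsilon         # Y = f(X) + epsilon

plot(years_of_education, income, col = "red", pch = 19,
     xlab = "Years of Education", ylab = "Income")
curve(f, from = 10, to = 22, add = TRUE, col = "blue", lwd = 2)
segments(years_of_education, income,
         years_of_education, f(years_of_education))   # the error terms
```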

FIGURE 2.2. The Income data set. Left: The red dots are the observed values of income (in tens of thousands of dollars) and years of education for 30 individuals. Right: The blue curve represents the true underlying relationship between income and years of education, which is generally unknown (but is known in this case because the data were simulated). The black lines represent the error associated with each observation. Note that some errors are positive (if an observation lies above the blue curve) and some are negative (if an observation lies below the curve). Overall, these errors have approximately mean zero.

In essence, statistical learning refers to a set of approaches for estimating f. In this chapter we outline some of the key theoretical concepts that arise in estimating f, as well as tools for evaluating the estimates obtained.

2.1.1 Why Estimate f?

There are two main reasons that we may wish to estimate f: prediction and inference. We discuss each in turn.

Prediction

In many situations, a set of inputs X are readily available, but the output Y cannot be easily obtained. In this setting, since the error term averages to zero, we can predict Y using

Ŷ = f̂(X),    (2.2)

where f̂ represents our estimate for f, and Ŷ represents the resulting prediction for Y. In this setting, f̂ is often treated as a black box, in the sense that one is not typically concerned with the exact form of f̂, provided that it yields accurate predictions for Y.

FIGURE 2.3. The plot displays income as a function of years of education and seniority in the Income data set. The blue surface represents the true underlying relationship between income and years of education and seniority, which is known since the data are simulated. The red dots indicate the observed values of these quantities for 30 individuals.

As an example, suppose that X_1, ..., X_p are characteristics of a patient's blood sample that can be easily measured in a lab, and Y is a variable encoding the patient's risk for a severe adverse reaction to a particular drug. It is natural to seek to predict Y using X, since we can then avoid giving the drug in question to patients who are at high risk of an adverse reaction—that is, patients for whom the estimate of Y is high.

The accuracy of Ŷ as a prediction for Y depends on two quantities, which we will call the reducible error and the irreducible error. In general, f̂ will not be a perfect estimate for f, and this inaccuracy will introduce some error. This error is reducible because we can potentially improve the accuracy of f̂ by using the most appropriate statistical learning technique to estimate f. However, even if it were possible to form a perfect estimate for f, so that our estimated response took the form Ŷ = f(X), our prediction would still have some error in it! This is because Y is also a function of ε, which, by definition, cannot be predicted using X. Therefore, variability associated with ε also affects the accuracy of our predictions. This is known as the irreducible error, because no matter how well we estimate f, we cannot reduce the error introduced by ε.

Why is the irreducible error larger than zero? The quantity ε may contain unmeasured variables that are useful in predicting Y: since we don't measure them, f cannot use them for its prediction. The quantity ε may also contain unmeasurable variation. For example, the risk of an adverse reaction might vary for a given patient on a given day, depending on

manufacturing variation in the drug itself or the patient's general feeling of well-being on that day.

Consider a given estimate f̂ and a set of predictors X, which yields the prediction Ŷ = f̂(X). Assume for a moment that both f̂ and X are fixed. Then, it is easy to show that

E(Y − Ŷ)² = E[f(X) + ε − f̂(X)]² = [f(X) − f̂(X)]² + Var(ε),    (2.3)

where the first term, [f(X) − f̂(X)]², is the reducible error and the second term, Var(ε), is the irreducible error. Here E(Y − Ŷ)² represents the average, or expected value, of the squared difference between the predicted and actual value of Y, and Var(ε) represents the variance associated with the error term ε.

The focus of this book is on techniques for estimating f with the aim of minimizing the reducible error. It is important to keep in mind that the irreducible error will always provide an upper bound on the accuracy of our prediction for Y. This bound is almost always unknown in practice.

Inference

We are often interested in understanding the way that Y is affected as X_1, ..., X_p change. In this situation we wish to estimate f, but our goal is not necessarily to make predictions for Y. We instead want to understand the relationship between X and Y, or more specifically, to understand how Y changes as a function of X_1, ..., X_p. Now f̂ cannot be treated as a black box, because we need to know its exact form. In this setting, one may be interested in answering the following questions:

• Which predictors are associated with the response? It is often the case that only a small fraction of the available predictors are substantially associated with Y. Identifying the few important predictors among a large set of possible variables can be extremely useful, depending on the application.

• What is the relationship between the response and each predictor? Some predictors may have a positive relationship with Y, in the sense that increasing the predictor is associated with increasing values of Y. Other predictors may have the opposite relationship. Depending on the complexity of f, the relationship between the response and a given predictor may also depend on the values of the other predictors.

• Can the relationship between Y and each predictor be adequately summarized using a linear equation, or is the relationship more complicated? Historically, most methods for estimating f have taken a linear form. In some situations, such an assumption is reasonable or even desirable. But often the true relationship is more complicated, in which case a linear model may not provide an accurate representation of the relationship between the input and output variables.
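Returning to the decomposition in (2.3), it can also be checked numerically. The sketch below simulates responses at a fixed test point from a known f, uses a deliberately imperfect f̂ (a straight-line fit), and compares the average squared prediction error with its reducible and irreducible pieces; the particular f, noise level, and choice of f̂ are invented for illustration.

```r
# Numerically illustrating E(Y - Yhat)^2 = [f(X) - fhat(X)]^2 + Var(epsilon)
# at a fixed test point x0, with f, sigma, and fhat chosen for illustration.
set.seed(2)

f <- function(x) sin(2 * x)   # the (normally unknown) true f
sigma <- 0.5                  # sd of the irreducible error epsilon

# Training data and a deliberately imperfect estimate fhat: a linear fit
x_train <- runif(100, 0, 3)
y_train <- f(x_train) + rnorm(100, sd = sigma)
fhat <- lm(y ~ x, data = data.frame(x = x_train, y = y_train))

# Many independent test responses at the same x0
x0 <- 2
y_test <- f(x0) + rnorm(10000, sd = sigma)
yhat0 <- predict(fhat, newdata = data.frame(x = x0))

mean((y_test - yhat0)^2)      # total expected squared error (estimated)
(f(x0) - yhat0)^2             # reducible part
sigma^2                       # irreducible part, Var(epsilon)
```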

In this book, we will see a number of examples that fall into the prediction setting, the inference setting, or a combination of the two.

For instance, consider a company that is interested in conducting a direct-marketing campaign. The goal is to identify individuals who will respond positively to a mailing, based on observations of demographic variables measured on each individual. In this case, the demographic variables serve as predictors, and response to the marketing campaign (either positive or negative) serves as the outcome. The company is not interested in obtaining a deep understanding of the relationships between each individual predictor and the response; instead, the company simply wants an accurate model to predict the response using the predictors. This is an example of modeling for prediction.

In contrast, consider the Advertising data illustrated in Figure 2.1. One may be interested in answering questions such as:

– Which media contribute to sales?
– Which media generate the biggest boost in sales? or
– How much increase in sales is associated with a given increase in TV advertising?

This situation falls into the inference paradigm. Another example involves modeling the brand of a product that a customer might purchase based on variables such as price, store location, discount levels, competition price, and so forth. In this situation one might really be most interested in how each of the individual variables affects the probability of purchase. For instance, what effect will changing the price of a product have on sales? This is an example of modeling for inference.

Finally, some modeling could be conducted both for prediction and inference. For example, in a real estate setting, one may seek to relate values of homes to inputs such as crime rate, zoning, distance from a river, air quality, schools, income level of community, size of houses, and so forth. In this case one might be interested in how the individual input variables affect the prices; that is, how much extra will a house be worth if it has a view of the river? This is an inference problem. Alternatively, one may simply be interested in predicting the value of a home given its characteristics: is this house under- or over-valued? This is a prediction problem.

Depending on whether our ultimate goal is prediction, inference, or a combination of the two, different methods for estimating f may be appropriate. For example, linear models allow for relatively simple and interpretable inference, but may not yield as accurate predictions as some other approaches. In contrast, some of the highly non-linear approaches that we discuss in the later chapters of this book can potentially provide quite accurate predictions for Y, but this comes at the expense of a less interpretable model for which inference is more challenging.

2.1.2 How Do We Estimate f?

Throughout this book, we explore many linear and non-linear approaches for estimating f. However, these methods generally share certain characteristics. We provide an overview of these shared characteristics in this section. We will always assume that we have observed a set of n different data points. For example, in Figure 2.2 we observed n = 30 data points. These observations are called the training data because we will use these observations to train, or teach, our method how to estimate f. Let $x_{ij}$ represent the value of the jth predictor, or input, for observation i, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, p$. Correspondingly, let $y_i$ represent the response variable for the ith observation. Then our training data consist of $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, where $x_i = (x_{i1}, x_{i2}, \ldots, x_{ip})^T$.

Our goal is to apply a statistical learning method to the training data in order to estimate the unknown function f. In other words, we want to find a function $\hat{f}$ such that $Y \approx \hat{f}(X)$ for any observation (X, Y). Broadly speaking, most statistical learning methods for this task can be characterized as either parametric or non-parametric. We now briefly discuss these two types of approaches.

Parametric Methods

Parametric methods involve a two-step model-based approach.

1. First, we make an assumption about the functional form, or shape, of f. For example, one very simple assumption is that f is linear in X:
$$f(X) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p. \qquad (2.4)$$
This is a linear model, which will be discussed extensively in Chapter 3. Once we have assumed that f is linear, the problem of estimating f is greatly simplified. Instead of having to estimate an entirely arbitrary p-dimensional function f(X), one only needs to estimate the p + 1 coefficients $\beta_0, \beta_1, \ldots, \beta_p$.

2. After a model has been selected, we need a procedure that uses the training data to fit or train the model. In the case of the linear model (2.4), we need to estimate the parameters $\beta_0, \beta_1, \ldots, \beta_p$. That is, we want to find values of these parameters such that
$$Y \approx \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p.$$
The most common approach to fitting the model (2.4) is referred to as (ordinary) least squares, which we discuss in Chapter 3. However, least squares is one of many possible ways to fit the linear model. In Chapter 6, we discuss other approaches for estimating the parameters in (2.4).

The model-based approach just described is referred to as parametric; it reduces the problem of estimating f down to one of estimating a set of parameters.
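Once a linear form has been assumed, the two-step parametric recipe takes only a line or two of R. The sketch below is not the book's code: it simulates a small stand-in for the Income data (the variable names and the data-generating model are invented) and fits (2.4) by ordinary least squares with lm().

# Step 1: assume income is linear in education and seniority, as in (2.4).
# Step 2: estimate the coefficients by least squares with lm().
set.seed(2)
education <- runif(30, 10, 22)
seniority <- runif(30, 0, 150)
income    <- 20 + 4 * education + 0.3 * seniority + rnorm(30, sd = 8)   # toy truth
fit <- lm(income ~ education + seniority)
coef(fit)                                                  # estimates of beta_0, beta_1, beta_2
predict(fit, data.frame(education = 16, seniority = 60))   # a fitted value at a new point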

FIGURE 2.4. A linear model fit by least squares to the Income data from Figure 2.3. The observations are shown in red, and the yellow plane indicates the least squares fit to the data.

Assuming a parametric form for f simplifies the problem of estimating f because it is generally much easier to estimate a set of parameters, such as $\beta_0, \beta_1, \ldots, \beta_p$ in the linear model (2.4), than it is to fit an entirely arbitrary function f. The potential disadvantage of a parametric approach is that the model we choose will usually not match the true unknown form of f. If the chosen model is too far from the true f, then our estimate will be poor. We can try to address this problem by choosing flexible models that can fit many different possible functional forms for f. But in general, fitting a more flexible model requires estimating a greater number of parameters. These more complex models can lead to a phenomenon known as overfitting the data, which essentially means they follow the errors, or noise, too closely. These issues are discussed throughout this book.

Figure 2.4 shows an example of the parametric approach applied to the Income data from Figure 2.3. We have fit a linear model of the form
$$\texttt{income} \approx \beta_0 + \beta_1 \times \texttt{education} + \beta_2 \times \texttt{seniority}.$$
Since we have assumed a linear relationship between the response and the two predictors, the entire fitting problem reduces to estimating $\beta_0$, $\beta_1$, and $\beta_2$, which we do using least squares linear regression. Comparing Figure 2.3 to Figure 2.4, we can see that the linear fit given in Figure 2.4 is not quite right: the true f has some curvature that is not captured in the linear fit. However, the linear fit still appears to do a reasonable job of capturing the positive relationship between years of education and income, as well as the slightly less positive relationship between seniority and income.

FIGURE 2.5. A smooth thin-plate spline fit to the Income data from Figure 2.3 is shown in yellow; the observations are displayed in red. Splines are discussed in Chapter 7.

It may be that with such a small number of observations, this is the best we can do.

Non-parametric Methods

Non-parametric methods do not make explicit assumptions about the functional form of f. Instead they seek an estimate of f that gets as close to the data points as possible without being too rough or wiggly. Such approaches can have a major advantage over parametric approaches: by avoiding the assumption of a particular functional form for f, they have the potential to accurately fit a wider range of possible shapes for f. Any parametric approach brings with it the possibility that the functional form used to estimate f is very different from the true f, in which case the resulting model will not fit the data well. In contrast, non-parametric approaches completely avoid this danger, since essentially no assumption about the form of f is made. But non-parametric approaches do suffer from a major disadvantage: since they do not reduce the problem of estimating f to a small number of parameters, a very large number of observations (far more than is typically needed for a parametric approach) is required in order to obtain an accurate estimate for f.

An example of a non-parametric approach to fitting the Income data is shown in Figure 2.5. A thin-plate spline is used to estimate f. This approach does not impose any pre-specified model on f. It instead attempts to produce an estimate for f that is as close as possible to the observed data, subject to the fit (that is, the yellow surface in Figure 2.5) being smooth.
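Thin-plate splines of this kind can be fit in R with, for example, the mgcv package, whose s(..., bs = "tp") term implements thin plate regression splines; this is one possible tool, not necessarily the one used to produce Figures 2.5 and 2.6. In the sketch below all of the data are simulated stand-ins for the Income data (with more observations than the 30 in the figure, so the surface has something to work with), and the tiny sp value used to force a rough fit is arbitrary.

# Thin-plate spline surfaces for a response depending on two predictors.
# Requires the mgcv package; variable names and the data-generating model are invented.
library(mgcv)
set.seed(2)
education <- runif(100, 10, 22)
seniority <- runif(100, 0, 150)
income    <- 20 + 3 * sqrt(education) * log(seniority + 1) + rnorm(100, sd = 5)
smooth_fit <- gam(income ~ s(education, seniority, bs = "tp"))              # smoothness chosen automatically (cf. Figure 2.5)
rough_fit  <- gam(income ~ s(education, seniority, bs = "tp"), sp = 1e-8)   # almost no penalty: a rough, overfit surface (cf. Figure 2.6)
vis.gam(smooth_fit, theta = 35)   # perspective plot of the fitted surface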

FIGURE 2.6. A rough thin-plate spline fit to the Income data from Figure 2.3. This fit makes zero errors on the training data.

In this case, the non-parametric fit has produced a remarkably accurate estimate of the true f shown in Figure 2.3. In order to fit a thin-plate spline, the data analyst must select a level of smoothness. Figure 2.6 shows the same thin-plate spline fit using a lower level of smoothness, allowing for a rougher fit. The resulting estimate fits the observed data perfectly! However, the spline fit shown in Figure 2.6 is far more variable than the true function f from Figure 2.3. This is an example of overfitting the data, which we discussed previously. It is an undesirable situation because the fit obtained will not yield accurate estimates of the response on new observations that were not part of the original training data set. We discuss methods for choosing the correct amount of smoothness in Chapter 5. Splines are discussed in Chapter 7.

As we have seen, there are advantages and disadvantages to parametric and non-parametric methods for statistical learning. We explore both types of methods throughout this book.

2.1.3 The Trade-Off Between Prediction Accuracy and Model Interpretability

Of the many methods that we examine in this book, some are less flexible, or more restrictive, in the sense that they can produce just a relatively small range of shapes to estimate f. For example, linear regression is a relatively inflexible approach, because it can only generate linear functions such as the lines shown in Figure 2.1 or the plane shown in Figure 2.3.

FIGURE 2.7. A representation of the trade-off between flexibility and interpretability, using different statistical learning methods. In general, as the flexibility of a method increases, its interpretability decreases. (The methods arranged along this trade-off in the figure run from subset selection, the lasso, and least squares at the high-interpretability, low-flexibility end, through generalized additive models and trees, to bagging, boosting, and support vector machines at the high-flexibility, low-interpretability end.)

Other methods, such as the thin-plate splines shown in Figures 2.5 and 2.6, are considerably more flexible because they can generate a much wider range of possible shapes to estimate f.

One might reasonably ask the following question: why would we ever choose to use a more restrictive method instead of a very flexible approach? There are several reasons that we might prefer a more restrictive model. If we are mainly interested in inference, then restrictive models are much more interpretable. For instance, when inference is the goal, the linear model may be a good choice since it will be quite easy to understand the relationship between Y and $X_1, X_2, \ldots, X_p$. In contrast, very flexible approaches, such as the splines discussed in Chapter 7 and displayed in Figures 2.5 and 2.6, and the boosting methods discussed in Chapter 8, can lead to such complicated estimates of f that it is difficult to understand how any individual predictor is associated with the response.

Figure 2.7 provides an illustration of the trade-off between flexibility and interpretability for some of the methods that we cover in this book. Least squares linear regression, discussed in Chapter 3, is relatively inflexible but is quite interpretable. The lasso, discussed in Chapter 6, relies upon the linear model (2.4) but uses an alternative fitting procedure for estimating the coefficients $\beta_0, \beta_1, \ldots, \beta_p$. The new procedure is more restrictive in estimating the coefficients, and sets a number of them to exactly zero. Hence in this sense the lasso is a less flexible approach than linear regression. It is also more interpretable than linear regression, because in the final model the response variable will only be related to a small subset of the predictors, namely, those with nonzero coefficient estimates.
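Although the lasso is not covered until Chapter 6, a sense of how it sets coefficients to exactly zero can be had from a few lines of R. The sketch below uses the glmnet package, one common implementation that is not introduced in this chapter's lab; the simulated data and the particular penalty value are arbitrary choices for illustration.

# The lasso: the linear model (2.4) fit with a penalty that zeroes out some coefficients.
# Requires the glmnet package; data are simulated solely for illustration.
library(glmnet)
set.seed(3)
x <- matrix(rnorm(100 * 10), nrow = 100, ncol = 10)   # 10 candidate predictors
y <- 2 * x[, 1] - 3 * x[, 4] + rnorm(100)             # only predictors 1 and 4 matter
lasso_fit <- glmnet(x, y, alpha = 1)                  # alpha = 1 gives the lasso
coef(lasso_fit, s = 0.1)                              # at penalty s = 0.1, most coefficients are exactly 0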

Generalized additive models (GAMs), discussed in Chapter 7, instead extend the linear model (2.4) to allow for certain non-linear relationships. Consequently, GAMs are more flexible than linear regression. They are also somewhat less interpretable than linear regression, because the relationship between each predictor and the response is now modeled using a curve. Finally, fully non-linear methods such as bagging, boosting, and support vector machines with non-linear kernels, discussed in Chapters 8 and 9, are highly flexible approaches that are harder to interpret.

We have established that when inference is the goal, there are clear advantages to using simple and relatively inflexible statistical learning methods. In some settings, however, we are only interested in prediction, and the interpretability of the predictive model is simply not of interest. For instance, if we seek to develop an algorithm to predict the price of a stock, our sole requirement for the algorithm is that it predict accurately; interpretability is not a concern. In this setting, we might expect that it will be best to use the most flexible model available. Surprisingly, this is not always the case! We will often obtain more accurate predictions using a less flexible method. This phenomenon, which may seem counterintuitive at first glance, has to do with the potential for overfitting in highly flexible methods. We saw an example of overfitting in Figure 2.6. We will discuss this very important concept further in Section 2.2 and throughout this book.

2.1.4 Supervised Versus Unsupervised Learning

Most statistical learning problems fall into one of two categories: supervised or unsupervised. The examples that we have discussed so far in this chapter all fall into the supervised learning domain. For each observation of the predictor measurement(s) $x_i$, $i = 1, \ldots, n$, there is an associated response measurement $y_i$. We wish to fit a model that relates the response to the predictors, with the aim of accurately predicting the response for future observations (prediction) or better understanding the relationship between the response and the predictors (inference). Many classical statistical learning methods such as linear regression and logistic regression (Chapter 4), as well as more modern approaches such as GAM, boosting, and support vector machines, operate in the supervised learning domain. The vast majority of this book is devoted to this setting.

In contrast, unsupervised learning describes the somewhat more challenging situation in which for every observation $i = 1, \ldots, n$, we observe a vector of measurements $x_i$ but no associated response $y_i$. It is not possible to fit a linear regression model, since there is no response variable to predict. In this setting, we are in some sense working blind; the situation is referred to as unsupervised because we lack a response variable that can supervise our analysis. What sort of statistical analysis is possible?

FIGURE 2.8. A clustering data set involving three groups. Each group is shown using a different colored symbol. Left: The three groups are well-separated. In this setting, a clustering approach should successfully identify the three groups. Right: There is some overlap among the groups. Now the clustering task is more challenging.

We can seek to understand the relationships between the variables or between the observations. One statistical learning tool that we may use in this setting is cluster analysis, or clustering. The goal of cluster analysis is to ascertain, on the basis of $x_1, \ldots, x_n$, whether the observations fall into relatively distinct groups. For example, in a market segmentation study we might observe multiple characteristics (variables) for potential customers, such as zip code, family income, and shopping habits. We might believe that the customers fall into different groups, such as big spenders versus low spenders. If the information about each customer's spending patterns were available, then a supervised analysis would be possible. However, this information is not available; that is, we do not know whether each potential customer is a big spender or not. In this setting, we can try to cluster the customers on the basis of the variables measured, in order to identify distinct groups of potential customers. Identifying such groups can be of interest because it might be that the groups differ with respect to some property of interest, such as spending habits.

Figure 2.8 provides a simple illustration of the clustering problem. We have plotted 150 observations with measurements on two variables, $X_1$ and $X_2$. Each observation corresponds to one of three distinct groups. For illustrative purposes, we have plotted the members of each group using different colors and symbols. However, in practice the group memberships are unknown, and the goal is to determine the group to which each observation belongs. In the left-hand panel of Figure 2.8, this is a relatively easy task because the groups are well-separated. In contrast, the right-hand panel illustrates a more challenging problem in which there is some overlap between the groups.
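Clustering methods are discussed in Chapter 10, but as a preview, the K-means algorithm available in base R can recover groups like the well-separated ones in the left-hand panel of Figure 2.8. The data below are simulated for this sketch (three invented Gaussian clusters of 50 points each); they are not the data behind the figure.

# K-means clustering on simulated data with three well-separated groups.
set.seed(4)
x <- rbind(cbind(rnorm(50, mean = 0), rnorm(50, mean = 0)),
           cbind(rnorm(50, mean = 4), rnorm(50, mean = 0)),
           cbind(rnorm(50, mean = 2), rnorm(50, mean = 6)))
km <- kmeans(x, centers = 3, nstart = 20)    # 3 clusters, 20 random starts
table(km$cluster)                            # sizes of the recovered clusters
plot(x, col = km$cluster, pch = km$cluster)  # points colored by assigned cluster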

A clustering method could not be expected to assign all of the overlapping points to their correct group (blue, green, or orange).

In the examples shown in Figure 2.8, there are only two variables, and so one can simply visually inspect the scatterplots of the observations in order to identify clusters. However, in practice we often encounter data sets that contain many more than two variables. In this case, we cannot easily plot the observations. For instance, if there are p variables in our data set, then $p(p-1)/2$ distinct scatterplots can be made, and visual inspection is simply not a viable way to identify clusters. For this reason, automated clustering methods are important. We discuss clustering and other unsupervised learning approaches in Chapter 10.

Many problems fall naturally into the supervised or unsupervised learning paradigms. However, sometimes the question of whether an analysis should be considered supervised or unsupervised is less clear-cut. For instance, suppose that we have a set of n observations. For m of the observations, where m < n, we have both predictor measurements and a response measurement; for the remaining n − m observations, we have predictor measurements but no response measurement. Such a setting is often referred to as a semi-supervised learning problem, and is beyond the scope of this book.

2.1.5 Regression Versus Classification Problems

Variables can be characterized as either quantitative or qualitative (also known as categorical). Problems with a quantitative response are typically referred to as regression problems, while those involving a qualitative response are typically referred to as classification problems, although the distinction is not always crisp.

Some statistical methods, such as K-nearest neighbors (Chapters 2 and 4) and boosting (Chapter 8), can be used in the case of either quantitative or qualitative responses.

We tend to select statistical learning methods on the basis of whether the response is quantitative or qualitative; i.e. we might use linear regression when quantitative and logistic regression when qualitative. However, whether the predictors are qualitative or quantitative is generally considered less important. Most of the statistical learning methods discussed in this book can be applied regardless of the predictor variable type, provided that any qualitative predictors are properly coded before the analysis is performed. This is discussed in Chapter 3.

2.2 Assessing Model Accuracy

One of the key aims of this book is to introduce the reader to a wide range of statistical learning methods that extend far beyond the standard linear regression approach. Why is it necessary to introduce so many different statistical learning approaches, rather than just a single best method? There is no free lunch in statistics: no one method dominates all others over all possible data sets. On a particular data set, one specific method may work best, but some other method may work better on a similar but different data set. Hence it is an important task to decide for any given set of data which method produces the best results. Selecting the best approach can be one of the most challenging parts of performing statistical learning in practice.

In this section, we discuss some of the most important concepts that arise in selecting a statistical learning procedure for a specific data set. As the book progresses, we will explain how the concepts presented here can be applied in practice.

2.2.1 Measuring the Quality of Fit

In order to evaluate the performance of a statistical learning method on a given data set, we need some way to measure how well its predictions actually match the observed data. That is, we need to quantify the extent to which the predicted response value for a given observation is close to the true response value for that observation. In the regression setting, the most commonly-used measure is the mean squared error (MSE), given by

$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{f}(x_i)\right)^2, \qquad (2.5)$$

where $\hat{f}(x_i)$ is the prediction that $\hat{f}$ gives for the ith observation. The MSE will be small if the predicted responses are very close to the true responses, and will be large if for some of the observations the predicted and true responses differ substantially.
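In R, computing the MSE in (2.5) for a fitted model takes a single line. The sketch below uses an ordinary polynomial fit on simulated data; the data-generating model and all names are invented for illustration.

# Training MSE (2.5) for a fitted model, on simulated data.
set.seed(5)
x <- runif(100); y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)
fit <- lm(y ~ poly(x, 3))                  # a cubic polynomial fit
mse_train <- mean((y - predict(fit))^2)    # equivalently, mean(residuals(fit)^2)
mse_train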

The MSE in (2.5) is computed using the training data that was used to fit the model, and so should more accurately be referred to as the training MSE. But in general, we do not really care how well the method works on the training data. Rather, we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data. Why is this what we care about? Suppose that we are interested in developing an algorithm to predict a stock's price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don't really care how well our method predicts last week's stock price. We instead care about how well it will predict tomorrow's price or next month's price. On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for future patients based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes.

To state it more mathematically, suppose that we fit our statistical learning method on our training observations $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, and we obtain the estimate $\hat{f}$. We can then compute $\hat{f}(x_1), \hat{f}(x_2), \ldots, \hat{f}(x_n)$. If these are approximately equal to $y_1, y_2, \ldots, y_n$, then the training MSE given by (2.5) is small. However, we are really not interested in whether $\hat{f}(x_i) \approx y_i$; instead, we want to know whether $\hat{f}(x_0)$ is approximately equal to $y_0$, where $(x_0, y_0)$ is a previously unseen test observation not used to train the statistical learning method. We want to choose the method that gives the lowest test MSE, as opposed to the lowest training MSE. In other words, if we had a large number of test observations, we could compute

$$\mathrm{Ave}\left(\hat{f}(x_0) - y_0\right)^2, \qquad (2.6)$$

the average squared prediction error for these test observations $(x_0, y_0)$. We would like to select the model for which the average of this quantity, the test MSE, is as small as possible.

How can we go about trying to select a method that minimizes the test MSE? In some settings, we may have a test data set available; that is, we may have access to a set of observations that were not used to train the statistical learning method. We can then simply evaluate (2.6) on the test observations, and select the learning method for which the test MSE is smallest.

FIGURE 2.9. Left: Data simulated from f, shown in black. Three estimates of f are shown: the linear regression line (orange curve), and two smoothing spline fits (blue and green curves). Right: Training MSE (grey curve), test MSE (red curve), and minimum possible test MSE over all methods (dashed line). Squares represent the training and test MSEs for the three fits shown in the left-hand panel.

But what if no test observations are available? In that case, one might imagine simply selecting a statistical learning method that minimizes the training MSE (2.5). This seems like it might be a sensible approach, since the training MSE and the test MSE appear to be closely related. Unfortunately, there is a fundamental problem with this strategy: there is no guarantee that the method with the lowest training MSE will also have the lowest test MSE. Roughly speaking, the problem is that many statistical methods specifically estimate coefficients so as to minimize the training set MSE. For these methods, the training set MSE can be quite small, but the test MSE is often much larger.

Figure 2.9 illustrates this phenomenon on a simple example. In the left-hand panel of Figure 2.9, we have generated observations from (2.1) with the true f given by the black curve. The orange, blue and green curves illustrate three possible estimates for f obtained using methods with increasing levels of flexibility. The orange line is the linear regression fit, which is relatively inflexible. The blue and green curves were produced using smoothing splines, discussed in Chapter 7, with different levels of smoothness. It is clear that as the level of flexibility increases, the curves fit the observed data more closely. The green curve is the most flexible and matches the data very well; however, we observe that it fits the true f (shown in black) poorly because it is too wiggly. By adjusting the level of flexibility of the smoothing spline fit, we can produce many different fits to this data.

We now move on to the right-hand panel of Figure 2.9. The grey curve displays the average training MSE as a function of flexibility, or more formally the degrees of freedom, for a number of smoothing splines. The degrees of freedom is a quantity that summarizes the flexibility of a curve; it is discussed more fully in Chapter 7. The orange, blue and green squares indicate the MSEs associated with the corresponding curves in the left-hand panel. A more restricted and hence smoother curve has fewer degrees of freedom than a wiggly curve; note that in Figure 2.9, linear regression is at the most restrictive end, with two degrees of freedom. The training MSE declines monotonically as flexibility increases. In this example the true f is non-linear, and so the orange linear fit is not flexible enough to estimate f well. The green curve has the lowest training MSE of all three methods, since it corresponds to the most flexible of the three curves fit in the left-hand panel.

In this example, we know the true function f, and so we can also compute the test MSE over a very large test set, as a function of flexibility. (Of course, in general f is unknown, so this will not be possible.) The test MSE is displayed using the red curve in the right-hand panel of Figure 2.9. As with the training MSE, the test MSE initially declines as the level of flexibility increases. However, at some point the test MSE levels off and then starts to increase again. Consequently, the orange and green curves both have high test MSE. The blue curve minimizes the test MSE, which should not be surprising given that visually it appears to estimate f the best in the left-hand panel of Figure 2.9. The horizontal dashed line indicates Var(ε), the irreducible error in (2.3), which corresponds to the lowest achievable test MSE among all possible methods. Hence, the smoothing spline represented by the blue curve is close to optimal.

In the right-hand panel of Figure 2.9, as the flexibility of the statistical learning method increases, we observe a monotone decrease in the training MSE and a U-shape in the test MSE. This is a fundamental property of statistical learning that holds regardless of the particular data set at hand and regardless of the statistical method being used. As model flexibility increases, training MSE will decrease, but the test MSE may not. When a given method yields a small training MSE but a large test MSE, we are said to be overfitting the data. This happens because our statistical learning procedure is working too hard to find patterns in the training data, and may be picking up some patterns that are just caused by random chance rather than by true properties of the unknown function f. When we overfit the training data, the test MSE will be very large because the supposed patterns that the method found in the training data simply don't exist in the test data. Note that regardless of whether or not overfitting has occurred, we almost always expect the training MSE to be smaller than the test MSE because most statistical learning methods either directly or indirectly seek to minimize the training MSE. Overfitting refers specifically to the case in which a less flexible model would have yielded a smaller test MSE.
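The monotone decrease in training MSE and the U-shape in test MSE can be traced out in a few lines of R using smooth.spline(), whose df argument controls flexibility. The sketch below is not the code behind Figures 2.9–2.11; the true f, the noise level, and the sample sizes are all arbitrary choices.

# Training and test MSE for smoothing splines of increasing flexibility.
set.seed(6)
f <- function(x) sin(3 * x) + x                               # an arbitrary true f
x  <- runif(50,   0, 3); y  <- f(x)  + rnorm(50,   sd = 0.5)  # training data
x0 <- runif(2000, 0, 3); y0 <- f(x0) + rnorm(2000, sd = 0.5)  # large test set
for (df in c(2, 5, 10, 20, 30)) {
  fit <- smooth.spline(x, y, df = df)
  mse_train <- mean((y  - predict(fit, x)$y)^2)
  mse_test  <- mean((y0 - predict(fit, x0)$y)^2)
  cat("df =", df, " train MSE =", round(mse_train, 3),
      " test MSE =", round(mse_test, 3), "\n")
}
# Training MSE keeps falling as df grows; test MSE falls and then rises again (the U-shape).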

FIGURE 2.10. Details are as in Figure 2.9, using a different true f that is much closer to linear. In this setting, linear regression provides a very good fit to the data.

Figure 2.10 provides another example in which the true f is approximately linear. Again we observe that the training MSE decreases monotonically as the model flexibility increases, and that there is a U-shape in the test MSE. However, because the truth is close to linear, the test MSE only decreases slightly before increasing again, so that the orange least squares fit is substantially better than the highly flexible green curve. Finally, Figure 2.11 displays an example in which f is highly non-linear. The training and test MSE curves still exhibit the same general patterns, but now there is a rapid decrease in both curves before the test MSE starts to increase slowly.

In practice, one can usually compute the training MSE with relative ease, but estimating the test MSE is considerably more difficult because usually no test data are available. As the previous three examples illustrate, the flexibility level corresponding to the model with the minimal test MSE can vary considerably among data sets. Throughout this book, we discuss a variety of approaches that can be used in practice to estimate this minimum point. One important method is cross-validation (Chapter 5), which is a method for estimating the test MSE using the training data.

2.2.2 The Bias-Variance Trade-Off

The U-shape observed in the test MSE curves (Figures 2.9–2.11) turns out to be the result of two competing properties of statistical learning methods. Though the mathematical proof is beyond the scope of this book, it is possible to show that the expected test MSE, for a given value $x_0$, can always be decomposed into the sum of three fundamental quantities: the variance of $\hat{f}(x_0)$, the squared bias of $\hat{f}(x_0)$, and the variance of the error term ε.

FIGURE 2.11. Details are as in Figure 2.9, using a different f that is far from linear. In this setting, linear regression provides a very poor fit to the data.

That is,

$$E\left(y_0 - \hat{f}(x_0)\right)^2 = \mathrm{Var}(\hat{f}(x_0)) + [\mathrm{Bias}(\hat{f}(x_0))]^2 + \mathrm{Var}(\epsilon). \qquad (2.7)$$

Here the notation $E\left(y_0 - \hat{f}(x_0)\right)^2$ defines the expected test MSE, and refers to the average test MSE that we would obtain if we repeatedly estimated f using a large number of training sets, and tested each at $x_0$. The overall expected test MSE can be computed by averaging $E\left(y_0 - \hat{f}(x_0)\right)^2$ over all possible values of $x_0$ in the test set.

Equation 2.7 tells us that in order to minimize the expected test error, we need to select a statistical learning method that simultaneously achieves low variance and low bias. Note that variance is inherently a nonnegative quantity, and squared bias is also nonnegative. Hence, we see that the expected test MSE can never lie below Var(ε), the irreducible error from (2.3).

What do we mean by the variance and bias of a statistical learning method? Variance refers to the amount by which $\hat{f}$ would change if we estimated it using a different training data set. Since the training data are used to fit the statistical learning method, different training data sets will result in a different $\hat{f}$. But ideally the estimate for f should not vary too much between training sets. However, if a method has high variance then small changes in the training data can result in large changes in $\hat{f}$. In general, more flexible statistical methods have higher variance.

Consider the green and orange curves in Figure 2.9. The flexible green curve is following the observations very closely. It has high variance because changing any one of these data points may cause the estimate $\hat{f}$ to change considerably. In contrast, the orange least squares line is relatively inflexible and has low variance, because moving any single observation will likely cause only a small shift in the position of the line.

On the other hand, bias refers to the error that is introduced by approximating a real-life problem, which may be extremely complicated, by a much simpler model. For example, linear regression assumes that there is a linear relationship between Y and $X_1, X_2, \ldots, X_p$. It is unlikely that any real-life problem truly has such a simple linear relationship, and so performing linear regression will undoubtedly result in some bias in the estimate of f. In Figure 2.11, the true f is substantially non-linear, so no matter how many training observations we are given, it will not be possible to produce an accurate estimate using linear regression. In other words, linear regression results in high bias in this example. However, in Figure 2.10 the true f is very close to linear, and so given enough data, it should be possible for linear regression to produce an accurate estimate. Generally, more flexible methods result in less bias.

As a general rule, as we use more flexible methods, the variance will increase and the bias will decrease. The relative rate of change of these two quantities determines whether the test MSE increases or decreases. As we increase the flexibility of a class of methods, the bias tends to initially decrease faster than the variance increases. Consequently, the expected test MSE declines. However, at some point increasing flexibility has little impact on the bias but starts to significantly increase the variance. When this happens the test MSE increases. Note that we observed this pattern of decreasing test MSE followed by increasing test MSE in the right-hand panels of Figures 2.9–2.11.

The three plots in Figure 2.12 illustrate Equation 2.7 for the examples in Figures 2.9–2.11. In each case the blue solid curve represents the squared bias, for different levels of flexibility, while the orange curve corresponds to the variance. The horizontal dashed line represents Var(ε), the irreducible error. Finally, the red curve, corresponding to the test set MSE, is the sum of these three quantities. In all three cases, the variance increases and the bias decreases as the method's flexibility increases. However, the flexibility level corresponding to the optimal test MSE differs considerably among the three data sets, because the squared bias and variance change at different rates in each of the data sets. In the left-hand panel of Figure 2.12, the bias initially decreases rapidly, resulting in an initial sharp decrease in the expected test MSE. On the other hand, in the center panel of Figure 2.12 the true f is close to linear, so there is only a small decrease in bias as flexibility increases, and the test MSE only declines slightly before increasing rapidly as the variance increases. Finally, in the right-hand panel of Figure 2.12, as flexibility increases, there is a dramatic decline in bias because the true f is very non-linear.

FIGURE 2.12. Squared bias (blue curve), variance (orange curve), Var(ε) (dashed line), and test MSE (red curve) for the three data sets in Figures 2.9–2.11. The vertical dotted line indicates the flexibility level corresponding to the smallest test MSE.

There is also very little increase in variance as flexibility increases. Consequently, the test MSE declines substantially before experiencing a small increase as model flexibility increases.

The relationship between bias, variance, and test set MSE given in Equation 2.7 and displayed in Figure 2.12 is referred to as the bias-variance trade-off. Good test set performance of a statistical learning method requires low variance as well as low squared bias. This is referred to as a trade-off because it is easy to obtain a method with extremely low bias but high variance (for instance, by drawing a curve that passes through every single training observation) or a method with very low variance but high bias (by fitting a horizontal line to the data). The challenge lies in finding a method for which both the variance and the squared bias are low. This trade-off is one of the most important recurring themes in this book.

In a real-life situation in which f is unobserved, it is generally not possible to explicitly compute the test MSE, bias, or variance for a statistical learning method. Nevertheless, one should always keep the bias-variance trade-off in mind. In this book we explore methods that are extremely flexible and hence can essentially eliminate bias. However, this does not guarantee that they will outperform a much simpler method such as linear regression. To take an extreme example, suppose that the true f is linear. In this situation linear regression will have no bias, making it very hard for a more flexible method to compete. In contrast, if the true f is highly non-linear and we have an ample number of training observations, then we may do better using a highly flexible approach, as in Figure 2.11. In Chapter 5 we discuss cross-validation, which is a way to estimate the test MSE using the training data.
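Because f is known in a simulation, the decomposition in (2.7) can be checked directly by refitting a method on many independently drawn training sets and examining its predictions at a fixed point $x_0$. The sketch below does this for a linear fit to a non-linear f; everything in it (the true f, the noise level, the sample sizes, the choice of $x_0$) is invented for illustration.

# Estimating the bias and variance of fhat(x0) by simulation, as in (2.7).
set.seed(7)
f <- function(x) sin(2 * x)            # an arbitrary non-linear true f
sigma <- 0.3; x0 <- 1.5                # fixed test point
preds <- replicate(2000, {             # 2000 independent training sets
  x <- runif(50, 0, 3)
  y <- f(x) + rnorm(50, sd = sigma)
  predict(lm(y ~ x), data.frame(x = x0))   # fhat(x0) from this training set
})
variance <- var(preds)                 # Var(fhat(x0))
bias2    <- (mean(preds) - f(x0))^2    # [Bias(fhat(x0))]^2
variance + bias2 + sigma^2             # the three terms of (2.7) added together
mean((f(x0) + rnorm(2000, sd = sigma) - preds)^2)   # direct estimate of the expected test MSE at x0, for comparison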

2.2.3 The Classification Setting

Thus far, our discussion of model accuracy has been focused on the regression setting. But many of the concepts that we have encountered, such as the bias-variance trade-off, transfer over to the classification setting with only some modifications due to the fact that $y_i$ is no longer numerical. Suppose that we seek to estimate f on the basis of training observations $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, where now $y_1, \ldots, y_n$ are qualitative. The most common approach for quantifying the accuracy of our estimate $\hat{f}$ is the training error rate, the proportion of mistakes that are made if we apply our estimate $\hat{f}$ to the training observations:

$$\frac{1}{n} \sum_{i=1}^{n} I(y_i \neq \hat{y}_i). \qquad (2.8)$$

Here $\hat{y}_i$ is the predicted class label for the ith observation using $\hat{f}$, and $I(y_i \neq \hat{y}_i)$ is an indicator variable that equals 1 if $y_i \neq \hat{y}_i$ and zero if $y_i = \hat{y}_i$. If $I(y_i \neq \hat{y}_i) = 0$ then the ith observation was classified correctly by our classification method; otherwise it was misclassified. Hence Equation 2.8 computes the fraction of incorrect classifications.

Equation 2.8 is referred to as the training error rate because it is computed based on the data that was used to train our classifier. As in the regression setting, we are most interested in the error rates that result from applying our classifier to test observations that were not used in training. The test error rate associated with a set of test observations of the form $(x_0, y_0)$ is given by

$$\mathrm{Ave}\left(I(y_0 \neq \hat{y}_0)\right), \qquad (2.9)$$

where $\hat{y}_0$ is the predicted class label that results from applying the classifier to the test observation with predictor $x_0$. A good classifier is one for which the test error (2.9) is smallest.

The Bayes Classifier

It is possible to show (though the proof is outside of the scope of this book) that the test error rate given in (2.9) is minimized, on average, by a very simple classifier that assigns each observation to the most likely class, given its predictor values. In other words, we should simply assign a test observation with predictor vector $x_0$ to the class j for which

$$\Pr(Y = j \mid X = x_0) \qquad (2.10)$$

is largest. Note that (2.10) is a conditional probability: it is the probability that Y = j, given the observed predictor vector $x_0$. This very simple classifier is called the Bayes classifier. In a two-class problem where there are only two possible response values, say class 1 or class 2, the Bayes classifier corresponds to predicting class one if $\Pr(Y = 1 \mid X = x_0) > 0.5$, and class two otherwise.
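When the conditional probabilities are known, as in a simulation, the Bayes classifier can actually be computed. The sketch below does this on a much simpler, invented one-dimensional example (the conditional probability function and the sample size are arbitrary): it assigns each observation to its most likely class and evaluates the resulting error rate as in (2.9).

# The Bayes classifier on simulated data where Pr(Y = 1 | X = x) is known by construction.
set.seed(10)
p <- function(x) exp(2 * x) / (1 + exp(2 * x))   # true conditional probability of class 1
x <- rnorm(5000)
y <- rbinom(5000, size = 1, prob = p(x))         # simulated class labels
y_hat_bayes <- ifelse(p(x) > 0.5, 1, 0)          # assign each point to its most likely class
mean(y != y_hat_bayes)                           # error rate, computed as in (2.9)
1 - mean(pmax(p(x), 1 - p(x)))                   # the same error rate computed directly from the true probabilities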

FIGURE 2.13. A simulated data set consisting of 100 observations in each of two groups, indicated in blue and in orange. The purple dashed line represents the Bayes decision boundary. The orange background grid indicates the region in which a test observation will be assigned to the orange class, and the blue background grid indicates the region in which a test observation will be assigned to the blue class.

Figure 2.13 provides an example using a simulated data set in a two-dimensional space consisting of predictors $X_1$ and $X_2$. The orange and blue circles correspond to training observations that belong to two different classes. For each value of $X_1$ and $X_2$, there is a different probability of the response being orange or blue. Since this is simulated data, we know how the data were generated and we can calculate the conditional probabilities for each value of $X_1$ and $X_2$. The orange shaded region reflects the set of points for which Pr(Y = orange | X) is greater than 50%, while the blue shaded region indicates the set of points for which the probability is below 50%. The purple dashed line represents the points where the probability is exactly 50%. This is called the Bayes decision boundary. The Bayes classifier's prediction is determined by the Bayes decision boundary; an observation that falls on the orange side of the boundary will be assigned to the orange class, and similarly an observation on the blue side of the boundary will be assigned to the blue class.

The Bayes classifier produces the lowest possible test error rate, called the Bayes error rate. Since the Bayes classifier will always choose the class for which (2.10) is largest, the error rate at $X = x_0$ will be $1 - \max_j \Pr(Y = j \mid X = x_0)$. In general, the overall Bayes error rate is given by

$$1 - E\left(\max_j \Pr(Y = j \mid X)\right), \qquad (2.11)$$

where the expectation averages the probability over all possible values of X.

For our simulated data, the Bayes error rate is 0.1304. It is greater than zero, because the classes overlap in the true population, so $\max_j \Pr(Y = j \mid X = x_0) < 1$ for some values of $x_0$. The Bayes error rate is analogous to the irreducible error, discussed earlier.

K-Nearest Neighbors

In theory we would always like to predict qualitative responses using the Bayes classifier. But for real data, we do not know the conditional distribution of Y given X, and so computing the Bayes classifier is impossible. Therefore, the Bayes classifier serves as an unattainable gold standard against which to compare other methods. Many approaches attempt to estimate the conditional distribution of Y given X, and then classify a given observation to the class with highest estimated probability. One such method is the K-nearest neighbors (KNN) classifier. Given a positive integer K and a test observation $x_0$, the KNN classifier first identifies the K points in the training data that are closest to $x_0$, represented by $\mathcal{N}_0$. It then estimates the conditional probability for class j as the fraction of points in $\mathcal{N}_0$ whose response values equal j:

$$\Pr(Y = j \mid X = x_0) = \frac{1}{K} \sum_{i \in \mathcal{N}_0} I(y_i = j). \qquad (2.12)$$

Finally, KNN applies Bayes rule and classifies the test observation $x_0$ to the class with the largest probability.

Figure 2.14 provides an illustrative example of the KNN approach. In the left-hand panel, we have plotted a small training data set consisting of six blue and six orange observations. Our goal is to make a prediction for the point labeled by the black cross. Suppose that we choose K = 3. Then KNN will first identify the three observations that are closest to the cross. This neighborhood is shown as a circle. It consists of two blue points and one orange point, resulting in estimated probabilities of 2/3 for the blue class and 1/3 for the orange class. Hence KNN will predict that the black cross belongs to the blue class. In the right-hand panel of Figure 2.14 we have applied the KNN approach with K = 3 at all of the possible values for $X_1$ and $X_2$, and have drawn in the corresponding KNN decision boundary.

Despite the fact that it is a very simple approach, KNN can often produce classifiers that are surprisingly close to the optimal Bayes classifier. Figure 2.15 displays the KNN decision boundary, using K = 10, when applied to the larger simulated data set from Figure 2.13. Notice that even though the true distribution is not known by the KNN classifier, the KNN decision boundary is very close to that of the Bayes classifier. The test error rate using KNN is 0.1363, which is close to the Bayes error rate of 0.1304.
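The KNN classifier is available in several R packages; one common implementation is the knn() function in the class package. The sketch below applies it with K = 3 to a small made-up two-class data set loosely resembling the left-hand panel of Figure 2.14 (the training points and the single test point are invented for illustration).

# KNN classification with K = 3 using class::knn(); data are simulated for this sketch.
library(class)
set.seed(8)
train_x <- matrix(rnorm(24), ncol = 2)                # six "blue" and six "orange" training points
train_y <- factor(rep(c("blue", "orange"), each = 6))
train_x[train_y == "orange", ] <- train_x[train_y == "orange", ] + 1.5   # shift one class
test_x  <- matrix(c(0.75, 0.75), ncol = 2)            # a single test observation (the "black cross")
knn(train = train_x, test = test_x, cl = train_y, k = 3, prob = TRUE)
# The result is the predicted class; attr(, "prob") holds the winning class's estimated probability, as in (2.12).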

FIGURE 2.14. The KNN approach, using K = 3, is illustrated in a simple situation with six blue observations and six orange observations. Left: a test observation at which a predicted class label is desired is shown as a black cross. The three closest points to the test observation are identified, and it is predicted that the test observation belongs to the most commonly-occurring class, in this case blue. Right: The KNN decision boundary for this example is shown in black. The blue grid indicates the region in which a test observation will be assigned to the blue class, and the orange grid indicates the region in which it will be assigned to the orange class.

The choice of K has a drastic effect on the KNN classifier obtained. Figure 2.16 displays two KNN fits to the simulated data from Figure 2.13, using K = 1 and K = 100. When K = 1, the decision boundary is overly flexible and finds patterns in the data that don't correspond to the Bayes decision boundary. This corresponds to a classifier that has low bias but very high variance. As K grows, the method becomes less flexible and produces a decision boundary that is close to linear. This corresponds to a low-variance but high-bias classifier. On this simulated data set, neither K = 1 nor K = 100 gives good predictions: they have test error rates of 0.1695 and 0.1925, respectively.

Just as in the regression setting, there is not a strong relationship between the training error rate and the test error rate. With K = 1, the KNN training error rate is 0, but the test error rate may be quite high. In general, as we use more flexible classification methods, the training error rate will decline but the test error rate may not. In Figure 2.17, we have plotted the KNN test and training errors as a function of 1/K. As 1/K increases, the method becomes more flexible. As in the regression setting, the training error rate consistently declines as the flexibility increases. However, the test error exhibits a characteristic U-shape, declining at first (with a minimum at approximately K = 10) before increasing again when the method becomes excessively flexible and overfits.
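The behaviour of the training and test error rates described above can be traced out by refitting KNN over a grid of K values. The following sketch does this on simulated two-class data (the data-generating model and sample sizes are arbitrary), again using class::knn() as one possible implementation.

# Training and test error rates for KNN over a grid of K, on simulated two-class data.
library(class)
set.seed(9)
make_data <- function(n) {
  x <- matrix(rnorm(2 * n), ncol = 2)
  y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(n, sd = 1.2) > 0, "orange", "blue"))
  list(x = x, y = y)
}
train <- make_data(200); test <- make_data(5000)
for (K in c(1, 5, 10, 25, 50, 100)) {
  train_err <- mean(knn(train$x, train$x, train$y, k = K) != train$y)
  test_err  <- mean(knn(train$x, test$x,  train$y, k = K) != test$y)
  cat("K =", K, " train error =", round(train_err, 3),
      " test error =", round(test_err, 3), "\n")
}
# The training error rate is 0 at K = 1 and grows with K; the test error rate typically traces a U-shape in 1/K.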

FIGURE 2.15. The black curve indicates the KNN decision boundary on the data from Figure 2.13, using K = 10. The Bayes decision boundary is shown as a purple dashed line. The KNN and Bayes decision boundaries are very similar.

FIGURE 2.16. A comparison of the KNN decision boundaries (solid black curves) obtained using K = 1 and K = 100 on the data from Figure 2.13. With K = 1, the decision boundary is overly flexible, while with K = 100 it is not sufficiently flexible. The Bayes decision boundary is shown as a purple dashed line.

FIGURE 2.17. The KNN training error rate (blue, 200 observations) and test error rate (orange, 5,000 observations) on the data from Figure 2.13, as the level of flexibility (assessed using 1/K) increases, or equivalently as the number of neighbors K decreases. The black dashed line indicates the Bayes error rate. The jumpiness of the curves is due to the small size of the training data set.

In both the regression and classification settings, choosing the correct level of flexibility is critical to the success of any statistical learning method. The bias-variance tradeoff, and the resulting U-shape in the test error, can make this a difficult task. In Chapter 5, we return to this topic and discuss various methods for estimating test error rates and thereby choosing the optimal level of flexibility for a given statistical learning method.

2.3 Lab: Introduction to R

In this lab, we will introduce some simple R commands. The best way to learn a new language is to try out the commands. R can be downloaded from

http://cran.r-project.org/

2.3.1 Basic Commands

R uses functions to perform operations. To run a function called funcname, we type funcname(input1, input2), where the inputs (or arguments) input1 and input2 tell R how to run the function.

A function can have any number of inputs. For example, to create a vector of numbers, we use the function c() (for concatenate). Any numbers inside the parentheses are joined together. The following command instructs R to join together the numbers 1, 3, 2, and 5, and to save them as a vector named x. When we type x, it gives us back the vector.

> x <- c(1,3,2,5)
> x
[1] 1 3 2 5

Note that the > is not part of the command; rather, it is printed by R to indicate that it is ready for another command to be entered. We can also save things using = rather than <-:

> x = c(1,6,2)
> x
[1] 1 6 2
> y = c(1,4,3)

Hitting the up arrow multiple times will display the previous commands, which can then be edited. This is useful since one often wishes to repeat a similar command. In addition, typing ?funcname will always cause R to open a new help file window with additional information about the function funcname.

We can tell R to add two sets of numbers together. It will then add the first number from x to the first number from y, and so on. However, x and y should be the same length. We can check their length using the length() function.

> length(x)
[1] 3
> length(y)
[1] 3
> x+y
[1] 2 10 5

The ls() function allows us to look at a list of all of the objects, such as data and functions, that we have saved so far. The rm() function can be used to delete any that we don't want.

> ls()
[1] "x" "y"
> rm(x,y)
> ls()
character(0)

It's also possible to remove all objects at once:

> rm(list=ls())

The matrix() function can be used to create a matrix of numbers. Before we use the matrix() function, we can learn more about it:

> ?matrix

The help file reveals that the matrix() function takes a number of inputs, but for now we focus on the first three: the data (the entries in the matrix), the number of rows, and the number of columns. First, we create a simple matrix.

> x=matrix(data=c(1,2,3,4), nrow=2, ncol=2)
> x
     [,1] [,2]
[1,]    1    3
[2,]    2    4

Note that we could just as well omit typing data=, nrow=, and ncol= in the matrix() command above: that is, we could just type

> x=matrix(c(1,2,3,4),2,2)

and this would have the same effect. However, it can sometimes be useful to specify the names of the arguments passed in, since otherwise R will assume that the function arguments are passed into the function in the same order that is given in the function's help file. As this example illustrates, by default R creates matrices by successively filling in columns. Alternatively, the byrow=TRUE option can be used to populate the matrix in order of the rows.

> matrix(c(1,2,3,4),2,2,byrow=TRUE)
     [,1] [,2]
[1,]    1    2
[2,]    3    4

Notice that in the above command we did not assign the matrix to a value such as x. In this case the matrix is printed to the screen but is not saved for future calculations. The sqrt() function returns the square root of each element of a vector or matrix. The command x^2 raises each element of x to the power 2; any powers are possible, including fractional or negative powers.

> sqrt(x)
     [,1] [,2]
[1,] 1.00 1.73
[2,] 1.41 2.00
> x^2
     [,1] [,2]
[1,]    1    9
[2,]    4   16

The rnorm() function generates a vector of random normal variables, with first argument n the sample size. Each time we call this function, we will get a different answer. Here we create two correlated sets of numbers, x and y, and use the cor() function to compute the correlation between them.

60 2.3 Lab: Introduction to R 45 > x=rnorm(50) > y=x+rnorm(50,mean=50,sd=.1) > cor(x,y) [1] 0.995 rnorm() By default, creates standard normal random variables with a mean of 0 and a standard deviation of 1. However, the mean and standard devi- and sd arguments, as illustrated above. mean ation can be altered using the Sometimes we want our code to reproduce the exact same set of random numbers; we can use the set.seed() function to do this. The set.seed() set.seed() function takes an (arbitrary) integer argument. > set.seed(1303) > rnorm(50) [1] -1.1440 1.3421 2.1854 0.5364 0.0632 0.5022 - 0.0004 ... We use throughout the labs whenever we perform calculations set.seed() involving random quantities. In general this should allow the user to re- produce our results. However, it should be noted that as new versions of R become available it is possible that some small discrepancies may form R . between the book and the output from The mean() and var() functions can be used to compute the mean and mean() var() variance of a vector of numbers. Applying sqrt() to the output of var() will give the standard deviation. Or we can simply use the function. sd() sd() > set.seed(3) > y=rnorm(100) > mean(y) [1] 0.0110 > var(y) [1] 0.7329 > sqrt(var(y)) [1] 0.8561 > sd(y) [1] 0.8561 2.3.2 Graphics The plot() function is the primary way to plot data in R . For instance, plot() plot(x,y) produces a scatterplot of the numbers in x versus the numbers . There are many additional options that can be passed in to the plot() y in function. For example, passing in the argument xlab will result in a label function, plot() -axis. To find out more information about the on the x type ?plot . > x=rnorm(100) > y=rnorm(100) > plot(x,y) > plot(x,y,xlab="this is the x-axis",ylab="this is the y-axis", main="Plot of X vs Y")
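Returning briefly to the random-number functions above, here is a supplementary check (not from the original lab) that set.seed() makes results reproducible and that sd() agrees with sqrt(var()):

> set.seed(3)
> y1=rnorm(100)
> set.seed(3)
> y2=rnorm(100)
> identical(y1,y2)                   # TRUE: the same seed gives the same draws
[1] TRUE
> all.equal(sd(y1),sqrt(var(y1)))    # the two ways of computing the standard deviation agree
[1] TRUE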

61 46 2. Statistical Learning R plot. The command that we We will often want to save the output of an use to do this will depend on the file type that we would like to create. For instance, to create a pdf, we use the pdf() function, and to create a jpeg, pdf() function. jpeg() we use the jpeg() > pdf("Figure.pdf") > plot(x,y,col="green") > dev.off() null device 1 The function dev.off() indicates to R that we are done creating the plot. dev.off() Alternatively, we can simply copy the plot window and paste it into an appropriate file type, such as a Word document. seq() can be used to create a sequence of numbers. For The function seq() seq(a,b) a and b .Thereare instance, makes a vector of integers between seq(0,1,length=10) makes a sequence of many other options: for instance, numbers that are equally spaced between 10 and 1 . Typing 3:11 is a 0 shorthand for seq(3,11) for integer arguments. > x=seq(1,10) >x [1]12345678910 > x=1:10 >x [1]12345678910 > x=seq(-pi,pi,length=50) We will now create some more sophisticated plots. The contour() func- contour() tion produces a contour plot in order to represent three-dimensional data; contour plot it is like a topographical map. It takes three arguments: 1. A vector of the x values (the first dimension), 2. A vector of the y values (the second dimension), and 3. A matrix whose elements correspond to the z value (the third dimen- ) coordinates. x , y sion) for each pair of ( As with the function, there are many other inputs that can be used plot() to fine-tune the output of the contour() function. To learn more about ?contour . these, take a look at the help file by typing >y=x > f=outer(x,y,function(x,y)cos(y)/(1+x^2)) > contour(x,y,f) > contour(x,y,f,nlevels=45,add=T) > fa=(f-t(f))/2 > contour(x,y,fa,nlevels=15) The image() function works the same way as contour() , except that it image() produces a color-coded plot whose colors depend on the z value. This is
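As a supplementary combination of commands introduced on this page (the file name contour.pdf is arbitrary), the contour plot can be written directly to a pdf file instead of the screen:

> pdf("contour.pdf")
> x=seq(-pi,pi,length=50)
> y=x
> f=outer(x,y,function(x,y)cos(y)/(1+x^2))
> contour(x,y,f,nlevels=45)   # drawn into the pdf device rather than a window
> dev.off()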

62 2.3 Lab: Introduction to R 47 heatmap , and is sometimes used to plot temperature in weather known as a heatmap persp() can be used to produce a three-dimensional forecasts. Alternatively, persp() plot. The arguments phi control the angles at which the plot is theta and viewed. > image(x,y,fa) > persp(x,y,fa) > persp(x,y,fa,theta=30) > persp(x,y,fa,theta=30,phi=20) > persp(x,y,fa,theta=30,phi=70) > persp(x,y,fa,theta=30,phi=40) 2.3.3 Indexing Data We often wish to examine part of a set of data. Suppose that our data is stored in the matrix . A > A=matrix(1:16,4,4) >A [,1] [,2] [,3] [,4] [1,] 1 5 9 13 [2,] 2 6 10 14 [3,] 3 7 11 15 [4,] 4 8 12 16 Then, typing >A[2,3] [1] 10 will select the element corresponding to the second row and the third col- umn. The first number after the open-bracket symbol [ always refers to the row, and the second number always refers to the column. We can also select multiple rows and columns at a time, by providing vectors as the indices. > A[c(1,3),c(2,4)] [,1] [,2] [1,] 5 13 [2,] 7 15 > A[1:3,2:4] [,1] [,2] [,3] [1,] 5 9 13 [2,] 6 10 14 [3,] 7 11 15 > A[1:2,] [,1] [,2] [,3] [,4] [1,] 1 5 9 13 [2,] 2 6 10 14 > A[,1:2] [,1] [,2] [1,] 1 5 [2,] 2 6

63 48 2. Statistical Learning [3,] 3 7 [4,] 4 8 The last two examples include either no index for the columns or no index for the rows. These indicate that R should include all columns or all rows, respectively. R treats a single row or column of a matrix as a vector. >A[1,] [1] 1 5 9 13 The use of a negative sign R to keep all rows or columns - in the index tells except those indicated in the index. > A[-c(1,3),] [,1] [,2] [,3] [,4] [1,] 2 6 10 14 [2,] 4 8 12 16 > A[-c(1,3),-c(1,3,4)] [1] 6 8 dim() The function outputs the number of rows followed by the number of dim() columns of a given matrix. > dim(A) [1] 4 4 2.3.4 Loading Data For most analyses, the first step involves importing a data set into R .The function is one of the primary ways to do this. The help file read.table() read.table() contains details about how to use this function. We can use the function write.table() to export data. write. Before attempting to load a data set, we must make sure that R knows table() to search for the data in the proper directory. For example on a Windows system one could select the directory using the ... Change dir option under File menu. However, the details of how to do this depend on the op- the erating system (e.g. Windows, Mac, Unix) that is being used, and so we do not give further details here. We begin by loading in the Auto data set. ISLR library (we discuss libraries in Chapter 3) but This data is part of the to illustrate the read.table() function we load it now from a text file. The and store it as an Auto.data file into R following command will load the object called Auto , in a format referred to as a data frame .(Thetextfile data frame can be obtained from this book’s website.) Once the data has been loaded, fix() function can be used to view it in a spreadsheet like window. the However, the window must be closed before further R commands can be entered. > Auto=read.table("Auto.data") > fix(Auto)

64 2.3 Lab: Introduction to R 49 Note that Auto.data is simply a text file, which you could alternatively open on your computer using a standard text editor. It is often a good idea to view a data set using a text editor or other software such as Excel before . R loading it into t been loaded correctly, because This particular data set has no R has assumed that the variable names are part of the data and so has included them in the first row. The data set also includes a number of missing . Missing values are a common ? observations, indicated by a question mark header=T (or header=TRUE )in occurrence in real data sets. Using the option the function tells read.table() that the first line of the file contains the R variable names, and using the option tells that any time it na.strings R sees a particular character or set of c haracters (such as a question mark), it should be treated as a missing element of the data matrix. > Auto=read.table("Auto.data",header=T,na.strings="?") > fix(Auto) Excel is a common-format data storage program. An easy way to load such data into R is to save it as a csv (comma separated value) file and then use the read.csv() function to load it in. > Auto=read.csv("Auto.csv",header=T,na.strings="?") > fix(Auto) > dim(Auto) [1] 397 9 > Auto[1:4,] dim() function tells us that the data has 397 observations, or rows, and The dim() nine variables, or columns. There are various ways to deal with the missing data. In this case, only five of the rows contain missing observations, and na.omit() function to simply remove these rows. so we choose to use the na.omit() > Auto=na.omit(Auto) > dim(Auto) [1] 392 9 Once the data are loaded correctly, we can use names() to check the names() variable names. > names(Auto) displacement" " horsepower" [1] "mpg" "cylinders" " [5] "weight" " acceleration" " year" "origin" [9] "name" 2.3.5 Additional Graphical and Numerical Summaries We can use the plot() function to produce scatterplots of the quantitative scatterplot variables. However, simply typing the variable names will produce an error R does not know to look in the Auto data set for those message, because variables.
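A compact supplementary variant of the loading steps above; it assumes Auto.csv is in the working directory, and uses is.na(), which is not introduced in the lab, to count the missing entries before dropping them:

> Auto=read.csv("Auto.csv",header=T,na.strings="?")
> dim(Auto)          # 397 rows and nine columns before cleaning
> sum(is.na(Auto))   # total number of missing entries
> Auto=na.omit(Auto)
> dim(Auto)          # 392 rows remain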

65 50 2. Statistical Learning > plot(cylinders, mpg) Error in plot(cylinders, mpg) : object ’cylinders’ not found To refer to a variable, we must type the data set and the variable name joined with a symbol. Alternatively, we can use the function in attach() $ attach() order to tell R to make the variables in this data frame available by name. > plot(Auto$cylinders , Auto$mpg) > attach(Auto) > plot(cylinders, mpg) cylinders variable is stored as a numeric vector, so R has treated it The as quantitative. However, since there are only a small number of possible values for , one may prefer to treat it as a qualitative variable. cylinders as.factor() function converts quantitative variables into qualitative The as.factor() variables. > cylinders=as.factor(cylinders) If the variable plotted on the x -axis is categorial, then boxplots will boxplot automatically be produced by the function. As usual, a number plot() of options can be specified in order to customize the plots. > plot(cylinders, mpg) > plot(cylinders, mpg, col="red") > plot(cylinders, mpg, col="red", varwidth=T) > plot(cylinders, mpg, col="red", varwidth=T,horizontal =T) > plot(cylinders, mpg, col="red", varwidth=T, xlab="cylinders", ylab="MPG") The hist() function can be used to plot a histogram .Notethat col=2 hist() col="red" . has the same effect as histogram > hist(mpg) > hist(mpg,col=2) > hist(mpg,col=2,breaks=15) pairs() function creates a scatterplot matrix i.e. a scatterplot for every The scatterplot pair of variables for any given data set. We can also produce scatterplots matrix for just a subset of the variables. > pairs(Auto) > pairs( ∼ mpg + displacement + horsepower + weight + acceleration , Auto) In conjunction with the plot() function, identify() provides a useful identify() interactive method for identifying the value for a particular variable for identify() :the x -axis points on a plot. We pass in three arguments to variable, the y -axis variable, and the variable whose values we would like to see printed for each point. Then clicking on a given point in the plot R to print the value of the variable of interest. Right-clicking on will cause the plot will exit the identify() function (control-click on a Mac). The identify() function correspond to the rows for numbers printed under the the selected points.

66 2.3 Lab: Introduction to R 51 > plot(horsepower ,mpg) > identify(horsepower ,mpg,name) function produces a numerical summary of each variable in The summary() summary() a particular data set. > summary(Auto) displacement mpg cylinders Min. : 9.00 Min. :3.000 Min. : 68.0 1st Qu.:17.00 1st Qu.:4.000 1st Qu.:105.0 Median :22.75 Median :4.000 Median :151.0 Mean :23.45 Mean :5.472 Mean :194.4 3rd Qu.:29.00 3rd Qu.:8.000 3rd Qu.:275.8 Max. :46.60 Max. :8.000 Max. :455.0 horsepower weight acceleration Min. : 46.0 Min. :1613 Min. : 8.00 1st Qu.: 75.0 1st Qu.:2225 1st Qu.:13.78 Median : 93.5 Median :2804 Median :15.50 Mean :104.5 Mean :2978 Mean :15.54 3rd Qu.:126.0 3rd Qu.:3615 3rd Qu.:17.02 Max. :230.0 Max. :5140 Max. :24.80 year origin name Min. :70.00 Min. :1.000 amc matador : 5 1st Qu.:73.00 1st Qu.:1.000 ford pinto : 5 Median :76.00 Median :1.000 toyota corolla : 5 Mean :75.98 Mean :1.577 amc gremlin : 4 3rd Qu.:79.00 3rd Qu.:2.000 amc hornet : 4 Max. :82.00 Max. :3.000 chevrolet chevette: 4 (Other) :365 For qualitative variables such as name , R will list the number of observations that fall in each category. We can also produce a summary of just a single variable. > summary(mpg) Min. 1st Qu. Median Mean 3rd Qu. Max. 9.00 17.00 22.75 23.45 29.00 46.60 R ,wetype q() inordertoshutitdown,or Once we have finished using q() quit. When exiting R , we have the option to save the current workspace so workspace that all objects (such as data sets) that we have created in this R session , we may want to save a record R will be available next time. Before exiting of all of the commands that we typed in the most recent session; this can , savehistory() function.Nexttimeweenter R be accomplished using the savehistory( ) we can load that history using the loadhistory() function. loadhistory( )
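Before moving on to the exercises, here is a short supplementary sketch that pulls together several of the plotting commands from this section. It assumes the cleaned Auto data frame is still loaded and attached; the output file name is arbitrary.

> cylinders=as.factor(cylinders)    # treat cylinders as qualitative
> pdf("Auto_plots.pdf")
> plot(cylinders,mpg,col="red",varwidth=T,xlab="cylinders",ylab="MPG")   # boxplots
> hist(mpg,col=2,breaks=15)
> pairs(~ mpg + horsepower + weight, Auto)
> dev.off()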

67 52 2. Statistical Learning 2.4 Exercises Conceptual 1. For each of parts (a) through (d), indicate whether we would generally expect the performance of a flexible statistical learning method to be better or worse than an inflexible method. Justify your answer. (a) The sample size n is extremely large, and the number of predic- tors p is small. p is extremely large, and the number (b) The number of predictors n of observations is small. (c) The relationship between the predictors and response is highly non-linear. 2 =Var( ), is extremely  (d) The variance of the error terms, i.e. σ high. 2. Explain whether each scenario is a classification or regression prob- lem, and indicate whether we are most interested in inference or pre- diction. Finally, provide n and p . (a) We collect a set of data on the top 500 firms in the US. For each firm we record profit, number of employees, industry and the CEO salary. We are interested in understanding which factors affect CEO salary. (b) We are considering launching a new product and wish to know success or a whether it will be a . We collect data on 20 failure similar products that were previously launched. For each prod- uct we have recorded whether it was a success or failure, price charged for the product, marketing budget, competition price, and ten other variables. (c) We are interesting in predicting the % change in the US dollar in relation to the weekly changes in the world stock markets. Hence we collect weekly data for all of 2012. For each week we record the % change in the dollar, the % change in the US market, the % change in the British market, and the % change in the German market. 3. We now revisit the bias-variance decomposition. (a) Provide a sketch of typical (squared) bias, variance, training er- ror, test error, and Bayes (or irreducible) error curves, on a sin- gle plot, as we go from less flexible statistical learning methods towards more flexible approaches. The x -axis should represent

the amount of flexibility in the method, and the y-axis should represent the values for each curve. There should be five curves. Make sure to label each one.

(b) Explain why each of the five curves has the shape displayed in part (a).

4. You will now think of some real-life applications for statistical learning.

(a) Describe three real-life applications in which classification might be useful. Describe the response, as well as the predictors. Is the goal of each application inference or prediction? Explain your answer.

(b) Describe three real-life applications in which regression might be useful. Describe the response, as well as the predictors. Is the goal of each application inference or prediction? Explain your answer.

(c) Describe three real-life applications in which cluster analysis might be useful.

5. What are the advantages and disadvantages of a very flexible (versus a less flexible) approach for regression or classification? Under what circumstances might a more flexible approach be preferred to a less flexible approach? When might a less flexible approach be preferred?

6. Describe the differences between a parametric and a non-parametric statistical learning approach. What are the advantages of a parametric approach to regression or classification (as opposed to a non-parametric approach)? What are its disadvantages?

7. The table below provides a training data set containing six observations, three predictors, and one qualitative response variable.

   Obs.   X1   X2   X3   Y
   1       0    3    0   Red
   2       2    0    0   Red
   3       0    1    3   Red
   4       0    1    2   Green
   5      -1    0    1   Green
   6       1    1    1   Red

Suppose we wish to use this data set to make a prediction for Y when X1 = X2 = X3 = 0 using K-nearest neighbors.

(a) Compute the Euclidean distance between each observation and the test point, X1 = X2 = X3 = 0.

69 54 2. Statistical Learning K =1?Why? (b) What is our prediction with =3?Why? (c) What is our prediction with K (d) If the Bayes decision boundary in this problem is highly non- linear, then would we expect the best value for K to be large or small? Why? Applied data set, which can be found in College 8. This exercise relates to the College.csv . It contains a number of variables for 777 different the file universities and colleges in the US. The variables are • : Public/private indicator Private • Apps : Number of applications received Accept : Number of applicants accepted • • Enroll : Number of new students enrolled • Top10perc : New students from top 10 % of high school class • : New students from top 25 % of high school class Top25perc • F.Undergrad : Number of full-time undergraduates P.Undergrad : Number of part-time undergraduates • • Outstate : Out-of-state tuition • Room.Board : Room and board costs • Books : Estimated book costs Personal : Estimated personal spending • • PhD : Percent of faculty with Ph.D.’s • Terminal : Percent of faculty with terminal degree • S.F.Ratio : Student/faculty ratio perc.alumni : Percent of alumni who donate • • Expend : Instructional expenditure per student • Grad.Rate : Graduation rate Before reading the data into R , it can be viewed in Excel or a text editor. (a) Use the function to read the data into R .Callthe read.csv() loaded data college . Make sure that you have the directory set to the correct location for the data. fix() function. You should notice (b) Look at the data using the that the first column is just the name of each university. We don’t R to treat this as data. However, it may be handy to really want have these names for later. T ry the following commands:

70 2.4 Exercises 55 > rownames(college)=college[,1] > fix(college) column with the row.names You should see that there is now a R has given name of each university recorded. This means that each row a name corresponding to the appropriate university. R will not try to perform calculations on the row names. However, we still need to eliminate the first column in the data where the names are stored. Try > college=college[,-1] > fix(college) Now you should see that the first data column is Private .Note row.names now appears before the that another column labeled column. However, this is not a data column but rather Private thenamethat R is giving to each row. summary() function to produce a numerical summary (c) i. Use the of the variables in the data set. pairs() function to produce a scatterplot matrix of ii. Use the the first ten columns or variables of the data. Recall that you can reference the first ten columns of a matrix A using A[,1:10] . function to produce side-by-side boxplots of plot() iii. Use the Outstate Private . versus iv. Create a new qualitative variable, called Elite binning ,by the variable. We are going to divide universities Top10perc into two groups based on whether or not the proportion of students coming from the top 10 % of their high school classes exceeds 50 %. > Elite=rep("No",nrow(college)) > Elite[college$Top10perc >50]="Yes" > Elite=as.factor(Elite) > college=data.frame( college , Elite) Use the summary() function to see how many elite univer- sities there are. Now use the function to produce plot() side-by-side boxplots of Outstate versus Elite . v. Use the hist() function to produce some histograms with differing numbers of bins for a few of the quantitative vari- par(mfrow=c(2,2)) useful: ables. You may find the command it will divide the print window into four regions so that four plots can be made simultaneously. Modifying the arguments to this function will divide the screen in other ways. vi. Continue exploring the data, and provide a brief summary of what you discover.

71 56 2. Statistical Learning Auto data set studied in the lab. Make sure 9. This exercise involves the that the missing values have been removed from the data. (a) Which of the predictors are quantitative, and which are quali- tative? range of each quantitative predictor? You can an- (b) What is the swer this using the range() function. range() (c) What is the mean and standard deviation of each quantitative predictor? (d) Now remove the 10th through 85th observations. What is the range, mean, and standard deviation of each predictor in the subset of the data that remains? (e) Using the full data set, investigate the predictors graphically, using scatterplots or other tools of your choice. Create some plots highlighting the relationships among the predictors. Comment on your findings. mpg ) on the basis (f) Suppose that we wish to predict gas mileage ( of the other variables. Do your plots suggest that any of the mpg ? Justify your other variables might be useful in predicting answer. 10. This exercise involves the Boston housing data set. (a) To begin, load in the Boston Boston data set is data set. The part of the MASS in R . library > library(MASS) Now the data set is contained in the object Boston . > Boston Read about the data set: > ?Boston How many rows are in this data set? How many columns? What do the rows and columns represent? (b) Make some pairwise scatterplots of the predictors (columns) in this data set. Describe your findings. (c) Are any of the predictors associated with per capita crime rate? If so, explain the relationship. (d) Do any of the suburbs of Boston appear to have particularly high crime rates? Tax rates? Pupil-teacher ratios? Comment on the range of each predictor. (e) How many of the suburbs in this data set bound the Charles river?

72 2.4 Exercises 57 (f) What is the median pupil-teacher ratio among the towns in this data set? (g) Which suburb of Boston has lowest median value of owner- occupied homes? What are the values of the other predictors for that suburb, and how do those values compare to the overall ranges for those predictors? Comment on your findings. (h) In this data set, how many of the suburbs average more than seven rooms per dwelling? More than eight rooms per dwelling? Comment on the suburbs that average more than eight rooms per dwelling.


74 3 Linear Regression linear regression This chapter is about , a very simple approach for supervised learning. In particular, linear regression is a useful tool for pre- dicting a quantitative response. L inear regression has been around for a long time and is the topic of innumerable textbooks. Though it may seem somewhat dull compared to some of the more modern statistical learning approaches described in later chapters of this book, linear regression is still a useful and widely used statistical l earning method. Moreover, it serves as a good jumping-off point for newer approaches: as we will see in later chapters, many fancy statistical learning approaches can be seen as gener- alizations or extensions of linear regression. Consequently, the importance of having a good understanding of linear regression before studying more complex learning methods cannot be ove rstated. In this chapter, we review some of the key ideas underlying the linear regression model, as well as the least squares approach that is most commonly used to fit this model. sales Advertising data from Chapter 2. Figure 2.1 displays Recall the (in thousands of units) for a particular product as a function of advertis- ing budgets (in thousands of dollars) for TV , radio ,and media. newspaper Suppose that in our role as statistical consultants we are asked to suggest, on the basis of this data, a marketing plan for next year that will result in high product sales. What information would be useful in order to provide such a recommendation? Here are a few important questions that we might seek to address: Is there a relationship between advertising budget and sales? 1. Our first goal should be to determine whether the data provide G. James et al., An Introduction to Statistical Learning: with Applications in R 59 , 3, Springer Texts in Statistics 103, DOI 10.1007/978-1-4614-7138-7 © Springer Science+Business Media New York 2013

75 60 3. Linear Regression vertising expenditure and sales. evidence of an association between ad If the evidence is weak, then one might argue that no money should be spent on advertising! How strong is the relationship between advertising budget and sales? 2. Assuming that there is a relationship between advertising and sales, we would like to know the strength of this relationship. In other words, given a certain advertising budget, can we predict sales with a high level of accuracy? This wou ld be a strong relationship. Or is a prediction of sales based on advertising expenditure only slightly better than a random guess? This would be a weak relationship. Which media contribute to sales? 3. Do all three media—TV, radio, and newspaper—contribute to sales, or do just one or two of the media cont ribute? To answer this question, we must find a way to separate out the individual effects of each medium when we have spent money on all three media. How accurately can we estimate the effect of each medium on sales? 4. For every dollar spent on advertising in a particular medium, by what amount will sales increase? How accurately can we predict this amount of increase? 5. How accurately can we predict future sales? For any given level of television, radio, or newspaper advertising, what is our prediction for sales, and what is the accuracy of this prediction? 6. Is the relationship linear? If there is approximately a straight-line relationship between advertis- ing expenditure in the various media and sales, then linear regression is an appropriate tool. If not, then it may still be possible to trans- form the predictor or the respons e so that linear regression can be used. Is there synergy among the advertising media? 7. , 000 on television advertising and $50 , 000 on Perhaps spending $50 radio advertising results in more sales than allocating $100 , 000 to either television or radio individually. In marketing, this is known as synergy effect, while in statistics it is called an interaction effect. a synergy interaction It turns out that linear regression can be used to answer each of these questions. We will first discuss all of these questions in a general context, and then return to them in this s pecific context in Section 3.4.

3.1 Simple Linear Regression

Simple linear regression lives up to its name: it is a very straightforward approach for predicting a quantitative response Y on the basis of a single predictor variable X. It assumes that there is approximately a linear relationship between X and Y. Mathematically, we can write this linear relationship as

\[ Y \approx \beta_0 + \beta_1 X. \qquad (3.1) \]

You might read "≈" as "is approximately modeled as". We will sometimes describe (3.1) by saying that we are regressing Y on X (or Y onto X). For example, X may represent TV advertising and Y may represent sales. Then we can regress sales onto TV by fitting the model

\[ \texttt{sales} \approx \beta_0 + \beta_1 \times \texttt{TV}. \]

In Equation 3.1, β0 and β1 are two unknown constants that represent the intercept and slope terms in the linear model. Together, β0 and β1 are known as the model coefficients or parameters. Once we have used our training data to produce estimates β̂0 and β̂1 for the model coefficients, we can predict future sales on the basis of a particular value of TV advertising by computing

\[ \hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x, \qquad (3.2) \]

where ŷ indicates a prediction of Y on the basis of X = x. Here we use a hat symbol, ^, to denote the estimated value for an unknown parameter or coefficient, or to denote the predicted value of the response.

3.1.1 Estimating the Coefficients

In practice, β0 and β1 are unknown. So before we can use (3.1) to make predictions, we must use data to estimate the coefficients. Let

\[ (x_1, y_1),\ (x_2, y_2),\ \ldots,\ (x_n, y_n) \]

represent n observation pairs, each of which consists of a measurement of X and a measurement of Y. In the Advertising example, this data set consists of the TV advertising budget and product sales in n = 200 different markets. (Recall that the data are displayed in Figure 2.1.) Our goal is to obtain coefficient estimates β̂0 and β̂1 such that the linear model (3.1) fits the available data well, that is, so that yi ≈ β̂0 + β̂1 xi for i = 1, ..., n. In other words, we want to find an intercept β̂0 and a slope β̂1 such that the resulting line is as close as possible to the n = 200 data points. There are a number of ways of measuring closeness. However, by far the most common approach involves minimizing the least squares criterion, and we take that approach in this chapter. Alternative approaches will be considered in Chapter 6.
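Fitting (3.1) in R takes a single call to lm(); the lab at the end of this chapter covers this in detail. The sketch below is only a hedged preview: it assumes the Advertising data have been downloaded from the book's website and saved as Advertising.csv, with columns named TV and sales (the file name and column names are assumptions here).

> Advertising=read.csv("Advertising.csv")
> lm.fit=lm(sales~TV,data=Advertising)     # fit sales ~ beta0 + beta1*TV
> coef(lm.fit)                             # the estimates of beta0 and beta1
> predict(lm.fit,data.frame(TV=100))       # the prediction (3.2) at TV = 100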

[Figure 3.1 appears here: a scatterplot of sales (vertical axis) against TV (horizontal axis) with the least squares line and vertical error segments.]

FIGURE 3.1. For the Advertising data, the least squares fit for the regression of sales onto TV is shown. The fit is found by minimizing the sum of squared errors. Each grey line segment represents an error, and the fit makes a compromise by averaging their squares. In this case a linear fit captures the essence of the relationship, although it is somewhat deficient in the left of the plot.

Let ŷi = β̂0 + β̂1 xi be the prediction for Y based on the ith value of X. Then ei = yi - ŷi represents the ith residual; this is the difference between the ith observed response value and the ith response value that is predicted by our linear model. We define the residual sum of squares (RSS) as

\[ \mathrm{RSS} = e_1^2 + e_2^2 + \cdots + e_n^2, \]

or equivalently as

\[ \mathrm{RSS} = (y_1 - \hat{\beta}_0 - \hat{\beta}_1 x_1)^2 + (y_2 - \hat{\beta}_0 - \hat{\beta}_1 x_2)^2 + \cdots + (y_n - \hat{\beta}_0 - \hat{\beta}_1 x_n)^2. \qquad (3.3) \]

The least squares approach chooses β̂0 and β̂1 to minimize the RSS. Using some calculus, one can show that the minimizers are

\[ \hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}, \qquad (3.4) \]

where ȳ ≡ (1/n) Σ yi and x̄ ≡ (1/n) Σ xi are the sample means. In other words, (3.4) defines the least squares coefficient estimates for simple linear regression.

Figure 3.1 displays the simple linear regression fit to the Advertising data, where β̂0 = 7.03 and β̂1 = 0.0475. In other words, according to

78 3.1 Simple Linear Regression 63 3 3 2.5 0.06 2.15 0.05 1 β RSS 2.2 0.04 2.3 β 1 3 0.03 3 β 0 56789 β 0 Contour and three-dimensional plots of the RSS on the FIGURE 3.2. Advertising data, using sales as the response and TV as the predictor. The ˆ ˆ and β , given by (3.4). β red dots correspond to the least squares estimates 1 0 , 000 spent on TV advertising is asso- this approximation, an additional $1 ciated with selling approximately 47 . 5 additional units of the product. In , β and β Figure 3.2, we have computed RSS for a number of values of 0 1 using the advertising data with as the response and TV as the predic- sales ts the pair of least squares estimates tor. In each plot, the red dot represen ˆ ˆ , β ) given by (3.4). These values clearly minimize the RSS. β ( 0 1 3.1.2 Assessing the Accuracy of the Coefficient Estimates Recall from (2.1) that we assume that the true relationship between X and Y takes the form = f ( X )+  for some unknown function f ,where  Y ndom error term. If is to be approximated by a linear is a mean-zero ra f function, then we can write this relationship as + . (3.5) + β X = β Y 1 0 β Here is the intercept term—that is, the expected value of Y when X =0, 0 and β is the slope—the average increase in Y associated with a one-unit 1 . The error term is a catch-all for what we miss with this X increase in simple model: the true relationship is probably not linear, there may be other variables that cause variation in Y , and there may be measurement X . error. We typically assume that the error term is independent of population regression line ,which The model given by (3.5) defines the population and is the best linear approximation to the true relationship between X regression 1 line . Y The least squares regression coefficien t estimates (3.4) c haracterize the least squares line (3.2). The left-hand panel of Figure 3.3 displays these least squares line 1 The assumption of linearity is often a useful working model. However, despite what many textbooks might tell us, we seldom believe that the true relationship is linear.

79 64 3. Linear Regression 10 10 5 5 Y Y 0 0 −5 −5 −10 −10 1 2 −2 0 −1 −1 0 1 2 −2 X X A simulated data set. Left: The red line represents the true rela- FIGURE 3.3. f , which is known as the population regression line. The X )=2+3 X tionship, ( f blue line is the least squares line; it is the least squares estimate for X ) based ( on the observed data, shown in black. Right: The population regression line is again shown in red, and the least squares line in dark blue. In light blue, ten least squares lines are shown, each computed on the basis of a separate random set of observations. Each least squares line is different, but on average, the least squares lines are quite close to the population regression line. two lines in a simple simulated example. We created 100 random s, and X generated 100 corresponding Y s from the model =2+3 X + Y (3.6) , where  was generated from a normal dis tribution with mean zero. The red line in the left-hand panel of Figure 3.3 displays the true relationship, f ( X ) = 2+3 X , while the blue line is the least squares estimate based on the observed data. The true relationship is generally not known for real data, but the least squares line can always be computed using the coefficient estimates given in (3.4). In other words, in real applications, we have access to a set of observations from which we can compute the least squares line; however, the population regression line is unobserved. In the right-hand panel of Figure 3.3 we have generated ten different data sets from the model given by (3.6) and plotted the corresponding ten least squares lines. Notice that different d ata sets generated from the same true model result in slightly different least squares lines, but the unobserved population regression line does not change. At first glance, the difference between the population regression line and the least squares line may seem subtle and confusing. We only have one data set, and so what does it mean that two different lines describe the relationship between the predictor and the response? Fundamentally, the

80 3.1 Simple Linear Regression 65 concept of these two lines is a natural extension of the standard statistical approach of using information from a sample to estimate characteristics of a large population. For example, suppose that we are interested in knowing of some random variable the population mean μ is Y μ . Unfortunately, Y ,whichwecan n unknown, but we do have access to observations from . A reasonable ,...,y μ , and which we can use to estimate y write as n 1 ∑ n 1 isthesamplemean.Thesample y estimate is ˆ μ = ̄ y ,where ̄ y = i =1 i n mean and the population mean are different, but in general the sample mean will provide a good estimate of the population mean. In the same and β in linear regression define the β way, the unknown coefficients 1 0 population regression line. We seek to estimate these unknown coefficients ˆ ˆ given in (3.4). These coeffici β and ent estimates define the least using β 0 1 squares line. The analogy between linear regression and estimation of the mean of a bias random variable is an apt one based on the concept of .Ifweusethe bias μ μ , this estimate is unbiased , in the sense that sample mean ˆ to estimate unbiased μ to equal μ . What exactly does this mean? It means on average, we expect ˆ that on the basis of one particular set of observations y might ,...,y μ ,ˆ n 1 μ might μ , and on the basis of another set of observations, ˆ overestimate underestimate μ . But if we could average a huge number of estimates of μ obtained from a huge number of sets of observations, then this average equal μ . Hence, an unbiased estimator does not systematically would exactly over- or under-estimate the true parameter. The property of unbiasedness holds for the least squares coefficient estimates given by (3.4) as well: if and β on the basis of a particular data set, then our we estimate β 1 0 β estimates won’t be exactly equal to and β . But if we could average 1 0 the estimates obtained over a huge n umber of data sets, then the average of these estimates would be spot on! In fact, we can see from the right- hand panel of Figure 3.3 that the average of many least squares lines, each estimated from a separate data set, is pretty close to the true population regression line. We continue the analogy with the estimation of the population mean μ of a random variable Y . A natural question is as follows: how accurate is the sample mean ˆ μ μ ? We have established that the as an estimate of μ μ , but that a average of ˆ ’s over many data sets will be very close to μ single estimate ˆ μ . may be a substantial underestimate or overestimate of How far off will that single estimate of ˆ μ be? In general, we answer this question by computing the standard error of ˆ μ , written as SE(ˆ μ ). We have standard the well-known formula error 2 σ 2 (3.7) , = μ )=SE(ˆ μ Var( ˆ ) n

where σ is the standard deviation of each of the realizations yi of Y.² Roughly speaking, the standard error tells us the average amount that this estimate μ̂ differs from the actual value of μ. Equation 3.7 also tells us how this deviation shrinks with n: the more observations we have, the smaller the standard error of μ̂. In a similar vein, we can wonder how close β̂0 and β̂1 are to the true values β0 and β1. To compute the standard errors associated with β̂0 and β̂1, we use the following formulas:

\[ \mathrm{SE}(\hat{\beta}_0)^2 = \sigma^2 \left[ \frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \right], \qquad \mathrm{SE}(\hat{\beta}_1)^2 = \frac{\sigma^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}, \qquad (3.8) \]

where σ² = Var(ε). For these formulas to be strictly valid, we need to assume that the errors εi for each observation are uncorrelated with common variance σ². This is clearly not true in Figure 3.1, but the formula still turns out to be a good approximation. Notice in the formula that SE(β̂1) is smaller when the xi are more spread out; intuitively we have more leverage to estimate a slope when this is the case. We also see that SE(β̂0) would be the same as SE(μ̂) if x̄ were zero (in which case β̂0 would be equal to ȳ). In general, σ² is not known, but can be estimated from the data. This estimate is known as the residual standard error, and is given by the formula $\mathrm{RSE} = \sqrt{\mathrm{RSS}/(n-2)}$. Strictly speaking, when σ² is estimated from the data we should write $\widehat{\mathrm{SE}}(\hat{\beta}_1)$ to indicate that an estimate has been made, but for simplicity of notation we will drop this extra "hat".

Standard errors can be used to compute confidence intervals. A 95 % confidence interval is defined as a range of values such that with 95 % probability, the range will contain the true unknown value of the parameter. The range is defined in terms of lower and upper limits computed from the sample of data. For linear regression, the 95 % confidence interval for β1 approximately takes the form

\[ \hat{\beta}_1 \pm 2 \cdot \mathrm{SE}(\hat{\beta}_1). \qquad (3.9) \]

That is, there is approximately a 95 % chance that the interval

\[ \left[ \hat{\beta}_1 - 2 \cdot \mathrm{SE}(\hat{\beta}_1),\ \hat{\beta}_1 + 2 \cdot \mathrm{SE}(\hat{\beta}_1) \right] \qquad (3.10) \]

will contain the true value of β1.³ Similarly, a confidence interval for β0 approximately takes the form

\[ \hat{\beta}_0 \pm 2 \cdot \mathrm{SE}(\hat{\beta}_0). \qquad (3.11) \]

² This formula holds provided that the n observations are uncorrelated.
³ Approximately for several reasons. Equation 3.10 relies on the assumption that the errors are Gaussian. Also, the factor of 2 in front of the SE(β̂1) term will vary slightly depending on the number of observations n in the linear regression. To be precise, rather than the number 2, (3.10) should contain the 97.5 % quantile of a t-distribution with n - 2 degrees of freedom. Details of how to compute the 95 % confidence interval precisely in R will be provided later in this chapter.
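As a hedged numerical check of (3.4) and (3.8) on simulated data (a sketch, not the book's analysis; all object names are arbitrary), the closed-form estimates and the standard error of the slope can be computed directly and compared with the output of lm():

> set.seed(1)
> x=rnorm(100)
> y=7+0.05*x+rnorm(100)
> b1=sum((x-mean(x))*(y-mean(y)))/sum((x-mean(x))^2)   # beta1-hat from (3.4)
> b0=mean(y)-b1*mean(x)                                # beta0-hat from (3.4)
> rse=sqrt(sum((y-b0-b1*x)^2)/(length(x)-2))           # estimate of sigma
> se.b1=rse/sqrt(sum((x-mean(x))^2))                   # SE(beta1-hat) from (3.8)
> c(b0,b1,se.b1)
> summary(lm(y~x))$coefficients    # the same estimates and standard errors from lm()
> confint(lm(y~x))                 # approximate 95 % intervals, as in (3.9)-(3.11)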

In the case of the advertising data, the 95 % confidence interval for β0 is [6.130, 7.935] and the 95 % confidence interval for β1 is [0.042, 0.053]. Therefore, we can conclude that in the absence of any advertising, sales will, on average, fall somewhere between 6,130 and 7,935 units. Furthermore, for each $1,000 increase in television advertising, there will be an average increase in sales of between 42 and 53 units.

Standard errors can also be used to perform hypothesis tests on the coefficients. The most common hypothesis test involves testing the null hypothesis of

H0: There is no relationship between X and Y    (3.12)

versus the alternative hypothesis

Ha: There is some relationship between X and Y.    (3.13)

Mathematically, this corresponds to testing

H0: β1 = 0

versus

Ha: β1 ≠ 0,

since if β1 = 0 then the model (3.5) reduces to Y = β0 + ε, and X is not associated with Y. To test the null hypothesis, we need to determine whether β̂1, our estimate for β1, is sufficiently far from zero that we can be confident that β1 is non-zero. How far is far enough? This of course depends on the accuracy of β̂1, that is, on SE(β̂1). If SE(β̂1) is small, then even relatively small values of β̂1 may provide strong evidence that β1 ≠ 0, and hence that there is a relationship between X and Y. In contrast, if SE(β̂1) is large, then β̂1 must be large in absolute value in order for us to reject the null hypothesis. In practice, we compute a t-statistic, given by

\[ t = \frac{\hat{\beta}_1 - 0}{\mathrm{SE}(\hat{\beta}_1)}, \qquad (3.14) \]

which measures the number of standard deviations that β̂1 is away from 0. If there really is no relationship between X and Y, then we expect that (3.14) will have a t-distribution with n - 2 degrees of freedom. The t-distribution has a bell shape and for values of n greater than approximately 30 it is quite similar to the normal distribution. Consequently, it is a simple matter to compute the probability of observing any value equal to |t| or larger, assuming β1 = 0. We call this probability the p-value. Roughly speaking, we interpret the p-value as follows: a small p-value indicates that it is unlikely to observe such a substantial association between the predictor and the response due to chance, in the absence of any real association between the predictor and the response. Hence, if we see a small p-value,

then we can infer that there is an association between the predictor and the response. We reject the null hypothesis, that is, we declare a relationship to exist between X and Y, if the p-value is small enough. Typical p-value cutoffs for rejecting the null hypothesis are 5 or 1 %. When n = 30, these correspond to t-statistics (3.14) of around 2 and 2.75, respectively.

             Coefficient   Std. error   t-statistic   p-value
Intercept       7.0325       0.4578        15.36      < 0.0001
TV              0.0475       0.0027        17.67      < 0.0001

TABLE 3.1. For the Advertising data, coefficients of the least squares model for the regression of number of units sold on TV advertising budget. An increase of $1,000 in the TV advertising budget is associated with an increase in sales by around 50 units (Recall that the sales variable is in thousands of units, and the TV variable is in thousands of dollars).

Table 3.1 provides details of the least squares model for the regression of number of units sold on TV advertising budget for the Advertising data. Notice that the coefficients for β̂0 and β̂1 are very large relative to their standard errors, so the t-statistics are also large; the probabilities of seeing such values if H0 is true are virtually zero. Hence we can conclude that β0 ≠ 0 and β1 ≠ 0.⁴

3.1.3 Assessing the Accuracy of the Model

Once we have rejected the null hypothesis (3.12) in favor of the alternative hypothesis (3.13), it is natural to want to quantify the extent to which the model fits the data. The quality of a linear regression fit is typically assessed using two related quantities: the residual standard error (RSE) and the R² statistic.

Table 3.2 displays the RSE, the R² statistic, and the F-statistic (to be described in Section 3.2.2) for the linear regression of number of units sold on TV advertising budget.

Residual Standard Error

Recall from the model (3.5) that associated with each observation is an error term ε. Due to the presence of these error terms, even if we knew the true regression line (i.e. even if β0 and β1 were known), we would not be able to perfectly predict Y from X. The RSE is an estimate of the standard

⁴ In Table 3.1, a small p-value for the intercept indicates that we can reject the null hypothesis that β0 = 0, and a small p-value for TV indicates that we can reject the null hypothesis that β1 = 0. Rejecting the latter null hypothesis allows us to conclude that there is a relationship between TV and sales. Rejecting the former allows us to conclude that in the absence of TV expenditure, sales are non-zero.

deviation of ε. Roughly speaking, it is the average amount that the response will deviate from the true regression line. It is computed using the formula

\[ \mathrm{RSE} = \sqrt{\frac{\mathrm{RSS}}{n-2}} = \sqrt{\frac{1}{n-2} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}. \qquad (3.15) \]

Note that RSS was defined in Section 3.1.1, and is given by the formula

\[ \mathrm{RSS} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2. \qquad (3.16) \]

Quantity                    Value
Residual standard error      3.26
R²                          0.612
F-statistic                 312.1

TABLE 3.2. For the Advertising data, more information about the least squares model for the regression of number of units sold on TV advertising budget.

In the case of the advertising data, we see from the linear regression output in Table 3.2 that the RSE is 3.26. In other words, actual sales in each market deviate from the true regression line by approximately 3,260 units, on average. Another way to think about this is that even if the model were correct and the true values of the unknown coefficients β0 and β1 were known exactly, any prediction of sales on the basis of TV advertising would still be off by about 3,260 units on average. Of course, whether or not 3,260 units is an acceptable prediction error depends on the problem context. In the advertising data set, the mean value of sales over all markets is approximately 14,000 units, and so the percentage error is 3,260/14,000 = 23 %.

The RSE is considered a measure of the lack of fit of the model (3.5) to the data. If the predictions obtained using the model are very close to the true outcome values, that is, if ŷi ≈ yi for i = 1, ..., n, then (3.15) will be small, and we can conclude that the model fits the data very well. On the other hand, if ŷi is very far from yi for one or more observations, then the RSE may be quite large, indicating that the model doesn't fit the data well.

R² Statistic

The RSE provides an absolute measure of lack of fit of the model (3.5) to the data. But since it is measured in the units of Y, it is not always clear what constitutes a good RSE. The R² statistic provides an alternative measure of fit. It takes the form of a proportion, the proportion of variance explained, and so it always takes on a value between 0 and 1, and is independent of the scale of Y.
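A hedged check of (3.15) and (3.16) on a simulated fit (the set-up repeats the earlier sketch so that the snippet is self-contained): the RSE and the proportion of variance explained assembled from the residuals should match the values reported by summary(), just as Table 3.2 reports them for the advertising fit.

> set.seed(1)
> x=rnorm(100)
> y=7+0.05*x+rnorm(100)
> fit=lm(y~x)
> rss=sum(residuals(fit)^2)          # RSS, as in (3.16)
> sqrt(rss/(length(y)-2))            # the RSE, as in (3.15)
> summary(fit)$sigma                 # the same value from summary()
> 1-rss/sum((y-mean(y))^2)           # the R^2 statistic: proportion of variance explained
> summary(fit)$r.squared             # matches; in simple regression it also equals cor(x,y)^2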

To calculate R², we use the formula

\[ R^2 = \frac{\mathrm{TSS} - \mathrm{RSS}}{\mathrm{TSS}} = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}, \qquad (3.17) \]

where TSS = Σ (yi - ȳ)² is the total sum of squares, and RSS is defined in (3.16). TSS measures the total variance in the response Y, and can be thought of as the amount of variability inherent in the response before the regression is performed. In contrast, RSS measures the amount of variability that is left unexplained after performing the regression. Hence, TSS - RSS measures the amount of variability in the response that is explained (or removed) by performing the regression, and R² measures the proportion of variability in Y that can be explained using X. An R² statistic that is close to 1 indicates that a large proportion of the variability in the response has been explained by the regression. A number near 0 indicates that the regression did not explain much of the variability in the response; this might occur because the linear model is wrong, or the inherent error σ² is high, or both. In Table 3.2, the R² was 0.61, and so just under two-thirds of the variability in sales is explained by a linear regression on TV.

The R² statistic (3.17) has an interpretational advantage over the RSE (3.15), since unlike the RSE, it always lies between 0 and 1. However, it can still be challenging to determine what is a good R² value, and in general, this will depend on the application. For instance, in certain problems in physics, we may know that the data truly comes from a linear model with a small residual error. In this case, we would expect to see an R² value that is extremely close to 1, and a substantially smaller R² value might indicate a serious problem with the experiment in which the data were generated. On the other hand, in typical applications in biology, psychology, marketing, and other domains, the linear model (3.5) is at best an extremely rough approximation to the data, and residual errors due to other unmeasured factors are often very large. In this setting, we would expect only a very small proportion of the variance in the response to be explained by the predictor, and an R² value well below 0.1 might be more realistic!

The R² statistic is a measure of the linear relationship between X and Y. Recall that correlation, defined as

\[ \mathrm{Cor}(X, Y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\ \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}, \qquad (3.18) \]

is also a measure of the linear relationship between X and Y.⁵ This suggests that we might be able to use r = Cor(X, Y) instead of R² in order to assess the fit of the linear model. In fact, it can be shown that in the simple linear regression setting, R² = r². In other words, the squared correlation

⁵ We note that in fact, the right-hand side of (3.18) is the sample correlation; thus, it would be more correct to write $\widehat{\mathrm{Cor}}(X, Y)$; however, we omit the "hat" for ease of notation.

86 3.2 Multiple Linear Regression 71 2 R and the statistic are identical. However, in the next section we will discuss the multiple linear regression problem, in which we use several pre- e response. The concept of correlation dictors simultaneously to predict th between the predictors and the response does not extend automatically to this setting, since correlation quanti fies the association between a single pair of variables rather than between a larger number of variables. We will 2 fills this role. see that R 3.2 Multiple Linear Regression Simple linear regression is a useful approach for predicting a response on the basis of a single predictor variable. However, in practice we often have more than one predictor. For example, in the Advertising data,wehaveexamined the relationship between sales and TV advertising. We also have data for the amount of money spent advertising on the radio and in newspapers, and we may want to know whether either of these two media is associated with sales. How can we extend our analysis of the advertising data in order to accommodate these two additional predictors? One option is to run three separate simple linear regressions, each of which uses a different advertising medium as a predictor. For instance, we can fit a simple linear regression to predict sales on the basis of the amount spent on radio advertisements. Results are shown in Table 3.3 (top , 000 increase in spending on radio advertising is table). We find that a $1 associated with an increase in sales by around 203 units. Table 3.3 (bottom table) contains the least squares coeffic ients for a simple linear regression of sales onto newspaper advertising budget. A $1 , 000 increase in newspaper advertising budget is associated with an increase in sales by approximately 55 units. However, the approach of fitting a separate simple linear regression model for each predictor is not entirely satisfactory. First of all, it is unclear how to make a single prediction of sales given levels of the three advertising media budgets, since each of the budgets is associated with a separate regression equation. Second, each of the three regr ession equations ignores the other two media in forming estimates for the regression coefficients. We will see shortly that if the media budgets are correlated with each other in the 200 markets that constitute our data set, then this can lead to very misleading estimates of the individual media effects on sales. Instead of fitting a separate simple linear regression model for each pre- dictor, a better approach is to extend the simple linear regression model (3.5) so that it can directly accommodate multiple predictors. We can do this by giving each predictor a separate slope coefficient in a single model. In general, suppose that we have p distinct predictors. Then the multiple linear regression model takes the form , + (3.19) X + β β X + + β ··· X + = Y β 2 2 1 1 p 0 p

where Xj represents the jth predictor and βj quantifies the association between that variable and the response. We interpret βj as the average effect on Y of a one unit increase in Xj, holding all other predictors fixed. In the advertising example, (3.19) becomes

\[ \texttt{sales} = \beta_0 + \beta_1 \times \texttt{TV} + \beta_2 \times \texttt{radio} + \beta_3 \times \texttt{newspaper} + \epsilon. \qquad (3.20) \]

Simple regression of sales on radio
             Coefficient   Std. error   t-statistic   p-value
Intercept       9.312        0.563         16.54      < 0.0001
radio           0.203        0.020          9.92      < 0.0001

Simple regression of sales on newspaper
             Coefficient   Std. error   t-statistic   p-value
Intercept      12.351        0.621         19.88      < 0.0001
newspaper       0.055        0.017          3.30      < 0.0001

TABLE 3.3. More simple linear regression models for the Advertising data. Coefficients of the simple linear regression model for number of units sold on Top: radio advertising budget and Bottom: newspaper advertising budget. A $1,000 increase in spending on radio advertising is associated with an average increase in sales by around 203 units, while the same increase in spending on newspaper advertising is associated with an average increase in sales by around 55 units (Note that the sales variable is in thousands of units, and the radio and newspaper variables are in thousands of dollars).

3.2.1 Estimating the Regression Coefficients

As was the case in the simple linear regression setting, the regression coefficients β0, β1, ..., βp in (3.19) are unknown, and must be estimated. Given estimates β̂0, β̂1, ..., β̂p, we can make predictions using the formula

\[ \hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2 + \cdots + \hat{\beta}_p x_p. \qquad (3.21) \]

The parameters are estimated using the same least squares approach that we saw in the context of simple linear regression. We choose β0, β1, ..., βp to minimize the sum of squared residuals

\[ \mathrm{RSS} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_{i1} - \hat{\beta}_2 x_{i2} - \cdots - \hat{\beta}_p x_{ip} \right)^2. \qquad (3.22) \]
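As a hedged sketch (again assuming the Advertising data have been saved as Advertising.csv with columns TV, radio, newspaper, and sales), the multiple least squares fit of (3.20) is obtained by listing all three predictors in the lm() formula:

> Advertising=read.csv("Advertising.csv")
> fit.all=lm(sales~TV+radio+newspaper,data=Advertising)
> coef(fit.all)                   # the estimates that minimize the RSS in (3.22)
> summary(fit.all)$coefficients   # standard errors, t-statistics, and p-values as well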

88 3.2 Multiple Linear Regression 73 Y X 2 X 1 FIGURE 3.4. In a three-dimensional setting, with two predictors and one re- sponse, the least squares regression line becomes a plane. The plane is chosen to minimize the sum of the squared vertical distances between each observation (shown in red) and the plane. ˆ ˆ ˆ β , ,..., The values β that minimize (3.22) are the multiple least squares β p 0 1 stimates. Unlike the simple linear regression regression coefficient e e regression coefficient estimates have estimates given in (3.4), the multipl somewhat complicated forms that are most easily represented using ma- trix algebra. For this reason, we do not provide them here. Any statistical e these coefficient estimates, and software package can be used to comput R . Figure 3.4 later in this chapter we will show how this can be done in illustrates an example of the least squares fit to a toy data set with p =2 predictors. Table 3.4 displays the multiple regre imates when TV, ssion coefficient est radio, and newspaper advertising budgets are used to predict product sales Advertising ults as follows: for a given data. We interpret these res using the amount of TV and newspaper advertising, spending an additional $1 , 000 on radio advertising leads to an increase in sales by approximately 189 units. Comparing these coefficient estimates to those displayed in Tables 3.1 and 3.3, we notice that the multiple regression coefficient estimates for and radio are pretty similar to the simple linear regression coefficient TV estimates. However, while the newspaper regression coeffici ent estimate in newspaper Table 3.3 was significantly non-zero, the coefficient estimate for in the multiple regression model is close to zero, and the corresponding p-value is no longer significant, with a value around 0 . 86. This illustrates

             Coefficient   Std. error   t-statistic   p-value
Intercept    2.939         0.3119        9.42         < 0.0001
TV           0.046         0.0014       32.81         < 0.0001
radio        0.189         0.0086       21.89         < 0.0001
newspaper    −0.001        0.0059       −0.18         0.8599

TABLE 3.4. For the Advertising data, least squares coefficient estimates of the multiple linear regression of number of units sold on TV, radio, and newspaper advertising budgets.

that the simple and multiple regression coefficients can be quite different. This difference stems from the fact that in the simple regression case, the slope term represents the average effect of a $1,000 increase in newspaper advertising, ignoring other predictors such as TV and radio. In contrast, in the multiple regression setting, the coefficient for newspaper represents the average effect of increasing newspaper spending by $1,000 while holding TV and radio fixed.

Does it make sense for the multiple regression to suggest no relationship between sales and newspaper while the simple linear regression implies the opposite? In fact it does. Consider the correlation matrix for the three predictor variables and response variable, displayed in Table 3.5. Notice that the correlation between radio and newspaper is 0.35. This reveals a tendency to spend more on newspaper advertising in markets where more is spent on radio advertising. Now suppose that the multiple regression is correct and newspaper advertising has no direct impact on sales, but radio advertising does increase sales. Then in markets where we spend more on radio our sales will tend to be higher, and, as our correlation matrix shows, we also tend to spend more on newspaper advertising in those same markets. Hence, in a simple linear regression which only examines sales versus newspaper, we will observe that higher values of newspaper tend to be associated with higher values of sales, even though newspaper advertising does not actually affect sales. So newspaper sales are a surrogate for radio advertising; newspaper gets "credit" for the effect of radio on sales.

This slightly counterintuitive result is very common in many real life situations. Consider an absurd example to illustrate the point. Running a regression of shark attacks versus ice cream sales for data collected at a given beach community over a period of time would show a positive relationship, similar to that seen between sales and newspaper. Of course no one (yet) has suggested that ice creams should be banned at beaches to reduce shark attacks. In reality, higher temperatures cause more people to visit the beach, which in turn results in more ice cream sales and more shark attacks. A multiple regression of attacks versus ice cream sales and temperature reveals that, as intuition implies, the former predictor is no longer significant after adjusting for temperature.
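The correlation structure discussed above can be computed directly in R. A minimal sketch, continuing with the Advertising data frame from the earlier example; the result corresponds to the correlation matrix shown in Table 3.5.

```r
# Pairwise correlations between the predictors and the response;
# rounding to four decimal places matches the precision of Table 3.5.
round(cor(Advertising[, c("TV", "radio", "newspaper", "sales")]), 4)
```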

             TV       radio    newspaper   sales
TV           1.0000   0.0548   0.0567      0.7822
radio                 1.0000   0.3541      0.5762
newspaper                      1.0000      0.2283
sales                                      1.0000

TABLE 3.5. Correlation matrix for TV, radio, newspaper, and sales for the Advertising data.

3.2.2 Some Important Questions

When we perform multiple linear regression, we usually are interested in answering a few important questions.

1. Is at least one of the predictors X_1, X_2, ..., X_p useful in predicting the response?
2. Do all the predictors help to explain Y, or is only a subset of the predictors useful?
3. How well does the model fit the data?
4. Given a set of predictor values, what response value should we predict, and how accurate is our prediction?

We now address each of these questions in turn.

One: Is There a Relationship Between the Response and Predictors?

Recall that in the simple linear regression setting, in order to determine whether there is a relationship between the response and the predictor we can simply check whether β_1 = 0. In the multiple regression setting with p predictors, we need to ask whether all of the regression coefficients are zero, i.e. whether β_1 = β_2 = ... = β_p = 0. As in the simple linear regression setting, we use a hypothesis test to answer this question. We test the null hypothesis

$$H_0: \beta_1 = \beta_2 = \cdots = \beta_p = 0$$

versus the alternative

$$H_a: \text{at least one } \beta_j \text{ is non-zero.}$$

This hypothesis test is performed by computing the F-statistic,

$$F = \frac{(\mathrm{TSS} - \mathrm{RSS})/p}{\mathrm{RSS}/(n - p - 1)}, \qquad (3.23)$$

where, as with simple linear regression, TSS = Σ(y_i − ȳ)² and RSS = Σ(y_i − ŷ_i)². If the linear model assumptions are correct, one can show that

$$E\{\mathrm{RSS}/(n - p - 1)\} = \sigma^2$$

and that, provided H_0 is true,

$$E\{(\mathrm{TSS} - \mathrm{RSS})/p\} = \sigma^2.$$

Hence, when there is no relationship between the response and predictors, one would expect the F-statistic to take on a value close to 1. On the other hand, if H_a is true, then E{(TSS − RSS)/p} > σ², so we expect F to be greater than 1.

Quantity                   Value
Residual standard error    1.69
R²                         0.897
F-statistic                570

TABLE 3.6. More information about the least squares model for the regression of number of units sold on TV, radio, and newspaper advertising budgets in the Advertising data. Other information about this model was displayed in Table 3.4.

The F-statistic for the multiple linear regression model obtained by regressing sales onto TV, radio, and newspaper is shown in Table 3.6. In this example the F-statistic is 570. Since this is far larger than 1, it provides compelling evidence against the null hypothesis H_0. In other words, the large F-statistic suggests that at least one of the advertising media must be related to sales. However, what if the F-statistic had been closer to 1? How large does the F-statistic need to be before we can reject H_0 and conclude that there is a relationship? It turns out that the answer depends on the values of n and p. When n is large, an F-statistic that is just a little larger than 1 might still provide evidence against H_0. In contrast, a larger F-statistic is needed to reject H_0 if n is small. When H_0 is true and the errors ε_i have a normal distribution, the F-statistic follows an F-distribution. (Even if the errors are not normally distributed, the F-statistic approximately follows an F-distribution provided that the sample size n is large.) For any given value of n and p, any statistical software package can be used to compute the p-value associated with the F-statistic using this distribution. Based on this p-value, we can determine whether or not to reject H_0. For the advertising data, the p-value associated with the F-statistic in Table 3.6 is essentially zero, so we have extremely strong evidence that at least one of the media is associated with increased sales.

In (3.23) we are testing H_0 that all the coefficients are zero. Sometimes we want to test that a particular subset of q of the coefficients are zero. This corresponds to a null hypothesis

$$H_0: \beta_{p-q+1} = \beta_{p-q+2} = \cdots = \beta_p = 0,$$

where for convenience we have put the variables chosen for omission at the end of the list. In this case we fit a second model that uses all the variables except those last q. Suppose that the residual sum of squares for that model is RSS_0. Then the appropriate F-statistic is

$$F = \frac{(\mathrm{RSS}_0 - \mathrm{RSS})/q}{\mathrm{RSS}/(n - p - 1)}. \qquad (3.24)$$

Notice that in Table 3.4, for each individual predictor a t-statistic and a p-value were reported. These provide information about whether each individual predictor is related to the response, after adjusting for the other predictors. It turns out that each of these is exactly equivalent to the F-test that omits that single variable from the model, leaving all the others in—i.e. q = 1 in (3.24). (The square of each t-statistic is the corresponding F-statistic.) So it reports the partial effect of adding that variable to the model. For instance, as we discussed earlier, these p-values indicate that TV and radio are related to sales, but that there is no evidence that newspaper is associated with sales, in the presence of these two.

Given these individual p-values for each variable, why do we need to look at the overall F-statistic? After all, it seems likely that if any one of the p-values for the individual variables is very small, then at least one of the predictors is related to the response. However, this logic is flawed, especially when the number of predictors p is large.

For instance, consider an example in which p = 100 and H_0: β_1 = β_2 = ... = β_p = 0 is true, so no variable is truly associated with the response. In this situation, about 5% of the p-values associated with each variable (of the type shown in Table 3.4) will be below 0.05 by chance. In other words, we expect to see approximately five small p-values even in the absence of any true association between the predictors and the response. In fact, we are almost guaranteed that we will observe at least one p-value below 0.05 by chance! Hence, if we use the individual t-statistics and associated p-values in order to decide whether or not there is any association between the variables and the response, there is a very high chance that we will incorrectly conclude that there is a relationship. However, the F-statistic does not suffer from this problem because it adjusts for the number of predictors. Hence, if H_0 is true, there is only a 5% chance that the F-statistic will result in a p-value below 0.05, regardless of the number of predictors or the number of observations.
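Both the overall F-test (3.23) and the partial F-test (3.24) are easy to carry out in R. A minimal sketch, continuing with the Advertising fit from earlier (the object and column names are carried over from that example):

```r
# Overall F-statistic (3.23) and its degrees of freedom, as reported by summary().
fit_full <- lm(sales ~ TV + radio + newspaper, data = Advertising)
fstat <- summary(fit_full)$fstatistic
fstat

# Corresponding p-value from the F-distribution.
pf(fstat[1], fstat[2], fstat[3], lower.tail = FALSE)

# Partial F-test (3.24) with q = 1: does newspaper add anything once
# TV and radio are already in the model?
fit_reduced <- lm(sales ~ TV + radio, data = Advertising)
anova(fit_reduced, fit_full)   # this F value equals the square of newspaper's t-statistic
```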

The approach of using an F-statistic to test for any association between the predictors and the response works when p is relatively small, and certainly small compared to n. However, sometimes we have a very large number of variables. If p > n then there are more coefficients β_j to estimate than observations from which to estimate them. In this case we cannot even fit the multiple linear regression model using least squares, so the F-statistic cannot be used, and neither can most of the other concepts that we have seen so far in this chapter. When p is large, some of the approaches discussed in the next section, such as forward selection, can be used. This high-dimensional setting is discussed in greater detail in Chapter 6.

Two: Deciding on Important Variables

As discussed in the previous section, the first step in a multiple regression analysis is to compute the F-statistic and to examine the associated p-value. If we conclude on the basis of that p-value that at least one of the predictors is related to the response, then it is natural to wonder which are the guilty ones! We could look at the individual p-values as in Table 3.4, but as discussed, if p is large we are likely to make some false discoveries.

It is possible that all of the predictors are associated with the response, but it is more often the case that the response is only related to a subset of the predictors. The task of determining which predictors are associated with the response, in order to fit a single model involving only those predictors, is referred to as variable selection. The variable selection problem is studied extensively in Chapter 6, and so here we will provide only a brief outline of some classical approaches.

Ideally, we would like to perform variable selection by trying out a lot of different models, each containing a different subset of the predictors. For instance, if p = 2, then we can consider four models: (1) a model containing no variables, (2) a model containing X_1 only, (3) a model containing X_2 only, and (4) a model containing both X_1 and X_2. We can then select the best model out of all of the models that we have considered. How do we determine which model is best? Various statistics can be used to judge the quality of a model. These include Mallow's C_p, Akaike information criterion (AIC), Bayesian information criterion (BIC), and adjusted R². These are discussed in more detail in Chapter 6. We can also determine which model is best by plotting various model outputs, such as the residuals, in order to search for patterns.

Unfortunately, there are a total of 2^p models that contain subsets of p variables. This means that even for moderate p, trying out every possible subset of the predictors is infeasible. For instance, we saw that if p = 2, then there are 2² = 4 models to consider. But if p = 30, then we must consider 2^30 = 1,073,741,824 models! This is not practical. Therefore, unless p is very small, we cannot consider all 2^p models, and instead we need an automated and efficient approach to choose a smaller set of models to consider. There are three classical approaches for this task:

• Forward selection. We begin with the null model—a model that contains an intercept but no predictors. We then fit p simple linear regressions and add to the null model the variable that results

in the lowest RSS. We then add to that model the variable that results in the lowest RSS for the new two-variable model. This approach is continued until some stopping rule is satisfied.

• Backward selection. We start with all variables in the model, and remove the variable with the largest p-value—that is, the variable that is the least statistically significant. The new (p − 1)-variable model is fit, and the variable with the largest p-value is removed. This procedure continues until a stopping rule is reached. For instance, we may stop when all remaining variables have a p-value below some threshold.

• Mixed selection. This is a combination of forward and backward selection. We start with no variables in the model, and as with forward selection, we add the variable that provides the best fit. We continue to add variables one-by-one. Of course, as we noted with the Advertising example, the p-values for variables can become larger as new predictors are added to the model. Hence, if at any point the p-value for one of the variables in the model rises above a certain threshold, then we remove that variable from the model. We continue to perform these forward and backward steps until all variables in the model have a sufficiently low p-value, and all variables outside the model would have a large p-value if added to the model.

Backward selection cannot be used if p > n, while forward selection can always be used. Forward selection is a greedy approach, and might include variables early that later become redundant. Mixed selection can remedy this.

Three: Model Fit

Two of the most common numerical measures of model fit are the RSE and R², the fraction of variance explained. These quantities are computed and interpreted in the same fashion as for simple linear regression.

Recall that in simple regression, R² is the square of the correlation of the response and the variable. In multiple linear regression, it turns out that it equals Cor(Y, Ŷ)², the square of the correlation between the response and the fitted linear model; in fact one property of the fitted linear model is that it maximizes this correlation among all possible linear models.

An R² value close to 1 indicates that the model explains a large portion of the variance in the response variable. As an example, we saw in Table 3.6 that for the Advertising data, the model that uses all three advertising media to predict sales has an R² of 0.8972. On the other hand, the model that uses only TV and radio to predict sales has an R² value of 0.89719. In other words, there is a small increase in R² if we include newspaper advertising in the model that already contains TV and radio advertising, even though we saw earlier that the p-value for newspaper advertising in Table 3.4 is not

significant. It turns out that R² will always increase when more variables are added to the model, even if those variables are only weakly associated with the response. This is due to the fact that adding another variable to the least squares equations must allow us to fit the training data (though not necessarily the testing data) more accurately. Thus, the R² statistic, which is also computed on the training data, must increase. The fact that adding newspaper advertising to the model containing only TV and radio advertising leads to just a tiny increase in R² provides additional evidence that newspaper can be dropped from the model. Essentially, newspaper provides no real improvement in the model fit to the training samples, and its inclusion will likely lead to poor results on independent test samples due to overfitting.

In contrast, the model containing only TV as a predictor had an R² of 0.61 (Table 3.2). Adding radio to the model leads to a substantial improvement in R². This implies that a model that uses TV and radio expenditures to predict sales is substantially better than one that uses only TV advertising. We could further quantify this improvement by looking at the p-value for the radio coefficient in a model that contains only TV and radio as predictors.

The model that contains only TV and radio as predictors has an RSE of 1.681, and the model that also contains newspaper as a predictor has an RSE of 1.686 (Table 3.6). In contrast, the model that contains only TV has an RSE of 3.26 (Table 3.2). This corroborates our previous conclusion that a model that uses TV and radio expenditures to predict sales is much more accurate (on the training data) than one that only uses TV spending. Furthermore, given that TV and radio expenditures are used as predictors, there is no point in also using newspaper spending as a predictor in the model. The observant reader may wonder how RSE can increase when newspaper is added to the model given that RSS must decrease. In general RSE is defined as

$$\mathrm{RSE} = \sqrt{\frac{1}{n - p - 1}\,\mathrm{RSS}}, \qquad (3.25)$$

which simplifies to (3.15) for a simple linear regression. Thus, models with more variables can have higher RSE if the decrease in RSS is small relative to the increase in p.

In addition to looking at the RSE and R² statistics just discussed, it can be useful to plot the data. Graphical summaries can reveal problems with a model that are not visible from numerical statistics. For example, Figure 3.5 displays a three-dimensional plot of TV and radio versus sales. We see that some observations lie above and some observations lie below the least squares regression plane. Notice that there is a clear pattern of negative residuals, followed by positive residuals, followed by negative residuals. In particular, the linear model seems to overestimate sales for instances in which most of the advertising money was spent exclusively on either

TV or radio. It underestimates sales for instances where the budget was split between the two media. This pronounced non-linear pattern cannot be modeled accurately using linear regression. It suggests a synergy or interaction effect between the advertising media, whereby combining the media together results in a bigger boost to sales than using any single medium. In Section 3.3.2, we will discuss extending the linear model to accommodate such synergistic effects through the use of interaction terms.

FIGURE 3.5. For the Advertising data, a linear regression fit to sales using TV and radio as predictors. From the pattern of the residuals, we can see that there is a pronounced non-linear relationship in the data.

Four: Predictions

Once we have fit the multiple regression model, it is straightforward to apply (3.21) in order to predict the response Y on the basis of a set of values for the predictors X_1, X_2, ..., X_p. However, there are three sorts of uncertainty associated with this prediction.

1. The coefficient estimates β̂_0, β̂_1, ..., β̂_p are estimates for β_0, β_1, ..., β_p. That is, the least squares plane

$$\hat{Y} = \hat{\beta}_0 + \hat{\beta}_1 X_1 + \cdots + \hat{\beta}_p X_p$$

is only an estimate for the true population regression plane

$$f(X) = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p.$$

The inaccuracy in the coefficient estimates is related to the reducible error from Chapter 2. We can compute a confidence interval in order to determine how close Ŷ will be to f(X).

2. Of course, in practice assuming a linear model for f(X) is almost always an approximation of reality, so there is an additional source of potentially reducible error which we call model bias. So when we use a linear model, we are in fact estimating the best linear approximation to the true surface. However, here we will ignore this discrepancy, and operate as if the linear model were correct.

3. Even if we knew f(X)—that is, even if we knew the true values for β_0, β_1, ..., β_p—the response value cannot be predicted perfectly because of the random error ε in the model (3.21). In Chapter 2, we referred to this as the irreducible error. How much will Y vary from Ŷ? We use prediction intervals to answer this question. Prediction intervals are always wider than confidence intervals, because they incorporate both the error in the estimate for f(X) (the reducible error) and the uncertainty as to how much an individual point will differ from the population regression plane (the irreducible error).

We use a confidence interval to quantify the uncertainty surrounding the average sales over a large number of cities. For example, given that $100,000 is spent on TV advertising and $20,000 is spent on radio advertising in each city, the 95% confidence interval is [10,985, 11,528]. We interpret this to mean that 95% of intervals of this form will contain the true value of f(X). (In other words, if we collect a large number of data sets like the Advertising data, and we construct a confidence interval for the average sales on the basis of each data set, given $100,000 in TV and $20,000 in radio advertising, then 95% of these confidence intervals will contain the true value of average sales.) On the other hand, a prediction interval can be used to quantify the uncertainty surrounding sales for a particular city. Given that $100,000 is spent on TV advertising and $20,000 is spent on radio advertising in that city, the 95% prediction interval is [7,930, 14,580]. We interpret this to mean that 95% of intervals of this form will contain the true value of Y for this city. Note that both intervals are centered at 11,256, but that the prediction interval is substantially wider than the confidence interval, reflecting the increased uncertainty about sales for a given city in comparison to the average sales over many locations.
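In R, both intervals can be obtained from predict() applied to a fitted lm object. A minimal sketch, assuming the Advertising data frame from earlier; the new budget values mirror the example in the text (budgets are recorded in thousands of dollars).

```r
# Fit the two-predictor model and form intervals at TV = 100 ($100,000)
# and radio = 20 ($20,000).
fit2 <- lm(sales ~ TV + radio, data = Advertising)
new_city <- data.frame(TV = 100, radio = 20)

# 95% confidence interval for the average sales, f(X).
predict(fit2, newdata = new_city, interval = "confidence")

# 95% prediction interval for sales in an individual city; always wider.
predict(fit2, newdata = new_city, interval = "prediction")
```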

3.3 Other Considerations in the Regression Model

3.3.1 Qualitative Predictors

In our discussion so far, we have assumed that all variables in our linear regression model are quantitative. But in practice, this is not necessarily the case; often some predictors are qualitative.

For example, the Credit data set displayed in Figure 3.6 records balance (average credit card debt for a number of individuals) as well as several quantitative predictors: age, cards (number of credit cards), education (years of education), income (in thousands of dollars), limit (credit limit), and rating (credit rating). Each panel of Figure 3.6 is a scatterplot for a pair of variables whose identities are given by the corresponding row and column labels. For example, the scatterplot directly to the right of the word "Balance" depicts balance versus age, while the plot directly to the right of "Age" corresponds to age versus cards. In addition to these quantitative variables, we also have four qualitative variables: gender, student (student status), status (marital status), and ethnicity (Caucasian, African American or Asian).

FIGURE 3.6. The Credit data set contains information about balance, age, cards, education, income, limit, and rating for a number of potential customers.
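A scatterplot matrix like Figure 3.6 can be produced with pairs(). This is a sketch only; it assumes a Credit data frame with the lower-case column names used in the text, which may differ from the names in a particular copy of the data, and the file name below is illustrative.

```r
# Load the Credit data (file name and column names are assumptions).
Credit <- read.csv("Credit.csv")

# Scatterplot matrix of the quantitative variables, as in Figure 3.6.
pairs(Credit[, c("balance", "age", "cards", "education",
                 "income", "limit", "rating")])
```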

                 Coefficient   Std. error   t-statistic   p-value
Intercept        509.80        33.13        15.389        < 0.0001
gender[Female]    19.73        46.05         0.429        0.6690

TABLE 3.7. Least squares coefficient estimates associated with the regression of balance onto gender in the Credit data set. The linear model is given in (3.27). That is, gender is encoded as a dummy variable, as in (3.26).

Predictors with Only Two Levels

Suppose that we wish to investigate differences in credit card balance between males and females, ignoring the other variables for the moment. If a qualitative predictor (also known as a factor) only has two levels, or possible values, then incorporating it into a regression model is very simple. We simply create an indicator or dummy variable that takes on two possible numerical values. For example, based on the gender variable, we can create a new variable that takes the form

$$x_i = \begin{cases} 1 & \text{if the $i$th person is female} \\ 0 & \text{if the $i$th person is male,} \end{cases} \qquad (3.26)$$

and use this variable as a predictor in the regression equation. This results in the model

$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i = \begin{cases} \beta_0 + \beta_1 + \epsilon_i & \text{if the $i$th person is female} \\ \beta_0 + \epsilon_i & \text{if the $i$th person is male.} \end{cases} \qquad (3.27)$$

Now β_0 can be interpreted as the average credit card balance among males, β_0 + β_1 as the average credit card balance among females, and β_1 as the average difference in credit card balance between females and males.

Table 3.7 displays the coefficient estimates and other information associated with the model (3.27). The average credit card debt for males is estimated to be $509.80, whereas females are estimated to carry $19.73 in additional debt for a total of $509.80 + $19.73 = $529.53. However, we notice that the p-value for the dummy variable is very high. This indicates that there is no statistical evidence of a difference in average credit card balance between the genders.

The decision to code females as 1 and males as 0 in (3.27) is arbitrary, and has no effect on the regression fit, but it does alter the interpretation of the coefficients. If we had coded males as 1 and females as 0, then the estimates for β_0 and β_1 would have been 529.53 and −19.73, respectively, leading once again to a prediction of credit card debt of $529.53 − $19.73 = $509.80 for males and a prediction of $529.53 for females. Alternatively, instead of a 0/1 coding scheme, we could create a dummy variable

$$x_i = \begin{cases} 1 & \text{if the $i$th person is female} \\ -1 & \text{if the $i$th person is male} \end{cases}$$

and use this variable in the regression equation. This results in the model

$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i = \begin{cases} \beta_0 + \beta_1 + \epsilon_i & \text{if the $i$th person is female} \\ \beta_0 - \beta_1 + \epsilon_i & \text{if the $i$th person is male.} \end{cases}$$

Now β_0 can be interpreted as the overall average credit card balance (ignoring the gender effect), and β_1 is the amount that females are above the average and males are below the average. In this example, the estimate for β_0 would be $519.665, halfway between the male and female averages of $509.80 and $529.53. The estimate for β_1 would be $9.865, which is half of $19.73, the average difference between females and males. It is important to note that the final predictions for the credit balances of males and females will be identical regardless of the coding scheme used. The only difference is in the way that the coefficients are interpreted.

Qualitative Predictors with More than Two Levels

When a qualitative predictor has more than two levels, a single dummy variable cannot represent all possible values. In this situation, we can create additional dummy variables. For example, for the ethnicity variable we create two dummy variables. The first could be

$$x_{i1} = \begin{cases} 1 & \text{if the $i$th person is Asian} \\ 0 & \text{if the $i$th person is not Asian,} \end{cases} \qquad (3.28)$$

and the second could be

$$x_{i2} = \begin{cases} 1 & \text{if the $i$th person is Caucasian} \\ 0 & \text{if the $i$th person is not Caucasian.} \end{cases} \qquad (3.29)$$

Then both of these variables can be used in the regression equation, in order to obtain the model

$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \epsilon_i = \begin{cases} \beta_0 + \beta_1 + \epsilon_i & \text{if the $i$th person is Asian} \\ \beta_0 + \beta_2 + \epsilon_i & \text{if the $i$th person is Caucasian} \\ \beta_0 + \epsilon_i & \text{if the $i$th person is African American.} \end{cases} \qquad (3.30)$$

Now β_0 can be interpreted as the average credit card balance for African Americans, β_1 can be interpreted as the difference in the average balance between the Asian and African American categories, and β_2 can be interpreted as the difference in the average balance between the Caucasian and

African American categories. There will always be one fewer dummy variable than the number of levels. The level with no dummy variable—African American in this example—is known as the baseline.

                      Coefficient   Std. error   t-statistic   p-value
Intercept             531.00        46.32        11.464        < 0.0001
ethnicity[Asian]      −18.69        65.02        −0.287        0.7740
ethnicity[Caucasian]  −12.50        56.68        −0.221        0.8260

TABLE 3.8. Least squares coefficient estimates associated with the regression of balance onto ethnicity in the Credit data set. The linear model is given in (3.30). That is, ethnicity is encoded via two dummy variables (3.28) and (3.29).

From Table 3.8, we see that the estimated balance for the baseline, African American, is $531.00. It is estimated that the Asian category will have $18.69 less debt than the African American category, and that the Caucasian category will have $12.50 less debt than the African American category. However, the p-values associated with the coefficient estimates for the two dummy variables are very large, suggesting no statistical evidence of a real difference in credit card balance between the ethnicities. Once again, the level selected as the baseline category is arbitrary, and the final predictions for each group will be the same regardless of this choice. However, the coefficients and their p-values do depend on the choice of dummy variable coding. Rather than rely on the individual coefficients, we can use an F-test to test H_0: β_1 = β_2 = 0; this does not depend on the coding. This F-test has a p-value of 0.96, indicating that we cannot reject the null hypothesis that there is no relationship between balance and ethnicity.

Using this dummy variable approach presents no difficulties when incorporating both quantitative and qualitative predictors. For example, to regress balance on both a quantitative variable such as income and a qualitative variable such as student, we must simply create a dummy variable for student and then fit a multiple regression model using income and the dummy variable as predictors for credit card balance.

There are many different ways of coding qualitative variables besides the dummy variable approach taken here. All of these approaches lead to equivalent model fits, but the coefficients are different and have different interpretations, and are designed to measure particular contrasts. This topic is beyond the scope of the book, and so we will not pursue it further.
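In R, none of this dummy-variable bookkeeping needs to be done by hand: lm() automatically creates dummy variables for predictors stored as factors. A minimal sketch, again assuming a Credit data frame with the column names used in the text:

```r
# Ensure the qualitative predictors are factors (they may already be).
Credit$gender    <- factor(Credit$gender)
Credit$ethnicity <- factor(Credit$ethnicity)

# Regression of balance onto gender (compare Table 3.7); contrasts() shows
# the 0/1 dummy coding that lm() uses, with the first level as the baseline.
fit_gender <- lm(balance ~ gender, data = Credit)
summary(fit_gender)
contrasts(Credit$gender)

# Regression of balance onto ethnicity (compare Table 3.8); two dummy
# variables are created automatically for the three levels.
fit_eth <- lm(balance ~ ethnicity, data = Credit)
summary(fit_eth)

# F-test of H0: both ethnicity coefficients are zero; this test does not
# depend on the dummy coding chosen.
anova(fit_eth)
```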

3.3.2 Extensions of the Linear Model

The standard linear regression model (3.19) provides interpretable results and works quite well on many real-world problems. However, it makes several highly restrictive assumptions that are often violated in practice. Two of the most important assumptions state that the relationship between the predictors and response are additive and linear. The additive assumption means that the effect of changes in a predictor X_j on the response Y is independent of the values of the other predictors. The linear assumption states that the change in the response Y due to a one-unit change in X_j is constant, regardless of the value of X_j. In this book, we examine a number of sophisticated methods that relax these two assumptions. Here, we briefly examine some common classical approaches for extending the linear model.

Removing the Additive Assumption

In our previous analysis of the Advertising data, we concluded that both TV and radio seem to be associated with sales. The linear models that formed the basis for this conclusion assumed that the effect on sales of increasing one advertising medium is independent of the amount spent on the other media. For example, the linear model (3.20) states that the average effect on sales of a one-unit increase in TV is always β_1, regardless of the amount spent on radio.

However, this simple model may be incorrect. Suppose that spending money on radio advertising actually increases the effectiveness of TV advertising, so that the slope term for TV should increase as radio increases. In this situation, given a fixed budget of $100,000, spending half on radio and half on TV may increase sales more than allocating the entire amount to either TV or to radio. In marketing, this is known as a synergy effect, and in statistics it is referred to as an interaction effect. Figure 3.5 suggests that such an effect may be present in the advertising data. Notice that when levels of either TV or radio are low, then the true sales are lower than predicted by the linear model. But when advertising is split between the two media, then the model tends to underestimate sales.

Consider the standard linear regression model with two variables,

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon.$$

According to this model, if we increase X_1 by one unit, then Y will increase by an average of β_1 units. Notice that the presence of X_2 does not alter this statement—that is, regardless of the value of X_2, a one-unit increase in X_1 will lead to a β_1-unit increase in Y. One way of extending this model to allow for interaction effects is to include a third predictor, called an interaction term, which is constructed by computing the product of X_1 and X_2. This results in the model

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1 X_2 + \epsilon. \qquad (3.31)$$

How does inclusion of this interaction term relax the additive assumption? Notice that (3.31) can be rewritten as

$$Y = \beta_0 + (\beta_1 + \beta_3 X_2) X_1 + \beta_2 X_2 + \epsilon = \beta_0 + \tilde{\beta}_1 X_1 + \beta_2 X_2 + \epsilon \qquad (3.32)$$

where β̃_1 = β_1 + β_3 X_2. Since β̃_1 changes with X_2, the effect of X_1 on Y is no longer constant: adjusting X_2 will change the impact of X_1 on Y.

For example, suppose that we are interested in studying the productivity of a factory. We wish to predict the number of units produced on the basis of the number of production lines and the total number of workers. It seems likely that the effect of increasing the number of production lines will depend on the number of workers, since if no workers are available to operate the lines, then increasing the number of lines will not increase production. This suggests that it would be appropriate to include an interaction term between lines and workers in a linear model to predict units. Suppose that when we fit the model, we obtain

$$\text{units} \approx 1.2 + 3.4 \times \text{lines} + 0.22 \times \text{workers} + 1.4 \times (\text{lines} \times \text{workers}) = 1.2 + (3.4 + 1.4 \times \text{workers}) \times \text{lines} + 0.22 \times \text{workers}.$$

In other words, adding an additional line will increase the number of units produced by 3.4 + 1.4 × workers. Hence the more workers we have, the stronger will be the effect of lines.

We now return to the Advertising example. A linear model that uses TV, radio, and an interaction between the two to predict sales takes the form

$$\text{sales} = \beta_0 + \beta_1 \times \text{TV} + \beta_2 \times \text{radio} + \beta_3 \times (\text{radio} \times \text{TV}) + \epsilon = \beta_0 + (\beta_1 + \beta_3 \times \text{radio}) \times \text{TV} + \beta_2 \times \text{radio} + \epsilon. \qquad (3.33)$$

We can interpret β_3 as the increase in the effectiveness of TV advertising for a one-unit increase in radio advertising (or vice-versa). The coefficients that result from fitting the model (3.33) are given in Table 3.9.

             Coefficient   Std. error   t-statistic   p-value
Intercept    6.7502        0.248        27.23         < 0.0001
TV           0.0191        0.002        12.70         < 0.0001
radio        0.0289        0.009         3.24         0.0014
TV × radio   0.0011        0.000        20.73         < 0.0001

TABLE 3.9. For the Advertising data, least squares coefficient estimates associated with the regression of sales onto TV and radio, with an interaction term, as in (3.33).

The results in Table 3.9 strongly suggest that the model that includes the interaction term is superior to the model that contains only main effects. The p-value for the interaction term, TV × radio, is extremely low, indicating that there is strong evidence for H_a: β_3 ≠ 0. In other words, it is clear that the true relationship is not additive. The R² for the model (3.33) is 96.8%, compared to only 89.7% for the model that predicts sales using TV and radio without an interaction term. This means that (96.8 − 89.7)/(100 − 89.7) = 69% of the variability in sales that remains after fitting the additive model has been explained by the interaction term.
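In R's formula notation, TV * radio expands to TV + radio + TV:radio, so the interaction model (3.33) can be fit in one line. A minimal sketch, continuing with the Advertising data frame:

```r
# Fit the interaction model (3.33); the summary corresponds to Table 3.9.
fit_int <- lm(sales ~ TV * radio, data = Advertising)
summary(fit_int)

# Equivalent specification, with the interaction written out explicitly.
fit_int2 <- lm(sales ~ TV + radio + TV:radio, data = Advertising)
```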

The coefficient estimates in Table 3.9 suggest that an increase in TV advertising of $1,000 is associated with increased sales of (β̂_1 + β̂_3 × radio) × 1,000 = 19 + 1.1 × radio units. And an increase in radio advertising of $1,000 will be associated with an increase in sales of (β̂_2 + β̂_3 × TV) × 1,000 = 29 + 1.1 × TV units.

In this example, the p-values associated with TV, radio, and the interaction term all are statistically significant (Table 3.9), and so it is obvious that all three variables should be included in the model. However, it is sometimes the case that an interaction term has a very small p-value, but the associated main effects (in this case, TV and radio) do not. The hierarchical principle states that if we include an interaction in a model, we should also include the main effects, even if the p-values associated with their coefficients are not significant. In other words, if the interaction between X_1 and X_2 seems important, then we should include both X_1 and X_2 in the model even if their coefficient estimates have large p-values. The rationale for this principle is that if X_1 × X_2 is related to the response, then whether or not the coefficients of X_1 or X_2 are exactly zero is of little interest. Also X_1 × X_2 is typically correlated with X_1 and X_2, and so leaving them out tends to alter the meaning of the interaction.

In the previous example, we considered an interaction between TV and radio, both of which are quantitative variables. However, the concept of interactions applies just as well to qualitative variables, or to a combination of quantitative and qualitative variables. In fact, an interaction between a qualitative variable and a quantitative variable has a particularly nice interpretation. Consider the Credit data set from Section 3.3.1, and suppose that we wish to predict balance using the income (quantitative) and student (qualitative) variables. In the absence of an interaction term, the model takes the form

$$\text{balance}_i \approx \beta_0 + \beta_1 \times \text{income}_i + \begin{cases} \beta_2 & \text{if the $i$th person is a student} \\ 0 & \text{if the $i$th person is not a student} \end{cases} = \beta_1 \times \text{income}_i + \begin{cases} \beta_0 + \beta_2 & \text{if the $i$th person is a student} \\ \beta_0 & \text{if the $i$th person is not a student.} \end{cases} \qquad (3.34)$$

Notice that this amounts to fitting two parallel lines to the data, one for students and one for non-students. The lines for students and non-students have different intercepts, β_0 + β_2 versus β_0, but the same slope, β_1. This is illustrated in the left-hand panel of Figure 3.7. The fact that the lines are parallel means that the average effect on balance of a one-unit increase in income does not depend on whether or not the individual is a student. This represents a potentially serious limitation of the model, since in fact a change in income may have a very different effect on the credit card balance of a student versus a non-student.

This limitation can be addressed by adding an interaction variable, created by multiplying income with the dummy variable for student. Our

model now becomes

$$\text{balance}_i \approx \beta_0 + \beta_1 \times \text{income}_i + \begin{cases} \beta_2 + \beta_3 \times \text{income}_i & \text{if student} \\ 0 & \text{if not student} \end{cases} = \begin{cases} (\beta_0 + \beta_2) + (\beta_1 + \beta_3) \times \text{income}_i & \text{if student} \\ \beta_0 + \beta_1 \times \text{income}_i & \text{if not student.} \end{cases} \qquad (3.35)$$

Once again, we have two different regression lines for the students and the non-students. But now those regression lines have different intercepts, β_0 + β_2 versus β_0, as well as different slopes, β_1 + β_3 versus β_1. This allows for the possibility that changes in income may affect the credit card balances of students and non-students differently. The right-hand panel of Figure 3.7 shows the estimated relationships between income and balance for students and non-students in the model (3.35). We note that the slope for students is lower than the slope for non-students. This suggests that increases in income are associated with smaller increases in credit card balance among students as compared to non-students.

FIGURE 3.7. For the Credit data, the least squares lines are shown for prediction of balance from income for students and non-students. Left: The model (3.34) was fit. There is no interaction between income and student. Right: The model (3.35) was fit. There is an interaction term between income and student.
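Models (3.34) and (3.35) can be fit in R as follows; this is a sketch that assumes the Credit data frame from earlier, with student stored as a factor.

```r
# Parallel-lines model (3.34): same slope for students and non-students.
fit_add <- lm(balance ~ income + student, data = Credit)

# Interaction model (3.35): different intercepts and different slopes.
fit_int <- lm(balance ~ income * student, data = Credit)
summary(fit_int)
```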

Non-linear Relationships

As discussed previously, the linear regression model (3.19) assumes a linear relationship between the response and predictors. But in some cases, the true relationship between the response and the predictors may be non-linear. Here we present a very simple way to directly extend the linear model to accommodate non-linear relationships, using polynomial regression. In later chapters, we will present more complex approaches for performing non-linear fits in more general settings.

FIGURE 3.8. The Auto data set. For a number of cars, mpg and horsepower are shown. The linear regression fit is shown in orange. The linear regression fit for a model that includes horsepower² is shown as a blue curve. The linear regression fit for a model that includes all polynomials of horsepower up to fifth degree is shown in green.

Consider Figure 3.8, in which mpg (gas mileage in miles per gallon) versus horsepower is shown for a number of cars in the Auto data set. The orange line represents the linear regression fit. There is a pronounced relationship between mpg and horsepower, but it seems clear that this relationship is in fact non-linear: the data suggest a curved relationship. A simple approach for incorporating non-linear associations in a linear model is to include transformed versions of the predictors in the model. For example, the points in Figure 3.8 seem to have a quadratic shape, suggesting that a quadratic model of the form

$$\text{mpg} = \beta_0 + \beta_1 \times \text{horsepower} + \beta_2 \times \text{horsepower}^2 + \epsilon \qquad (3.36)$$

may provide a better fit. Equation 3.36 involves predicting mpg using a non-linear function of horsepower. But it is still a linear model! That is, (3.36) is simply a multiple linear regression model with X_1 = horsepower and X_2 = horsepower². So we can use standard linear regression software to estimate β_0, β_1, and β_2 in order to produce a non-linear fit. The blue curve in Figure 3.8 shows the resulting quadratic fit to the data. The quadratic fit appears to be substantially better than the fit obtained when just the linear term is included. The R² of the quadratic fit is 0.688, compared to 0.606 for the linear fit, and the p-value in Table 3.10 for the quadratic term is highly significant.
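Because (3.36) is still a linear model, it can be fit with lm() by including a transformed predictor. A minimal sketch, assuming an Auto data frame in which mpg and horsepower are numeric columns (the file name below is illustrative):

```r
Auto <- read.csv("Auto.csv")

# Quadratic model (3.36); I() protects the arithmetic inside the formula.
fit_quad <- lm(mpg ~ horsepower + I(horsepower^2), data = Auto)
summary(fit_quad)   # compare Table 3.10

# An equivalent fit via poly(); raw = TRUE reproduces the coefficients of
# Table 3.10, while the default orthogonal polynomials change the
# coefficients but not the fitted values.
fit_quad2 <- lm(mpg ~ poly(horsepower, 2, raw = TRUE), data = Auto)
```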

              Coefficient   Std. error   t-statistic   p-value
Intercept     56.9001       1.8004        31.6         < 0.0001
horsepower    −0.4662       0.0311       −15.0         < 0.0001
horsepower²    0.0012       0.0001        10.1         < 0.0001

TABLE 3.10. For the Auto data set, least squares coefficient estimates associated with the regression of mpg onto horsepower and horsepower².

If including horsepower² led to such a big improvement in the model, why not include horsepower³, horsepower⁴, or even horsepower⁵? The green curve in Figure 3.8 displays the fit that results from including all polynomials up to fifth degree in the model (3.36). The resulting fit seems unnecessarily wiggly—that is, it is unclear that including the additional terms really has led to a better fit to the data.

The approach that we have just described for extending the linear model to accommodate non-linear relationships is known as polynomial regression, since we have included polynomial functions of the predictors in the regression model. We further explore this approach and other non-linear extensions of the linear model in Chapter 7.

3.3.3 Potential Problems

When we fit a linear regression model to a particular data set, many problems may occur. Most common among these are the following:

1. Non-linearity of the response-predictor relationships.
2. Correlation of error terms.
3. Non-constant variance of error terms.
4. Outliers.
5. High-leverage points.
6. Collinearity.

In practice, identifying and overcoming these problems is as much an art as a science. Many pages in countless books have been written on this topic. Since the linear regression model is not our primary focus here, we will provide only a brief summary of some key points.

1. Non-linearity of the Data

The linear regression model assumes that there is a straight-line relationship between the predictors and the response. If the true relationship is far from linear, then virtually all of the conclusions that we draw from the fit are suspect. In addition, the prediction accuracy of the model can be significantly reduced.

Residual plots are a useful graphical tool for identifying non-linearity. Given a simple linear regression model, we can plot the residuals, e_i = y_i − ŷ_i, versus the predictor x_i. In the case of a multiple regression model,

since there are multiple predictors, we instead plot the residuals versus the predicted (or fitted) values ŷ_i. Ideally, the residual plot will show no discernible pattern. The presence of a pattern may indicate a problem with some aspect of the linear model.

FIGURE 3.9. Plots of residuals versus predicted (or fitted) values for the Auto data set. In each plot, the red line is a smooth fit to the residuals, intended to make it easier to identify a trend. Left: A linear regression of mpg on horsepower. A strong pattern in the residuals indicates non-linearity in the data. Right: A linear regression of mpg on horsepower and horsepower². There is little pattern in the residuals.

The left panel of Figure 3.9 displays a residual plot from the linear regression of mpg onto horsepower on the Auto data set that was illustrated in Figure 3.8. The red line is a smooth fit to the residuals, which is displayed in order to make it easier to identify any trends. The residuals exhibit a clear U-shape, which provides a strong indication of non-linearity in the data. In contrast, the right-hand panel of Figure 3.9 displays the residual plot that results from the model (3.36), which contains a quadratic term. There appears to be little pattern in the residuals, suggesting that the quadratic term improves the fit to the data.

If the residual plot indicates that there are non-linear associations in the data, then a simple approach is to use non-linear transformations of the predictors, such as log X, √X, and X², in the regression model. In the later chapters of this book, we will discuss other more advanced non-linear approaches for addressing this issue.
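Residual plots like those in Figure 3.9 can be drawn directly from fitted lm objects. A minimal sketch, continuing with the Auto fits from above; lowess() is used here as one convenient choice of smoother for the trend line.

```r
fit_lin  <- lm(mpg ~ horsepower, data = Auto)
fit_quad <- lm(mpg ~ horsepower + I(horsepower^2), data = Auto)

par(mfrow = c(1, 2))   # two panels side by side

plot(fitted(fit_lin), residuals(fit_lin),
     xlab = "Fitted values", ylab = "Residuals",
     main = "Residual plot for linear fit")
lines(lowess(fitted(fit_lin), residuals(fit_lin)), col = "red")

plot(fitted(fit_quad), residuals(fit_quad),
     xlab = "Fitted values", ylab = "Residuals",
     main = "Residual plot for quadratic fit")
lines(lowess(fitted(fit_quad), residuals(fit_quad)), col = "red")
```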

2. Correlation of Error Terms

An important assumption of the linear regression model is that the error terms, ε_1, ε_2, ..., ε_n, are uncorrelated. What does this mean? For instance, if the errors are uncorrelated, then the fact that ε_i is positive provides little or no information about the sign of ε_{i+1}. The standard errors that are computed for the estimated regression coefficients or the fitted values are based on the assumption of uncorrelated error terms. If in fact there is correlation among the error terms, then the estimated standard errors will tend to underestimate the true standard errors. As a result, confidence and prediction intervals will be narrower than they should be. For example, a 95% confidence interval may in reality have a much lower probability than 0.95 of containing the true value of the parameter. In addition, p-values associated with the model will be lower than they should be; this could cause us to erroneously conclude that a parameter is statistically significant. In short, if the error terms are correlated, we may have an unwarranted sense of confidence in our model.

As an extreme example, suppose we accidentally doubled our data, leading to observations and error terms identical in pairs. If we ignored this, our standard error calculations would be as if we had a sample of size 2n, when in fact we have only n samples. Our estimated parameters would be the same for the 2n samples as for the n samples, but the confidence intervals would be narrower by a factor of √2!

Why might correlations among the error terms occur? Such correlations frequently occur in the context of time series data, which consists of observations for which measurements are obtained at discrete points in time. In many cases, observations that are obtained at adjacent time points will have positively correlated errors. In order to determine if this is the case for a given data set, we can plot the residuals from our model as a function of time. If the errors are uncorrelated, then there should be no discernible pattern. On the other hand, if the error terms are positively correlated, then we may see tracking in the residuals—that is, adjacent residuals may have similar values. Figure 3.10 provides an illustration. In the top panel, we see the residuals from a linear regression fit to data generated with uncorrelated errors. There is no evidence of a time-related trend in the residuals. In contrast, the residuals in the bottom panel are from a data set in which adjacent errors had a correlation of 0.9. Now there is a clear pattern in the residuals—adjacent residuals tend to take on similar values. Finally, the center panel illustrates a more moderate case in which the residuals had a correlation of 0.5. There is still evidence of tracking, but the pattern is less clear.

Many methods have been developed to properly take account of correlations in the error terms in time series data. Correlation among the error terms can also occur outside of time series data. For instance, consider a study in which individuals' heights are predicted from their weights. The assumption of uncorrelated errors could be violated if some of the individuals in the study are members of the same family, or eat the same diet, or have been exposed to the same environmental factors. In general, the assumption of uncorrelated errors is extremely important for linear regression as well as for other statistical methods, and good experimental design is crucial in order to mitigate the risk of such correlations.
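Tracking in the residuals is easy to see in a small simulation. The sketch below generates data with AR(1) errors (a correlation of 0.9 between adjacent error terms, an illustrative choice) and plots the residuals in time order, in the spirit of the bottom panel of Figure 3.10; all data here are simulated.

```r
set.seed(1)
n <- 100
x <- rnorm(n)

# AR(1) errors: each error is strongly correlated with the previous one.
eps <- as.numeric(arima.sim(model = list(ar = 0.9), n = n))

y   <- 2 + 3 * x + eps
fit <- lm(y ~ x)

# Residuals plotted against observation order; adjacent residuals tend to
# take on similar values (tracking).
plot(residuals(fit), type = "l",
     xlab = "Observation", ylab = "Residual")
abline(h = 0, lty = 2)
```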

FIGURE 3.10. Plots of residuals from simulated time series data sets generated with differing levels of correlation ρ between error terms for adjacent time points (ρ = 0.0, 0.5, and 0.9).

3. Non-constant Variance of Error Terms

Another important assumption of the linear regression model is that the error terms have a constant variance, Var(ε_i) = σ². The standard errors, confidence intervals, and hypothesis tests associated with the linear model rely upon this assumption.

Unfortunately, it is often the case that the variances of the error terms are non-constant. For instance, the variances of the error terms may increase with the value of the response. One can identify non-constant variances in the errors, or heteroscedasticity, from the presence of a funnel shape in the residual plot. An example is shown in the left-hand panel of Figure 3.11, in which the magnitude of the residuals tends to increase with the fitted values. When faced with this problem, one possible solution is to transform the response Y using a concave function such as log Y or √Y. Such a transformation results in a greater amount of shrinkage of the larger responses, leading to a reduction in heteroscedasticity. The right-hand panel of Figure 3.11 displays the residual plot after transforming the response

using log Y. The residuals now appear to have constant variance, though there is some evidence of a slight non-linear relationship in the data.

FIGURE 3.11. Residual plots. In each plot, the red line is a smooth fit to the residuals, intended to make it easier to identify a trend. The blue lines track the outer quantiles of the residuals, and emphasize patterns. Left: Response Y. The funnel shape indicates heteroscedasticity. Right: Response log(Y). The response has been log-transformed, and there is now no evidence of heteroscedasticity.

Sometimes we have a good idea of the variance of each response. For example, the ith response could be an average of n_i raw observations. If each of these raw observations is uncorrelated with variance σ², then their average has variance σ_i² = σ²/n_i. In this case a simple remedy is to fit our model by weighted least squares, with weights proportional to the inverse variances—i.e. w_i = n_i in this case. Most linear regression software allows for observation weights.
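A minimal sketch of weighted least squares in this averaged-responses setting; the data below are simulated purely for illustration, with the ith error variance equal to σ²/n_i, and the weights passed to lm() are proportional to the inverse variances.

```r
set.seed(2)
n   <- 100
n_i <- sample(1:10, n, replace = TRUE)      # number of raw observations averaged
x   <- rnorm(n)

# Var(eps_i) = sigma^2 / n_i, with sigma = 1 for illustration.
y <- 1 + 2 * x + rnorm(n, sd = 1 / sqrt(n_i))

fit_ols <- lm(y ~ x)                        # ignores the unequal variances
fit_wls <- lm(y ~ x, weights = n_i)         # weights proportional to 1 / Var(eps_i)
summary(fit_wls)
```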

4. Outliers

An outlier is a point for which y_i is far from the value predicted by the model. Outliers can arise for a variety of reasons, such as incorrect recording of an observation during data collection.

The red point (observation 20) in the left-hand panel of Figure 3.12 illustrates a typical outlier. The red solid line is the least squares regression fit, while the blue dashed line is the least squares fit after removal of the outlier. In this case, removing the outlier has little effect on the least squares line: it leads to almost no change in the slope, and a minuscule reduction in the intercept. It is typical for an outlier that does not have an unusual predictor value to have little effect on the least squares fit. However, even if an outlier does not have much effect on the least squares fit, it can cause other problems. For instance, in this example, the RSE is 1.09 when the outlier is included in the regression, but it is only 0.77 when the outlier is removed. Since the RSE is used to compute all confidence intervals and p-values, such a dramatic increase caused by a single data point can have implications for the interpretation of the fit. Similarly, inclusion of the outlier causes the R² to decline from 0.892 to 0.805.

FIGURE 3.12. Left: The least squares regression line is shown in red, and the regression line after removing the outlier is shown in blue. Center: The residual plot clearly identifies the outlier. Right: The outlier has a studentized residual of 6; typically we expect values between −3 and 3.

Residual plots can be used to identify outliers. In this example, the outlier is clearly visible in the residual plot illustrated in the center panel of Figure 3.12. But in practice, it can be difficult to decide how large a residual needs to be before we consider the point to be an outlier. To address this problem, instead of plotting the residuals, we can plot the studentized residuals, computed by dividing each residual e_i by its estimated standard error. Observations whose studentized residuals are greater than 3 in absolute value are possible outliers. In the right-hand panel of Figure 3.12, the outlier's studentized residual exceeds 6, while all other observations have studentized residuals between −2 and 2.

If we believe that an outlier has occurred due to an error in data collection or recording, then one solution is to simply remove the observation. However, care should be taken, since an outlier may instead indicate a deficiency with the model, such as a missing predictor.

5. High Leverage Points

We just saw that outliers are observations for which the response y_i is unusual given the predictor x_i. In contrast, observations with high leverage have an unusual value for x_i. For example, observation 41 in the left-hand panel of Figure 3.13 has high leverage, in that the predictor value for this observation is large relative to the other observations. (Note that the data displayed in Figure 3.13 are the same as the data displayed in Figure 3.12, but with the addition of a single high leverage observation.) The red solid line is the least squares fit to the data, while the blue dashed line is the fit produced when observation 41 is removed. Comparing the left-hand panels of Figures 3.12 and 3.13, we observe that removing the high leverage observation has a much more substantial impact on the least squares line

5. High Leverage Points

We just saw that outliers are observations for which the response y_i is unusual given the predictor x_i. In contrast, observations with high leverage have an unusual value for x_i. For example, observation 41 in the left-hand panel of Figure 3.13 has high leverage, in that the predictor value for this observation is large relative to the other observations. (Note that the data displayed in Figure 3.13 are the same as the data displayed in Figure 3.12, but with the addition of a single high leverage observation.) The red solid line is the least squares fit to the data, while the blue dashed line is the fit produced when observation 41 is removed. Comparing the left-hand panels of Figures 3.12 and 3.13, we observe that removing the high leverage observation has a much more substantial impact on the least squares line than removing the outlier. In fact, high leverage observations tend to have a sizable impact on the estimated regression line. It is cause for concern if the least squares line is heavily affected by just a couple of observations, because any problems with these points may invalidate the entire fit. For this reason, it is important to identify high leverage observations.

FIGURE 3.13. Left: Observation 41 is a high leverage point, while 20 is not. The red line is the fit to all the data, and the blue line is the fit with observation 41 removed. Center: The red observation is not unusual in terms of its X1 value or its X2 value, but still falls outside the bulk of the data, and hence has high leverage. Right: Observation 41 has a high leverage and a high residual.

In a simple linear regression, high leverage observations are fairly easy to identify, since we can simply look for observations for which the predictor value is outside of the normal range of the observations. But in a multiple linear regression with many predictors, it is possible to have an observation that is well within the range of each individual predictor's values, but that is unusual in terms of the full set of predictors. An example is shown in the center panel of Figure 3.13, for a data set with two predictors, X1 and X2. Most of the observations' predictor values fall within the blue dashed ellipse, but the red observation is well outside of this range. But neither its value for X1 nor its value for X2 is unusual. So if we examine just X1 or just X2, we will fail to notice this high leverage point. This problem is more pronounced in multiple regression settings with more than two predictors, because then there is no simple way to plot all dimensions of the data simultaneously.

In order to quantify an observation's leverage, we compute the leverage statistic. A large value of this statistic indicates an observation with high leverage. For a simple linear regression,

h_i = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{i'=1}^{n} (x_{i'} - \bar{x})^2}.   (3.37)

It is clear from this equation that h_i increases with the distance of x_i from \bar{x}. There is a simple extension of h_i to the case of multiple predictors, though we do not provide the formula here. The leverage statistic h_i is always between 1/n and 1, and the average leverage for all the observations is always equal to (p+1)/n. So if a given observation has a leverage statistic that greatly exceeds (p+1)/n, then we may suspect that the corresponding point has high leverage.

The right-hand panel of Figure 3.13 provides a plot of the studentized residuals versus h_i for the data in the left-hand panel of Figure 3.13. Observation 41 stands out as having a very high leverage statistic as well as a high studentized residual. In other words, it is an outlier as well as a high leverage observation. This is a particularly dangerous combination! This plot also reveals the reason that observation 20 had relatively little effect on the least squares fit in Figure 3.12: it has low leverage.
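As a minimal sketch (not from the text), leverage statistics are available in R through hatvalues() and can be compared with the average leverage (p + 1)/n. The Boston data from the MASS package and the cutoff of three times the average are illustrative choices, not prescriptions from the text.

> library(MASS)
> fit <- lm(medv ~ lstat + age, data=Boston)
> h <- hatvalues(fit)                        # leverage statistic for each observation
> p <- 2; n <- nrow(Boston)
> mean(h)                                    # equals (p + 1) / n
> which(h > 3 * (p + 1) / n)                 # observations with leverage well above average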

FIGURE 3.14. Scatterplots of the observations from the Credit data set. Left: A plot of age versus limit. These two variables are not collinear. Right: A plot of rating versus limit. There is high collinearity.

6. Collinearity

Collinearity refers to the situation in which two or more predictor variables are closely related to one another. The concept of collinearity is illustrated in Figure 3.14 using the Credit data set. In the left-hand panel of Figure 3.14, the two predictors limit and age appear to have no obvious relationship. In contrast, in the right-hand panel of Figure 3.14, the predictors limit and rating are very highly correlated with each other, and we say that they are collinear. The presence of collinearity can pose problems in the regression context, since it can be difficult to separate out the individual effects of collinear variables on the response. In other words, since limit and rating tend to increase or decrease together, it can be difficult to determine how each one separately is associated with the response, balance.
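As a minimal sketch (not from the text), the relationships in Figure 3.14 can be examined numerically and graphically. This assumes the Credit data set from the ISLR package, in which the variable names are capitalized.

> library(ISLR)
> cor(Credit[, c("Age", "Limit", "Rating")])     # Limit and Rating are very highly correlated
> pairs(Credit[, c("Age", "Limit", "Rating")])   # scatterplot matrix, cf. Figure 3.14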

Figure 3.15 illustrates some of the difficulties that can result from collinearity. The left-hand panel of Figure 3.15 is a contour plot of the RSS (3.22) associated with different possible coefficient estimates for the regression of balance on limit and age. Each ellipse represents a set of coefficients that correspond to the same RSS, with ellipses nearest to the center taking on the lowest values of RSS. The black dots and associated dashed lines represent the coefficient estimates that result in the smallest possible RSS; in other words, these are the least squares estimates. The axes for limit and age have been scaled so that the plot includes possible coefficient estimates that are up to four standard errors on either side of the least squares estimates. Thus the plot includes all plausible values for the coefficients. For example, we see that the true limit coefficient is almost certainly somewhere between 0.15 and 0.20.

FIGURE 3.15. Contour plots for the RSS values as a function of the parameters beta for various regressions involving the Credit data set. In each plot, the black dots represent the coefficient values corresponding to the minimum RSS. Left: A contour plot of RSS for the regression of balance onto age and limit. The minimum value is well defined. Right: A contour plot of RSS for the regression of balance onto rating and limit. Because of the collinearity, there are many pairs (beta_Limit, beta_Rating) with a similar value for RSS.

In contrast, the right-hand panel of Figure 3.15 displays contour plots of the RSS associated with possible coefficient estimates for the regression of balance onto limit and rating, which we know to be highly collinear. Now the contours run along a narrow valley; there is a broad range of values for the coefficient estimates that result in equal values for RSS. Hence a small change in the data could cause the pair of coefficient values that yield the smallest RSS (that is, the least squares estimates) to move anywhere along this valley. This results in a great deal of uncertainty in the coefficient estimates. Notice that the scale for the limit coefficient now runs from roughly -0.2 to 0.2; this is an eight-fold increase over the plausible range of the limit coefficient in the regression with age. Interestingly, even though the limit and rating coefficients now have much more individual uncertainty, they will almost certainly lie somewhere in this contour valley. For example, we would not expect the true values of the limit and rating coefficients to be -0.1 and 1 respectively, even though such a value is plausible for each coefficient individually.

TABLE 3.11. The results for two multiple regression models involving the Credit data set are shown. Model 1 is a regression of balance on age and limit, and Model 2 a regression of balance on rating and limit. The standard error of the limit coefficient increases 12-fold in the second regression, due to collinearity.

              Coefficient   Std. error   t-statistic   p-value
  Model 1
    Intercept   -173.411       43.828       -3.957     <0.0001
    age           -2.292        0.672       -3.407      0.0007
    limit          0.173        0.005       34.496     <0.0001
  Model 2
    Intercept   -377.537       45.254       -8.343     <0.0001
    rating         2.202        0.952        2.312      0.0213
    limit          0.025        0.064        0.384      0.7012

Since collinearity reduces the accuracy of the estimates of the regression coefficients, it causes the standard error for \hat{beta}_j to grow. Recall that the t-statistic for each predictor is calculated by dividing \hat{beta}_j by its standard error. Consequently, collinearity results in a decline in the t-statistic. As a result, in the presence of collinearity, we may fail to reject H_0: beta_j = 0. This means that the power of the hypothesis test (the probability of correctly detecting a non-zero coefficient) is reduced by collinearity.

Table 3.11 compares the coefficient estimates obtained from two separate multiple regression models. The first is a regression of balance on age and limit, and the second is a regression of balance on rating and limit. In the first regression, both age and limit are highly significant with very small p-values. In the second, the collinearity between limit and rating has caused the standard error for the limit coefficient estimate to increase by a factor of 12 and the p-value to increase to 0.701. In other words, the importance of the limit variable has been masked due to the presence of collinearity. To avoid such a situation, it is desirable to identify and address potential collinearity problems while fitting the model.
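As a minimal sketch (not from the text), the comparison in Table 3.11 can be reproduced approximately with the Credit data from the ISLR package, where variable names are capitalized; the exact numbers may differ slightly from the table.

> library(ISLR)
> model1 <- lm(Balance ~ Age + Limit, data=Credit)
> model2 <- lm(Balance ~ Rating + Limit, data=Credit)
> summary(model1)$coefficients     # Limit: small standard error, large t-statistic
> summary(model2)$coefficients     # Limit: much larger standard error, due to collinearity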

A simple way to detect collinearity is to look at the correlation matrix of the predictors. An element of this matrix that is large in absolute value indicates a pair of highly correlated variables, and therefore a collinearity problem in the data. Unfortunately, not all collinearity problems can be detected by inspection of the correlation matrix: it is possible for collinearity to exist between three or more variables even if no pair of variables has a particularly high correlation. We call this situation multicollinearity. Instead of inspecting the correlation matrix, a better way to assess multicollinearity is to compute the variance inflation factor (VIF). The VIF is the ratio of the variance of \hat{beta}_j when fitting the full model divided by the variance of \hat{beta}_j if fit on its own. The smallest possible value for VIF is 1, which indicates the complete absence of collinearity. Typically in practice there is a small amount of collinearity among the predictors. As a rule of thumb, a VIF value that exceeds 5 or 10 indicates a problematic amount of collinearity. The VIF for each variable can be computed using the formula

\mathrm{VIF}(\hat{\beta}_j) = \frac{1}{1 - R^2_{X_j | X_{-j}}},

where R²_{X_j|X_{-j}} is the R² from a regression of X_j onto all of the other predictors. If R²_{X_j|X_{-j}} is close to one, then collinearity is present, and so the VIF will be large.

In the Credit data, a regression of balance on age, rating, and limit indicates that the predictors have VIF values of 1.01, 160.67, and 160.59. As we suspected, there is considerable collinearity in the data!

When faced with the problem of collinearity, there are two simple solutions. The first is to drop one of the problematic variables from the regression. This can usually be done without much compromise to the regression fit, since the presence of collinearity implies that the information that this variable provides about the response is redundant in the presence of the other variables. For instance, if we regress balance onto age and limit, without the rating predictor, then the resulting VIF values are close to the minimum possible value of 1, and the R² drops from 0.754 to 0.75. So dropping rating from the set of predictors has effectively solved the collinearity problem without compromising the fit. The second solution is to combine the collinear variables together into a single predictor. For instance, we might take the average of standardized versions of limit and rating in order to create a new variable that measures credit worthiness.
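As a minimal sketch (not from the text), VIFs can be obtained with vif() from the car package and checked by hand from the definition above. The Credit data from the ISLR package is assumed, with capitalized variable names.

> library(ISLR)
> library(car)
> fit <- lm(Balance ~ Age + Rating + Limit, data=Credit)
> vif(fit)                                             # compare with 1.01, 160.67, 160.59 in the text
> r2 <- summary(lm(Limit ~ Age + Rating, data=Credit))$r.squared
> 1/(1 - r2)                                           # VIF for Limit computed from its R^2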

3.4 The Marketing Plan

We now briefly return to the seven questions about the Advertising data that we set out to answer at the beginning of this chapter.

1. Is there a relationship between advertising sales and budget?

This question can be answered by fitting a multiple regression model of sales onto TV, radio, and newspaper, as in (3.20), and testing the hypothesis H_0: beta_TV = beta_radio = beta_newspaper = 0. In Section 3.2.2, we showed that the F-statistic can be used to determine whether or not we should reject this null hypothesis. In this case the p-value corresponding to the F-statistic in Table 3.6 is very low, indicating clear evidence of a relationship between advertising and sales.

2. How strong is the relationship?

We discussed two measures of model accuracy in Section 3.1.3. First, the RSE estimates the standard deviation of the response from the population regression line. For the Advertising data, the RSE is 1,681 units while the mean value for the response is 14,022, indicating a percentage error of roughly 12%. Second, the R² statistic records the percentage of variability in the response that is explained by the predictors. The predictors explain almost 90% of the variance in sales. The RSE and R² statistics are displayed in Table 3.6.

3. Which media contribute to sales?

To answer this question, we can examine the p-values associated with each predictor's t-statistic (Section 3.1.2). In the multiple linear regression displayed in Table 3.4, the p-values for TV and radio are low, but the p-value for newspaper is not. This suggests that only TV and radio are related to sales. In Chapter 6 we explore this question in greater detail.

4. How large is the effect of each medium on sales?

We saw in Section 3.1.2 that the standard error of \hat{beta}_j can be used to construct confidence intervals for beta_j. For the Advertising data, the 95% confidence intervals are as follows: (0.043, 0.049) for TV, (0.172, 0.206) for radio, and (-0.013, 0.011) for newspaper. The confidence intervals for TV and radio are narrow and far from zero, providing evidence that these media are related to sales. But the interval for newspaper includes zero, indicating that the variable is not statistically significant given the values of TV and radio.

We saw in Section 3.3.3 that collinearity can result in very wide standard errors. Could collinearity be the reason that the confidence interval associated with newspaper is so wide? The VIF scores are 1.005, 1.145, and 1.145 for TV, radio, and newspaper, suggesting no evidence of collinearity.

In order to assess the association of each medium individually on sales, we can perform three separate simple linear regressions. Results are shown in Tables 3.1 and 3.3. There is evidence of an extremely strong association between TV and sales and between radio and sales. There is evidence of a mild association between newspaper and sales, when the values of TV and radio are ignored.

5. How accurately can we predict future sales?

The response can be predicted using (3.21). The accuracy associated with this estimate depends on whether we wish to predict an individual response, Y = f(X) + epsilon, or the average response, f(X) (Section 3.2.2). If the former, we use a prediction interval, and if the latter, we use a confidence interval. Prediction intervals will always be wider than confidence intervals because they account for the uncertainty associated with epsilon, the irreducible error.
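As a minimal sketch (not from the text), predict() produces both kinds of interval. This assumes the Advertising data have been downloaded as Advertising.csv from the book's website; the column names TV, radio, newspaper, and sales follow the text and may be capitalized differently in the file, and the specific predictor values below are arbitrary.

> Advertising <- read.csv("Advertising.csv")
> fit <- lm(sales ~ TV + radio + newspaper, data=Advertising)
> new.obs <- data.frame(TV=100, radio=20, newspaper=30)
> predict(fit, new.obs, interval="confidence")   # interval for the average response f(X)
> predict(fit, new.obs, interval="prediction")   # wider interval for an individual response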

6. Is the relationship linear?

In Section 3.3.3, we saw that residual plots can be used in order to identify non-linearity. If the relationships are linear, then the residual plots should display no pattern. In the case of the Advertising data, we observe a non-linear effect in Figure 3.5, though this effect could also be observed in a residual plot. In Section 3.3.2, we discussed the inclusion of transformations of the predictors in the linear regression model in order to accommodate non-linear relationships.

7. Is there synergy among the advertising media?

The standard linear regression model assumes an additive relationship between the predictors and the response. An additive model is easy to interpret because the effect of each predictor on the response is unrelated to the values of the other predictors. However, the additive assumption may be unrealistic for certain data sets. In Section 3.3.3, we showed how to include an interaction term in the regression model in order to accommodate non-additive relationships. A small p-value associated with the interaction term indicates the presence of such relationships. Figure 3.5 suggested that the Advertising data may not be additive. Including an interaction term in the model results in a substantial increase in R², from around 90% to almost 97%.

3.5 Comparison of Linear Regression with K-Nearest Neighbors

As discussed in Chapter 2, linear regression is an example of a parametric approach because it assumes a linear functional form for f(X). Parametric methods have several advantages. They are often easy to fit, because one need estimate only a small number of coefficients. In the case of linear regression, the coefficients have simple interpretations, and tests of statistical significance can be easily performed. But parametric methods do have a disadvantage: by construction, they make strong assumptions about the form of f(X). If the specified functional form is far from the truth, and prediction accuracy is our goal, then the parametric method will perform poorly. For instance, if we assume a linear relationship between X and Y but the true relationship is far from linear, then the resulting model will provide a poor fit to the data, and any conclusions drawn from it will be suspect.

In contrast, non-parametric methods do not explicitly assume a parametric form for f(X), and thereby provide an alternative and more flexible approach for performing regression. We discuss various non-parametric methods in this book. Here we consider one of the simplest and best-known non-parametric methods, K-nearest neighbors regression (KNN regression).

FIGURE 3.16. Plots of \hat{f}(X) using KNN regression on a two-dimensional data set with 64 observations (orange dots). Left: K = 1 results in a rough step function fit. Right: K = 9 produces a much smoother fit.

The KNN regression method is closely related to the KNN classifier discussed in Chapter 2. Given a value for K and a prediction point x_0, KNN regression first identifies the K training observations that are closest to x_0, represented by N_0. It then estimates f(x_0) using the average of all the training responses in N_0. In other words,

\hat{f}(x_0) = \frac{1}{K} \sum_{i \in \mathcal{N}_0} y_i.

Figure 3.16 illustrates two KNN fits on a data set with p = 2 predictors. The fit with K = 1 is shown in the left-hand panel, while the right-hand panel corresponds to K = 9. We see that when K = 1, the KNN fit perfectly interpolates the training observations, and consequently takes the form of a step function. When K = 9, the KNN fit still is a step function, but averaging over nine observations results in much smaller regions of constant prediction, and consequently a smoother fit. In general, the optimal value for K will depend on the bias-variance tradeoff, which we introduced in Chapter 2. A small value for K provides the most flexible fit, which will have low bias but high variance. This variance is due to the fact that the prediction in a given region is entirely dependent on just one observation. In contrast, larger values of K provide a smoother and less variable fit; the prediction in a region is an average of several points, and so changing one observation has a smaller effect. However, the smoothing may cause bias by masking some of the structure in f(X). In Chapter 5, we introduce several approaches for estimating test error rates. These methods can be used to identify the optimal value of K in KNN regression.
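As a minimal sketch (not from the text), KNN regression fits in the spirit of Figure 3.16 can be produced with knn.reg() from the FNN package, one of several R packages that implement the method; the simulated one-dimensional data here are purely illustrative.

> library(FNN)
> set.seed(1)
> x <- sort(runif(64, -1, 1))
> y <- 2 + 3*x + rnorm(64, sd=0.3)
> x.grid <- seq(-1, 1, length.out=200)
> fit1 <- knn.reg(train=matrix(x), test=matrix(x.grid), y=y, k=1)
> fit9 <- knn.reg(train=matrix(x), test=matrix(x.grid), y=y, k=9)
> plot(x, y)
> lines(x.grid, fit1$pred, col="blue")    # K = 1: rough fit that interpolates the training data
> lines(x.grid, fit9$pred, col="red")     # K = 9: smoother fit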

In what setting will a parametric approach such as least squares linear regression outperform a non-parametric approach such as KNN regression? The answer is simple: the parametric approach will outperform the non-parametric approach if the parametric form that has been selected is close to the true form of f. Figure 3.17 provides an example with data generated from a one-dimensional linear regression model. The black solid lines represent f(X), while the blue curves correspond to the KNN fits using K = 1 and K = 9. In this case, the K = 1 predictions are far too variable, while the smoother K = 9 fit is much closer to f(X). However, since the true relationship is linear, it is hard for a non-parametric approach to compete with linear regression: a non-parametric approach incurs a cost in variance that is not offset by a reduction in bias. The blue dashed line in the left-hand panel of Figure 3.18 represents the linear regression fit to the same data. It is almost perfect. The right-hand panel of Figure 3.18 reveals that linear regression outperforms KNN for this data. The green solid line, plotted as a function of 1/K, represents the test set mean squared error (MSE) for KNN. The KNN errors are well above the black dashed line, which is the test MSE for linear regression. When the value of K is large, then KNN performs only a little worse than least squares regression in terms of MSE. It performs far worse when K is small.

In practice, the true relationship between X and Y is rarely exactly linear. Figure 3.19 examines the relative performances of least squares regression and KNN under increasing levels of non-linearity in the relationship between X and Y. In the top row, the true relationship is nearly linear. In this case we see that the test MSE for linear regression is still superior to that of KNN for low values of K. However, for K >= 4, KNN outperforms linear regression. The second row illustrates a more substantial deviation from linearity. In this situation, KNN substantially outperforms linear regression for all values of K. Note that as the extent of non-linearity increases, there is little change in the test set MSE for the non-parametric KNN method, but there is a large increase in the test set MSE of linear regression.

Figures 3.18 and 3.19 display situations in which KNN performs slightly worse than linear regression when the relationship is linear, but much better than linear regression for non-linear situations. In a real life situation in which the true relationship is unknown, one might draw the conclusion that KNN should be favored over linear regression because it will at worst be slightly inferior to linear regression if the true relationship is linear, and may give substantially better results if the true relationship is non-linear. But in reality, even when the true relationship is highly non-linear, KNN may still provide inferior results to linear regression. In particular, both Figures 3.18 and 3.19 illustrate settings with p = 1 predictor. But in higher dimensions, KNN often performs worse than linear regression. Figure 3.20 considers the same strongly non-linear situation as in the second row of Figure 3.19, except that we have added additional noise predictors that are not associated with the response.

FIGURE 3.17. Plots of \hat{f}(X) using KNN regression on a one-dimensional data set with 100 observations. The true relationship is given by the black solid line. Left: The blue curve corresponds to K = 1 and interpolates (i.e. passes directly through) the training data. Right: The blue curve corresponds to K = 9, and represents a smoother fit.

FIGURE 3.18. The same data set shown in Figure 3.17 is investigated further. Left: The blue dashed line is the least squares fit to the data. Since f(X) is in fact linear (displayed as the black line), the least squares regression line provides a very good estimate of f(X). Right: The dashed horizontal line represents the least squares test set MSE, while the green solid line corresponds to the MSE for KNN as a function of 1/K (on the log scale). Linear regression achieves a lower test MSE than does KNN regression, since f(X) is in fact linear. For KNN regression, the best results occur with a very large value of K, corresponding to a small value of 1/K.
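As a minimal sketch (not from the text), the comparison summarized in Figure 3.18 can be imitated by simulating a linear relationship and comparing test MSEs. It uses knn.reg() from the FNN package (an assumption), and the simulation settings are arbitrary rather than those used for the figure.

> library(FNN)
> set.seed(1)
> x.train <- runif(100, -1, 1); y.train <- 1 + 2*x.train + rnorm(100, sd=0.3)
> x.test <- runif(1000, -1, 1); y.test <- 1 + 2*x.test + rnorm(1000, sd=0.3)
> lm.fit <- lm(y.train ~ x.train)
> lm.pred <- predict(lm.fit, data.frame(x.train=x.test))
> mean((y.test - lm.pred)^2)                      # test MSE for linear regression
> sapply(c(1, 5, 9, 25, 50), function(K) {
+   knn.pred <- knn.reg(matrix(x.train), matrix(x.test), y.train, k=K)$pred
+   mean((y.test - knn.pred)^2)                   # test MSE for KNN at each K
+ })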

FIGURE 3.19. Top Left: In a setting with a slightly non-linear relationship between X and Y (solid black line), the KNN fits with K = 1 (blue) and K = 9 (red) are displayed. Top Right: For the slightly non-linear data, the test set MSE for least squares regression (horizontal black) and KNN with various values of 1/K (green) are displayed. Bottom Left and Bottom Right: As in the top panels, but with a strongly non-linear relationship between X and Y.

When p = 1 or p = 2, KNN outperforms linear regression. But for p = 3 the results are mixed, and for p >= 4 linear regression is superior to KNN. In fact, the increase in dimension has only caused a small deterioration in the linear regression test set MSE, but it has caused more than a ten-fold increase in the MSE for KNN. This decrease in performance as the dimension increases is a common problem for KNN, and results from the fact that in higher dimensions there is effectively a reduction in sample size. In this data set there are 100 training observations; when p = 1, this provides enough information to accurately estimate f(X). However, spreading 100 observations over p = 20 dimensions results in a phenomenon in which a given observation has no nearby neighbors; this is the so-called curse of dimensionality. That is, the K observations that are nearest to a given test observation x_0 may be very far away from x_0 in p-dimensional space when p is large, leading to a very poor prediction of f(x_0) and hence a poor KNN fit.
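As a minimal sketch (not from the text), the effect illustrated in Figure 3.20 can be imitated by adding noise predictors to a simulated non-linear problem. It uses knn.reg() from the FNN package (an assumption), and the true function and settings are illustrative, not those used for the figure.

> library(FNN)
> set.seed(1)
> test.mse <- sapply(c(1, 2, 5, 10, 20), function(p) {
+   X.train <- matrix(runif(100*p, -1, 1), 100, p)
+   X.test <- matrix(runif(1000*p, -1, 1), 1000, p)
+   colnames(X.train) <- colnames(X.test) <- paste0("V", 1:p)
+   y.train <- sin(2*X.train[, 1]) + rnorm(100, sd=0.1)    # response depends only on the first variable
+   y.test <- sin(2*X.test[, 1]) + rnorm(1000, sd=0.1)
+   lm.fit <- lm(y ~ ., data=data.frame(y=y.train, X.train))
+   lm.mse <- mean((y.test - predict(lm.fit, data.frame(X.test)))^2)
+   knn.mse <- mean((y.test - knn.reg(X.train, X.test, y.train, k=9)$pred)^2)
+   c(p=p, lm=lm.mse, knn=knn.mse)
+ })
> test.mse          # KNN degrades much faster than linear regression as p grows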

FIGURE 3.20. Test MSE for linear regression (black dashed lines) and KNN (green curves) as the number of variables p increases. The true function is non-linear in the first variable, as in the lower panel in Figure 3.19, and does not depend on the additional variables. The performance of linear regression deteriorates slowly in the presence of these additional noise variables, whereas KNN's performance degrades much more quickly as p increases.

As a general rule, parametric methods will tend to outperform non-parametric approaches when there is a small number of observations per predictor.

Even in problems in which the dimension is small, we might prefer linear regression to KNN from an interpretability standpoint. If the test MSE of KNN is only slightly lower than that of linear regression, we might be willing to forego a little bit of prediction accuracy for the sake of a simple model that can be described in terms of just a few coefficients, and for which p-values are available.

3.6 Lab: Linear Regression

3.6.1 Libraries

The library() function is used to load libraries, or groups of functions and data sets that are not included in the base R distribution. Basic functions that perform least squares linear regression and other simple analyses come standard with the base distribution, but more exotic functions require additional libraries. Here we load the MASS package, which is a very large collection of data sets and functions. We also load the ISLR package, which includes the data sets associated with this book.

> library(MASS)
> library(ISLR)

If you receive an error message when loading any of these libraries, it likely indicates that the corresponding library has not yet been installed on your system. Some libraries, such as MASS, come with R and do not need to be separately installed on your computer. However, other packages, such as ISLR, must be downloaded the first time they are used.

This can be done directly from within R. For example, on a Windows system, select the Install package option under the Packages tab. After you select any mirror site, a list of available packages will appear. Simply select the package you wish to install and R will automatically download the package. Alternatively, this can be done at the R command line via install.packages("ISLR"). This installation only needs to be done the first time you use a package. However, the library() function must be called each time you wish to use a given package.

3.6.2 Simple Linear Regression

The MASS library contains the Boston data set, which records medv (median house value) for 506 neighborhoods around Boston. We will seek to predict medv using 13 predictors such as rm (average number of rooms per house), age (average age of houses), and lstat (percent of households with low socioeconomic status).

> fix(Boston)
> names(Boston)
 [1] "crim"    "zn"      "indus"   "chas"    "nox"     "rm"      "age"
 [8] "dis"     "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"

To find out more about the data set, we can type ?Boston.

We will start by using the lm() function to fit a simple linear regression model, with medv as the response and lstat as the predictor. The basic syntax is lm(y ~ x, data), where y is the response, x is the predictor, and data is the data set in which these two variables are kept.

> lm.fit=lm(medv ~ lstat)
Error in eval(expr, envir, enclos) : Object "medv" not found

The command causes an error because R does not know where to find the variables medv and lstat. The next line tells R that the variables are in Boston. If we attach Boston, the first line works fine because R now recognizes the variables.

> lm.fit=lm(medv ~ lstat, data=Boston)
> attach(Boston)
> lm.fit=lm(medv ~ lstat)

If we type lm.fit, some basic information about the model is output. For more detailed information, we use summary(lm.fit). This gives us p-values and standard errors for the coefficients, as well as the R² statistic and F-statistic for the model.

> lm.fit

Call:
lm(formula = medv ~ lstat)

Coefficients:
(Intercept)        lstat
      34.55        -0.95

> summary(lm.fit)

Call:
lm(formula = medv ~ lstat)

Residuals:
   Min     1Q Median     3Q    Max
-15.17  -3.99  -1.32   2.03  24.50

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  34.5538     0.5626    61.4   <2e-16 ***
lstat        -0.9500     0.0387   -24.5   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 6.22 on 504 degrees of freedom
Multiple R-squared: 0.544, Adjusted R-squared: 0.543
F-statistic: 602 on 1 and 504 DF, p-value: <2e-16

We can use the names() function in order to find out what other pieces of information are stored in lm.fit. Although we can extract these quantities by name (e.g. lm.fit$coefficients), it is safer to use the extractor functions like coef() to access them.

> names(lm.fit)
 [1] "coefficients"  "residuals"     "effects"
 [4] "rank"          "fitted.values" "assign"
 [7] "qr"            "df.residual"   "xlevels"
[10] "call"          "terms"         "model"
> coef(lm.fit)
(Intercept)       lstat
      34.55       -0.95

In order to obtain a confidence interval for the coefficient estimates, we can use the confint() command.

> confint(lm.fit)
             2.5 %  97.5 %
(Intercept)  33.45  35.659
lstat        -1.03  -0.874

The predict() function can be used to produce confidence intervals and prediction intervals for the prediction of medv for a given value of lstat.

> predict(lm.fit, data.frame(lstat=c(5,10,15)), interval="confidence")
    fit   lwr   upr
1 29.80 29.01 30.60
2 25.05 24.47 25.63
3 20.30 19.73 20.87

> predict(lm.fit, data.frame(lstat=c(5,10,15)), interval="prediction")
    fit    lwr   upr
1 29.80 17.566 42.04
2 25.05 12.828 37.28
3 20.30  8.078 32.53

For instance, the 95% confidence interval associated with a lstat value of 10 is (24.47, 25.63), and the 95% prediction interval is (12.828, 37.28). As expected, the confidence and prediction intervals are centered around the same point (a predicted value of 25.05 for medv when lstat equals 10), but the latter are substantially wider.

We will now plot medv and lstat along with the least squares regression line using the plot() and abline() functions.

> plot(lstat, medv)
> abline(lm.fit)

There is some evidence for non-linearity in the relationship between lstat and medv. We will explore this issue later in this lab.

The abline() function can be used to draw any line, not just the least squares regression line. To draw a line with intercept a and slope b, we type abline(a,b). Below we experiment with some additional settings for plotting lines and points. The lwd=3 command causes the width of the regression line to be increased by a factor of 3; this works for the plot() and lines() functions also. We can also use the pch option to create different plotting symbols.

> abline(lm.fit, lwd=3)
> abline(lm.fit, lwd=3, col="red")
> plot(lstat, medv, col="red")
> plot(lstat, medv, pch=20)
> plot(lstat, medv, pch="+")
> plot(1:20, 1:20, pch=1:20)

Next we examine some diagnostic plots, several of which were discussed in Section 3.3.3. Four diagnostic plots are automatically produced by applying the plot() function directly to the output from lm(). In general, this command will produce one plot at a time, and hitting Enter will generate the next plot. However, it is often convenient to view all four plots together. We can achieve this by using the par() function, which tells R to split the display screen into separate panels so that multiple plots can be viewed simultaneously. For example, par(mfrow=c(2,2)) divides the plotting region into a 2 × 2 grid of panels.

> par(mfrow=c(2,2))
> plot(lm.fit)

Alternatively, we can compute the residuals from a linear regression fit using the residuals() function. The function rstudent() will return the studentized residuals, and we can use this function to plot the residuals against the fitted values.

> plot(predict(lm.fit), residuals(lm.fit))
> plot(predict(lm.fit), rstudent(lm.fit))

On the basis of the residual plots, there is some evidence of non-linearity. Leverage statistics can be computed for any number of predictors using the hatvalues() function.

> plot(hatvalues(lm.fit))
> which.max(hatvalues(lm.fit))
375

The which.max() function identifies the index of the largest element of a vector. In this case, it tells us which observation has the largest leverage statistic.

3.6.3 Multiple Linear Regression

In order to fit a multiple linear regression model using least squares, we again use the lm() function. The syntax lm(y ~ x1+x2+x3) is used to fit a model with three predictors, x1, x2, and x3. The summary() function now outputs the regression coefficients for all the predictors.

> lm.fit=lm(medv ~ lstat+age, data=Boston)
> summary(lm.fit)

Call:
lm(formula = medv ~ lstat + age, data = Boston)

Residuals:
   Min     1Q Median     3Q    Max
-15.98  -3.98  -1.28   1.97  23.16

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  33.2228     0.7308   45.46   <2e-16 ***
lstat        -1.0321     0.0482  -21.42   <2e-16 ***
age           0.0345     0.0122    2.83   0.0049 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 6.17 on 503 degrees of freedom
Multiple R-squared: 0.551, Adjusted R-squared: 0.549
F-statistic: 309 on 2 and 503 DF, p-value: <2e-16

The Boston data set contains 13 variables, and so it would be cumbersome to have to type all of these in order to perform a regression using all of the predictors. Instead, we can use the following short-hand:

> lm.fit=lm(medv ~ ., data=Boston)
> summary(lm.fit)

Call:
lm(formula = medv ~ ., data = Boston)

Residuals:
    Min      1Q  Median      3Q     Max
-15.594  -2.730  -0.518   1.777  26.199

Coefficients:
              Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)  3.646e+01   5.103e+00    7.144  3.28e-12 ***
crim        -1.080e-01   3.286e-02   -3.287  0.001087 **
zn           4.642e-02   1.373e-02    3.382  0.000778 ***
indus        2.056e-02   6.150e-02    0.334  0.738288
chas         2.687e+00   8.616e-01    3.118  0.001925 **
nox         -1.777e+01   3.820e+00   -4.651  4.25e-06 ***
rm           3.810e+00   4.179e-01    9.116   < 2e-16 ***
age          6.922e-04   1.321e-02    0.052  0.958229
dis         -1.476e+00   1.995e-01   -7.398  6.01e-13 ***
rad          3.060e-01   6.635e-02    4.613  5.07e-06 ***
tax         -1.233e-02   3.761e-03   -3.280  0.001112 **
ptratio     -9.527e-01   1.308e-01   -7.283  1.31e-12 ***
black        9.312e-03   2.686e-03    3.467  0.000573 ***
lstat       -5.248e-01   5.072e-02  -10.347   < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.745 on 492 degrees of freedom
Multiple R-Squared: 0.7406, Adjusted R-squared: 0.7338
F-statistic: 108.1 on 13 and 492 DF, p-value: < 2.2e-16

We can access the individual components of a summary object by name (type ?summary.lm to see what is available). Hence summary(lm.fit)$r.sq gives us the R², and summary(lm.fit)$sigma gives us the RSE. The vif() function, part of the car package, can be used to compute variance inflation factors. Most VIFs are low to moderate for this data. The car package is not part of the base R installation, so it must be downloaded the first time you use it via the install.packages option in R.

> library(car)
> vif(lm.fit)
   crim      zn   indus    chas     nox      rm     age
   1.79    2.30    3.99    1.07    4.39    1.93    3.10
    dis     rad     tax ptratio   black   lstat
   3.96    7.48    9.01    1.80    1.35    2.94

What if we would like to perform a regression using all of the variables but one? For example, in the above regression output, age has a high p-value. So we may wish to run a regression excluding this predictor. The following syntax results in a regression using all predictors except age.

> lm.fit1=lm(medv ~ .-age, data=Boston)
> summary(lm.fit1)
...

Alternatively, the update() function can be used.

> lm.fit1=update(lm.fit, ~ .-age)

3.6.4 Interaction Terms

It is easy to include interaction terms in a linear model using the lm() function. The syntax lstat:black tells R to include an interaction term between lstat and black. The syntax lstat*age simultaneously includes lstat, age, and the interaction term lstat × age as predictors; it is a shorthand for lstat+age+lstat:age.

> summary(lm(medv ~ lstat*age, data=Boston))

Call:
lm(formula = medv ~ lstat * age, data = Boston)

Residuals:
   Min     1Q Median     3Q    Max
-15.81  -4.04  -1.33   2.08  27.55

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 36.088536   1.469835   24.55  < 2e-16 ***
lstat       -1.392117   0.167456   -8.31  8.8e-16 ***
age         -0.000721   0.019879   -0.04    0.971
lstat:age    0.004156   0.001852    2.24    0.025 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 6.15 on 502 degrees of freedom
Multiple R-squared: 0.556, Adjusted R-squared: 0.553
F-statistic: 209 on 3 and 502 DF, p-value: <2e-16

3.6.5 Non-linear Transformations of the Predictors

The lm() function can also accommodate non-linear transformations of the predictors. For instance, given a predictor X, we can create a predictor X² using I(X^2). The function I() is needed since the ^ has a special meaning in a formula; wrapping as we do allows the standard usage in R, which is to raise X to the power 2. We now perform a regression of medv onto lstat and lstat².

> lm.fit2=lm(medv ~ lstat+I(lstat^2))
> summary(lm.fit2)

Call:
lm(formula = medv ~ lstat + I(lstat^2))

Residuals:
   Min     1Q Median     3Q    Max
-15.28  -3.83  -0.53   2.31  25.41

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  42.86201    0.87208    49.1   <2e-16 ***
lstat        -2.33282    0.12380   -18.8   <2e-16 ***
I(lstat^2)    0.04355    0.00375    11.6   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5.52 on 503 degrees of freedom
Multiple R-squared: 0.641, Adjusted R-squared: 0.639
F-statistic: 449 on 2 and 503 DF, p-value: <2e-16

The near-zero p-value associated with the quadratic term suggests that it leads to an improved model. We use the anova() function to further quantify the extent to which the quadratic fit is superior to the linear fit.

> lm.fit=lm(medv ~ lstat)
> anova(lm.fit, lm.fit2)
Analysis of Variance Table

Model 1: medv ~ lstat
Model 2: medv ~ lstat + I(lstat^2)
  Res.Df   RSS Df Sum of Sq   F Pr(>F)
1    504 19472
2    503 15347  1      4125 135 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Here Model 1 represents the linear submodel containing only one predictor, lstat, while Model 2 corresponds to the larger quadratic model that has two predictors, lstat and lstat². The anova() function performs a hypothesis test comparing the two models. The null hypothesis is that the two models fit the data equally well, and the alternative hypothesis is that the full model is superior. Here the F-statistic is 135 and the associated p-value is virtually zero. This provides very clear evidence that the model containing the predictors lstat and lstat² is far superior to the model that only contains the predictor lstat. This is not surprising, since earlier we saw evidence for non-linearity in the relationship between medv and lstat. If we type

> par(mfrow=c(2,2))
> plot(lm.fit2)

then we see that when the lstat² term is included in the model, there is little discernible pattern in the residuals.

In order to create a cubic fit, we can include a predictor of the form I(X^3). However, this approach can start to get cumbersome for higher-order polynomials. A better approach involves using the poly() function to create the polynomial within lm(). For example, the following command produces a fifth-order polynomial fit:

> lm.fit5=lm(medv ~ poly(lstat,5))
> summary(lm.fit5)

Call:
lm(formula = medv ~ poly(lstat, 5))

Residuals:
    Min      1Q  Median      3Q     Max
-13.543  -3.104  -0.705   2.084  27.115

Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
(Intercept)        22.533      0.232   97.20  < 2e-16 ***
poly(lstat, 5)1  -152.460      5.215  -29.24  < 2e-16 ***
poly(lstat, 5)2    64.227      5.215   12.32  < 2e-16 ***
poly(lstat, 5)3   -27.051      5.215   -5.19  3.1e-07 ***
poly(lstat, 5)4    25.452      5.215    4.88  1.4e-06 ***
poly(lstat, 5)5   -19.252      5.215   -3.69  0.00025 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5.21 on 500 degrees of freedom
Multiple R-squared: 0.682, Adjusted R-squared: 0.679
F-statistic: 214 on 5 and 500 DF, p-value: <2e-16

This suggests that including additional polynomial terms, up to fifth order, leads to an improvement in the model fit! However, further investigation of the data reveals that no polynomial terms beyond fifth order have significant p-values in a regression fit.

Of course, we are in no way restricted to using polynomial transformations of the predictors. Here we try a log transformation.

> summary(lm(medv ~ log(rm), data=Boston))
...

3.6.6 Qualitative Predictors

We will now examine the Carseats data, which is part of the ISLR library. We will attempt to predict Sales (child car seat sales) in 400 locations based on a number of predictors.

> fix(Carseats)
> names(Carseats)
 [1] "Sales"       "CompPrice"   "Income"      "Advertising"
 [5] "Population"  "Price"       "ShelveLoc"   "Age"
 [9] "Education"   "Urban"       "US"

The Carseats data includes qualitative predictors such as ShelveLoc, an indicator of the quality of the shelving location (that is, the space within a store in which the car seat is displayed) at each location. The predictor ShelveLoc takes on three possible values, Bad, Medium, and Good.

Given a qualitative variable such as ShelveLoc, R generates dummy variables automatically. Below we fit a multiple regression model that includes some interaction terms.

> lm.fit=lm(Sales ~ .+Income:Advertising+Price:Age, data=Carseats)
> summary(lm.fit)

Call:
lm(formula = Sales ~ . + Income:Advertising + Price:Age, data = Carseats)

Residuals:
   Min     1Q Median     3Q    Max
-2.921 -0.750  0.018  0.675  3.341

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)
(Intercept)          6.575565   1.008747    6.52  2.2e-10 ***
CompPrice            0.092937   0.004118   22.57  < 2e-16 ***
Income               0.010894   0.002604    4.18  3.6e-05 ***
Advertising          0.070246   0.022609    3.11  0.00203 **
Population           0.000159   0.000368    0.43  0.66533
Price               -0.100806   0.007440  -13.55  < 2e-16 ***
ShelveLocGood        4.848676   0.152838   31.72  < 2e-16 ***
ShelveLocMedium      1.953262   0.125768   15.53  < 2e-16 ***
Age                 -0.057947   0.015951   -3.63  0.00032 ***
Education           -0.020852   0.019613   -1.06  0.28836
UrbanYes             0.140160   0.112402    1.25  0.21317
USYes               -0.157557   0.148923   -1.06  0.29073
Income:Advertising   0.000751   0.000278    2.70  0.00729 **
Price:Age            0.000107   0.000133    0.80  0.42381
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.01 on 386 degrees of freedom
Multiple R-squared: 0.876, Adjusted R-squared: 0.872
F-statistic: 210 on 13 and 386 DF, p-value: <2e-16

The contrasts() function returns the coding that R uses for the dummy variables.

> attach(Carseats)
> contrasts(ShelveLoc)
       Good Medium
Bad       0      0
Good      1      0
Medium    0      1

Use ?contrasts to learn about other contrasts, and how to set them.

R has created a ShelveLocGood dummy variable that takes on a value of 1 if the shelving location is good, and 0 otherwise. It has also created a ShelveLocMedium dummy variable that equals 1 if the shelving location is medium, and 0 otherwise. A bad shelving location corresponds to a zero for each of the two dummy variables. The fact that the coefficient for ShelveLocGood in the regression output is positive indicates that a good shelving location is associated with high sales (relative to a bad location).

And ShelveLocMedium has a smaller positive coefficient, indicating that a medium shelving location leads to higher sales than a bad shelving location but lower sales than a good shelving location.

3.6.7 Writing Functions

As we have seen, R comes with many useful functions, and still more functions are available by way of R libraries. However, we will often be interested in performing an operation for which no function is available. In this setting, we may want to write our own function. For instance, below we provide a simple function, called LoadLibraries(), that reads in the ISLR and MASS libraries. Before we have created the function, R returns an error if we try to call it.

> LoadLibraries
Error: object 'LoadLibraries' not found
> LoadLibraries()
Error: could not find function "LoadLibraries"

We now create the function. Note that the + symbols are printed by R and should not be typed in. The { symbol informs R that multiple commands are about to be input. Hitting Enter after typing { will cause R to print the + symbol. We can then input as many commands as we wish, hitting Enter after each one. Finally the } symbol informs R that no further commands will be entered.

> LoadLibraries=function(){
+ library(ISLR)
+ library(MASS)
+ print("The libraries have been loaded.")
+ }

Now if we type in LoadLibraries, R will tell us what is in the function.

> LoadLibraries
function(){
 library(ISLR)
 library(MASS)
 print("The libraries have been loaded.")
}

If we call the function, the libraries are loaded in and the print statement is output.

> LoadLibraries()
[1] "The libraries have been loaded."

3.7 Exercises

Conceptual

1. Describe the null hypotheses to which the p-values given in Table 3.4 correspond. Explain what conclusions you can draw based on these p-values. Your explanation should be phrased in terms of sales, TV, radio, and newspaper, rather than in terms of the coefficients of the linear model.

2. Carefully explain the differences between the KNN classifier and KNN regression methods.

3. Suppose we have a data set with five predictors, X1 = GPA, X2 = IQ, X3 = Gender (1 for Female and 0 for Male), X4 = Interaction between GPA and IQ, and X5 = Interaction between GPA and Gender. The response is starting salary after graduation (in thousands of dollars). Suppose we use least squares to fit the model, and get \hat{beta}_0 = 50, \hat{beta}_1 = 20, \hat{beta}_2 = 0.07, \hat{beta}_3 = 35, \hat{beta}_4 = 0.01, \hat{beta}_5 = -10.

(a) Which answer is correct, and why?
  i. For a fixed value of IQ and GPA, males earn more on average than females.
  ii. For a fixed value of IQ and GPA, females earn more on average than males.
  iii. For a fixed value of IQ and GPA, males earn more on average than females provided that the GPA is high enough.
  iv. For a fixed value of IQ and GPA, females earn more on average than males provided that the GPA is high enough.

(b) Predict the salary of a female with IQ of 110 and a GPA of 4.0.

(c) True or false: Since the coefficient for the GPA/IQ interaction term is very small, there is very little evidence of an interaction effect. Justify your answer.

4. I collect a set of data (n = 100 observations) containing a single predictor and a quantitative response. I then fit a linear regression model to the data, as well as a separate cubic regression, i.e. Y = beta_0 + beta_1 X + beta_2 X² + beta_3 X³ + epsilon.

(a) Suppose that the true relationship between X and Y is linear, i.e. Y = beta_0 + beta_1 X + epsilon. Consider the training residual sum of squares (RSS) for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.

(b) Answer (a) using test rather than training RSS.

(c) Suppose that the true relationship between X and Y is not linear, but we don't know how far it is from linear. Consider the training RSS for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.

(d) Answer (c) using test rather than training RSS.

5. Consider the fitted values that result from performing linear regression without an intercept. In this setting, the ith fitted value takes the form

\hat{y}_i = x_i \hat{\beta},

where

\hat{\beta} = \left( \sum_{i=1}^{n} x_i y_i \right) \Big/ \left( \sum_{i'=1}^{n} x_{i'}^2 \right).   (3.38)

Show that we can write

\hat{y}_i = \sum_{i'=1}^{n} a_{i'} y_{i'}.

What is a_{i'}?

Note: We interpret this result by saying that the fitted values from linear regression are linear combinations of the response values.

6. Using (3.4), argue that in the case of simple linear regression, the least squares line always passes through the point (\bar{x}, \bar{y}).

7. It is claimed in the text that in the case of simple linear regression of Y onto X, the R² statistic (3.17) is equal to the square of the correlation between X and Y (3.18). Prove that this is the case. For simplicity, you may assume that \bar{x} = \bar{y} = 0.

Applied

8. This question involves the use of simple linear regression on the Auto data set.

(a) Use the lm() function to perform a simple linear regression with mpg as the response and horsepower as the predictor. Use the summary() function to print the results. Comment on the output. For example:

i. Is there a relationship between the predictor and the response?
ii. How strong is the relationship between the predictor and the response?
iii. Is the relationship between the predictor and the response positive or negative?
iv. What is the predicted mpg associated with a horsepower of 98? What are the associated 95% confidence and prediction intervals?

(b) Plot the response and the predictor. Use the abline() function to display the least squares regression line.

(c) Use the plot() function to produce diagnostic plots of the least squares regression fit. Comment on any problems you see with the fit.

9. This question involves the use of multiple linear regression on the Auto data set.

(a) Produce a scatterplot matrix which includes all of the variables in the data set.

(b) Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.

(c) Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results. Comment on the output. For instance:
  i. Is there a relationship between the predictors and the response?
  ii. Which predictors appear to have a statistically significant relationship to the response?
  iii. What does the coefficient for the year variable suggest?

(d) Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?

(e) Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?

(f) Try a few different transformations of the variables, such as log(X), sqrt(X), X². Comment on your findings.

10. This question should be answered using the Carseats data set.

(a) Fit a multiple regression model to predict Sales using Price, US, and Urban.

(b) Provide an interpretation of each coefficient in the model. Be careful: some of the variables in the model are qualitative!

(c) Write out the model in equation form, being careful to handle the qualitative variables properly.

(d) For which of the predictors can you reject the null hypothesis H_0: beta_j = 0?

(e) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.

(f) How well do the models in (a) and (e) fit the data?

(g) Using the model from (e), obtain 95% confidence intervals for the coefficient(s).

(h) Is there evidence of outliers or high leverage observations in the model from (e)?

11. In this problem we will investigate the t-statistic for the null hypothesis H_0: beta = 0 in simple linear regression without an intercept. To begin, we generate a predictor x and a response y as follows.

> set.seed(1)
> x=rnorm(100)
> y=2*x+rnorm(100)

(a) Perform a simple linear regression of y onto x, without an intercept. Report the coefficient estimate \hat{beta}, the standard error of this coefficient estimate, and the t-statistic and p-value associated with the null hypothesis H_0: beta = 0. Comment on these results. (You can perform regression without an intercept using the command lm(y ~ x+0).)

(b) Now perform a simple linear regression of x onto y without an intercept, and report the coefficient estimate, its standard error, and the corresponding t-statistic and p-values associated with the null hypothesis H_0: beta = 0. Comment on these results.

(c) What is the relationship between the results obtained in (a) and (b)?

(d) For the regression of Y onto X without an intercept, the t-statistic for H_0: beta = 0 takes the form \hat{beta}/SE(\hat{beta}), where \hat{beta} is given by (3.38), and where

\mathrm{SE}(\hat{\beta}) = \sqrt{ \frac{\sum_{i=1}^{n} (y_i - x_i \hat{\beta})^2}{(n-1) \sum_{i'=1}^{n} x_{i'}^2} }.

(These formulas are slightly different from those given in Sections 3.1.1 and 3.1.2, since here we are performing regression without an intercept.) Show algebraically, and confirm numerically in R, that the t-statistic can be written as

\frac{\sqrt{n-1}\,\sum_{i=1}^{n} x_i y_i}{\sqrt{\left(\sum_{i'=1}^{n} x_{i'}^2\right)\left(\sum_{i'=1}^{n} y_{i'}^2\right) - \left(\sum_{i'=1}^{n} x_{i'} y_{i'}\right)^2}}.

(e) Using the results from (d), argue that the t-statistic for the regression of y onto x is the same as the t-statistic for the regression of x onto y.

(f) In R, show that when regression is performed with an intercept, the t-statistic for H_0: beta_1 = 0 is the same for the regression of y onto x as it is for the regression of x onto y.

12. This problem involves simple linear regression without an intercept.

(a) Recall that the coefficient estimate \hat{beta} for the linear regression of Y onto X without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X?

(b) Generate an example in R with n = 100 observations in which the coefficient estimate for the regression of X onto Y is different from the coefficient estimate for the regression of Y onto X.

(c) Generate an example in R with n = 100 observations in which the coefficient estimate for the regression of X onto Y is the same as the coefficient estimate for the regression of Y onto X.

13. In this exercise you will create some simulated data and will fit simple linear regression models to it. Make sure to use set.seed(1) prior to starting part (a) to ensure consistent results.

(a) Using the rnorm() function, create a vector, x, containing 100 observations drawn from a N(0, 1) distribution. This represents a feature, X.

(b) Using the rnorm() function, create a vector, eps, containing 100 observations drawn from a N(0, 0.25) distribution, i.e. a normal distribution with mean zero and variance 0.25.

(c) Using x and eps, generate a vector y according to the model

Y = -1 + 0.5X + \epsilon.   (3.39)

What is the length of the vector y? What are the values of beta_0 and beta_1 in this linear model?

(d) Create a scatterplot displaying the relationship between x and y. Comment on what you observe.
(e) Fit a least squares linear model to predict y using x. Comment on the model obtained. How do β̂0 and β̂1 compare to β0 and β1?
(f) Display the least squares line on the scatterplot obtained in (d). Draw the population regression line on the plot, in a different color. Use the legend() command to create an appropriate legend.
(g) Now fit a polynomial regression model that predicts y using x and x². Is there evidence that the quadratic term improves the model fit? Explain your answer.
(h) Repeat (a)–(f) after modifying the data generation process in such a way that there is less noise in the data. The model (3.39) should remain the same. You can do this by decreasing the variance of the normal distribution used to generate the error term ε in (b). Describe your results.
(i) Repeat (a)–(f) after modifying the data generation process in such a way that there is more noise in the data. The model (3.39) should remain the same. You can do this by increasing the variance of the normal distribution used to generate the error term ε in (b). Describe your results.
(j) What are the confidence intervals for β0 and β1 based on the original data set, the noisier data set, and the less noisy data set? Comment on your results.

14. This problem focuses on the collinearity problem.
(a) Perform the following commands in R:

> set.seed(1)
> x1=runif(100)
> x2=0.5*x1+rnorm(100)/10
> y=2+2*x1+0.3*x2+rnorm(100)

The last line corresponds to creating a linear model in which y is a function of x1 and x2. Write out the form of the linear model. What are the regression coefficients?
(b) What is the correlation between x1 and x2? Create a scatterplot displaying the relationship between the variables.
(c) Using this data, fit a least squares regression to predict y using x1 and x2. Describe the results obtained. What are β̂0, β̂1, and β̂2? How do these relate to the true β0, β1, and β2? Can you reject the null hypothesis H0 : β1 = 0? How about the null hypothesis H0 : β2 = 0?

(d) Now fit a least squares regression to predict y using only x1. Comment on your results. Can you reject the null hypothesis H0 : β1 = 0?
(e) Now fit a least squares regression to predict y using only x2. Comment on your results. Can you reject the null hypothesis H0 : β1 = 0?
(f) Do the results obtained in (c)–(e) contradict each other? Explain your answer.
(g) Now suppose we obtain one additional observation, which was unfortunately mismeasured.

> x1=c(x1, 0.1)
> x2=c(x2, 0.8)
> y=c(y,6)

Re-fit the linear models from (c) to (e) using this new data. What effect does this new observation have on each of the models? In each model, is this observation an outlier? A high-leverage point? Both? Explain your answers.

15. This problem involves the Boston data set, which we saw in the lab for this chapter. We will now try to predict per capita crime rate using the other variables in this data set. In other words, per capita crime rate is the response, and the other variables are the predictors.
(a) For each predictor, fit a simple linear regression model to predict the response. Describe your results. In which of the models is there a statistically significant association between the predictor and the response? Create some plots to back up your assertions.
(b) Fit a multiple regression model to predict the response using all of the predictors. Describe your results. For which predictors can we reject the null hypothesis H0 : βj = 0?
(c) How do your results from (a) compare to your results from (b)? Create a plot displaying the univariate regression coefficients from (a) on the x-axis, and the multiple regression coefficients from (b) on the y-axis. That is, each predictor is displayed as a single point in the plot. Its coefficient in a simple linear regression model is shown on the x-axis, and its coefficient estimate in the multiple linear regression model is shown on the y-axis.
(d) Is there evidence of non-linear association between any of the predictors and the response? To answer this question, for each predictor X, fit a model of the form

$$Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 + \epsilon.$$
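The following is a minimal R sketch, not part of the exercises themselves, showing one way the identity in Exercise 11(d) could be checked numerically. It regenerates the x and y from Exercise 11 and compares the hand-computed t-statistic with the closed-form expression and with the value reported by lm() for the no-intercept fit; it is a sketch under those assumptions, not a worked solution.

# Sketch: numerically confirm the t-statistic formula from Exercise 11(d)
set.seed(1)
x <- rnorm(100)
y <- 2 * x + rnorm(100)
n <- length(x)

# Coefficient estimate and standard error for regression without an intercept
beta.hat <- sum(x * y) / sum(x^2)
se.beta  <- sqrt(sum((y - x * beta.hat)^2) / ((n - 1) * sum(x^2)))
t.manual <- beta.hat / se.beta

# Equivalent closed-form expression from part (d)
t.formula <- sqrt(n - 1) * sum(x * y) /
  sqrt(sum(x^2) * sum(y^2) - sum(x * y)^2)

# Compare with the t-statistic reported by lm() for the no-intercept fit
fit <- lm(y ~ x + 0)
c(manual = t.manual, formula = t.formula,
  lm = summary(fit)$coefficients[1, "t value"])

All three quantities should agree, since the closed form follows from substituting the residual sum of squares for the no-intercept fit into β̂/SE(β̂).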

4 Classification

The linear regression model discussed in Chapter 3 assumes that the response variable Y is quantitative. But in many situations, the response variable is instead qualitative. For example, eye color is qualitative, taking on values blue, brown, or green. Often qualitative variables are referred to as categorical; we will use these terms interchangeably. In this chapter, we study approaches for predicting qualitative responses, a process that is known as classification. Predicting a qualitative response for an observation can be referred to as classifying that observation, since it involves assigning the observation to a category, or class. On the other hand, often the methods used for classification first predict the probability of each of the categories of a qualitative variable, as the basis for making the classification. In this sense they also behave like regression methods.

There are many possible classification techniques, or classifiers, that one might use to predict a qualitative response. We touched on some of these in Sections 2.1.5 and 2.2.3. In this chapter we discuss three of the most widely-used classifiers: logistic regression, linear discriminant analysis, and K-nearest neighbors. We discuss more computer-intensive methods in later chapters, such as generalized additive models (Chapter 7), trees, random forests, and boosting (Chapter 8), and support vector machines (Chapter 9).

4.1 An Overview of Classification

Classification problems occur often, perhaps even more so than regression problems. Some examples include:

1. A person arrives at the emergency room with a set of symptoms that could possibly be attributed to one of three medical conditions. Which of the three conditions does the individual have?
2. An online banking service must be able to determine whether or not a transaction being performed on the site is fraudulent, on the basis of the user's IP address, past transaction history, and so forth.
3. On the basis of DNA sequence data for a number of patients with and without a given disease, a biologist would like to figure out which DNA mutations are deleterious (disease-causing) and which are not.

Just as in the regression setting, in the classification setting we have a set of training observations (x1, y1), ..., (xn, yn) that we can use to build a classifier. We want our classifier to perform well not only on the training data, but also on test observations that were not used to train the classifier.

In this chapter, we will illustrate the concept of classification using the simulated Default data set. We are interested in predicting whether an individual will default on his or her credit card payment, on the basis of annual income and monthly credit card balance. The data set is displayed in Figure 4.1. We have plotted annual income and monthly credit card balance for a subset of 10,000 individuals. The left-hand panel of Figure 4.1 displays individuals who defaulted in a given month in orange, and those who did not in blue. (The overall default rate is about 3 %, so we have plotted only a fraction of the individuals who did not default.) It appears that individuals who defaulted tended to have higher credit card balances than those who did not. In the right-hand panel of Figure 4.1, two pairs of boxplots are shown. The first shows the distribution of balance split by the binary default variable; the second is a similar plot for income. In this chapter, we learn how to build a model to predict default (Y) for any given value of balance (X1) and income (X2). Since Y is not quantitative, the simple linear regression model of Chapter 3 is not appropriate.

It is worth noting that Figure 4.1 displays a very pronounced relationship between the predictor balance and the response default. In most real applications, the relationship between the predictor and the response will not be nearly so strong. However, for the sake of illustrating the classification procedures discussed in this chapter, we use an example in which the relationship between the predictor and the response is somewhat exaggerated.
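As a minimal R sketch (not from the text), plots along the lines of Figure 4.1 could be produced from the Default data, assuming it is loaded from the ISLR package; the exact colors, symbols, and subsampling used in the book's figure are not reproduced here.

library(ISLR)
# Scatterplot of balance vs. income, colored by default status (cf. Figure 4.1, left)
plot(Default$balance, Default$income,
     col = ifelse(Default$default == "Yes", "orange", "blue"),
     pch = 20, xlab = "Balance", ylab = "Income")
# Boxplots of balance and income split by default status (cf. Figure 4.1, center and right)
boxplot(balance ~ default, data = Default, xlab = "Default", ylab = "Balance")
boxplot(income ~ default, data = Default, xlab = "Default", ylab = "Income")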

FIGURE 4.1. The Default data set. Left: The annual incomes and monthly credit card balances of a number of individuals. The individuals who defaulted on their credit card payments are shown in orange, and those who did not are shown in blue. Center: Boxplots of balance as a function of default status. Right: Boxplots of income as a function of default status.

4.2 Why Not Linear Regression?

We have stated that linear regression is not appropriate in the case of a qualitative response. Why not?

Suppose that we are trying to predict the medical condition of a patient in the emergency room on the basis of her symptoms. In this simplified example, there are three possible diagnoses: stroke, drug overdose, and epileptic seizure. We could consider encoding these values as a quantitative response variable, Y, as follows:

$$Y = \begin{cases} 1 & \text{if stroke;} \\ 2 & \text{if drug overdose;} \\ 3 & \text{if epileptic seizure.} \end{cases}$$

Using this coding, least squares could be used to fit a linear regression model to predict Y on the basis of a set of predictors X1, ..., Xp. Unfortunately, this coding implies an ordering on the outcomes, putting drug overdose in between stroke and epileptic seizure, and insisting that the difference between stroke and drug overdose is the same as the difference between drug overdose and epileptic seizure. In practice there is no particular reason that this needs to be the case. For instance, one could choose an equally reasonable coding,

$$Y = \begin{cases} 1 & \text{if epileptic seizure;} \\ 2 & \text{if stroke;} \\ 3 & \text{if drug overdose,} \end{cases}$$

which would imply a totally different relationship among the three conditions. Each of these codings would produce fundamentally different linear models that would ultimately lead to different sets of predictions on test observations.

If the response variable's values did take on a natural ordering, such as mild, moderate, and severe, and we felt the gap between mild and moderate was similar to the gap between moderate and severe, then a 1, 2, 3 coding would be reasonable. Unfortunately, in general there is no natural way to convert a qualitative response variable with more than two levels into a quantitative response that is ready for linear regression.

For a binary (two level) qualitative response, the situation is better. For instance, perhaps there are only two possibilities for the patient's medical condition: stroke and drug overdose. We could then potentially use the dummy variable approach from Section 3.3.1 to code the response as follows:

$$Y = \begin{cases} 0 & \text{if stroke;} \\ 1 & \text{if drug overdose.} \end{cases}$$

We could then fit a linear regression to this binary response, and predict drug overdose if Ŷ > 0.5 and stroke otherwise. In the binary case it is not hard to show that even if we flip the above coding, linear regression will produce the same final predictions.

For a binary response with a 0/1 coding as above, regression by least squares does make sense; it can be shown that the Xβ̂ obtained using linear regression is in fact an estimate of Pr(drug overdose | X) in this special case. However, if we use linear regression, some of our estimates might be outside the [0, 1] interval (see Figure 4.2), making them hard to interpret as probabilities! Nevertheless, the predictions provide an ordering and can be interpreted as crude probability estimates. Curiously, it turns out that the classifications that we get if we use linear regression to predict a binary response will be the same as for the linear discriminant analysis (LDA) procedure we discuss in Section 4.4.

However, the dummy variable approach cannot be easily extended to accommodate qualitative responses with more than two levels. For these reasons, it is preferable to use a classification method that is truly suited for qualitative response values, such as the ones presented next.

4.3 Logistic Regression

Consider again the Default data set, where the response default falls into one of two categories, Yes or No. Rather than modeling this response Y directly, logistic regression models the probability that Y belongs to a particular category.
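A minimal R sketch (not from the text) of the comparison behind Figure 4.2: fitting a straight line to a 0/1-coded default response versus fitting a logistic regression, again assuming the Default data from the ISLR package. The specific fitted values will depend on the data, but the linear fit can produce fitted "probabilities" below 0.

library(ISLR)
# Straight-line (least squares) fit to the 0/1-coded response
lin.fit <- lm(as.numeric(default == "Yes") ~ balance, data = Default)
range(fitted(lin.fit))     # some fitted values fall below 0

# Logistic regression fit to the same data
log.fit <- glm(default ~ balance, data = Default, family = binomial)
range(fitted(log.fit))     # all fitted probabilities lie between 0 and 1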

|| |||| || | | || || || | || | ||| | || || || | || || || || | | || | || | || ||| | ||| |||| | || || | || || || || ||| | || | || | || | | | | || || | || | || ||| || | | | ||||| | | || ||| || | || | |||| || | || || || | || | ||||| || | || | | ||| | || | || || || || || | || || || | || | ||| || | | || | || | || | | |||| | | || | || | 0.0 0.0 500 2000 2500 1000 1500 0 500 1000 2000 2500 0 1500 Balance Balance FIGURE 4.2. Default data. Left: Estimated probabil- Classification using the using linear regression. Some estimated probabilities are negative! ity of default ( No or Yes ). Right: default The orange ticks indicate the 0/1 values coded for default using logistic regression. All probabilities lie Predicted probabilities of between 0 1 . and Default For the data, logistic regression models the probability of default. For example, the probability of default given balance can be written as default = Yes | balance . ) Pr( The values of Pr( default = Yes balance ), which we abbreviate | p ( balance ), will range between 0 and 1. Then for any given value of balance , default . For example, one might predict a prediction can be made for = 5. Alterna- for any individual for whom p ( balance ) > 0 . default Yes tively, if a company wishes to be conservative in predicting individuals who are at risk for default, then they may choose to use a lower threshold, such p ( balance ) as 0 . 1. > 4.3.1 The Logistic Model How should we model the relationship between p ( X )=Pr( Y =1 | X )and X ? (For convenience we are using the generic 0/1 coding for the response). In Section 4.2 we talked of using a linear regression model to represent these probabilities: p ( X )= β (4.1) + β X. 1 0 ,thenwe default = Yes using balance If we use this approach to predict obtain the model shown in the left-hand panel of Figure 4.2. Here we see the problem with this approach: for balances close to zero we predict a negative probability of default; if we were to predict for very large balances, we would get values bigger than 1. These predictions are not sensible, since of course the true probability of default, regardless of credit card balance, must fall between 0 and 1. This problem is not unique to the credit default data. Any time a straight line is fit to a binary response that is coded as

0 or 1, in principle we can always predict p(X) < 0 for some values of X and p(X) > 1 for others (unless the range of X is limited).

To avoid this problem, we must model p(X) using a function that gives outputs between 0 and 1 for all values of X. Many functions meet this description. In logistic regression, we use the logistic function,

$$p(X) = \frac{e^{\beta_0 + \beta_1 X}}{1 + e^{\beta_0 + \beta_1 X}}. \qquad (4.2)$$

To fit the model (4.2), we use a method called maximum likelihood, which we discuss in the next section. The right-hand panel of Figure 4.2 illustrates the fit of the logistic regression model to the Default data. Notice that for low balances we now predict the probability of default as close to, but never below, zero. Likewise, for high balances we predict a default probability close to, but never above, one. The logistic function will always produce an S-shaped curve of this form, and so regardless of the value of X, we will obtain a sensible prediction. We also see that the logistic model is better able to capture the range of probabilities than is the linear regression model in the left-hand plot. The average fitted probability in both cases is 0.0333 (averaged over the training data), which is the same as the overall proportion of defaulters in the data set.

After a bit of manipulation of (4.2), we find that

$$\frac{p(X)}{1 - p(X)} = e^{\beta_0 + \beta_1 X}. \qquad (4.3)$$

The quantity p(X)/[1 − p(X)] is called the odds, and can take on any value between 0 and ∞. Values of the odds close to 0 and ∞ indicate very low and very high probabilities of default, respectively. For example, on average 1 in 5 people with an odds of 1/4 will default, since p(X) = 0.2 implies an odds of 0.2/(1 − 0.2) = 1/4. Likewise, on average nine out of every ten people with an odds of 9 will default, since p(X) = 0.9 implies an odds of 0.9/(1 − 0.9) = 9. Odds are traditionally used instead of probabilities in horse-racing, since they relate more naturally to the correct betting strategy.

By taking the logarithm of both sides of (4.3), we arrive at

$$\log\left(\frac{p(X)}{1 - p(X)}\right) = \beta_0 + \beta_1 X. \qquad (4.4)$$

The left-hand side is called the log-odds or logit. We see that the logistic regression model (4.2) has a logit that is linear in X.

Recall from Chapter 3 that in a linear regression model, β1 gives the average change in Y associated with a one-unit increase in X. In contrast, in a logistic regression model, increasing X by one unit changes the log odds by β1 (4.4), or equivalently it multiplies the odds by e^β1 (4.3). However, because the relationship between p(X) and X in (4.2) is not a straight line,

β1 does not correspond to the change in p(X) associated with a one-unit increase in X. The amount that p(X) changes due to a one-unit change in X will depend on the current value of X. But regardless of the value of X, if β1 is positive then increasing X will be associated with increasing p(X), and if β1 is negative then increasing X will be associated with decreasing p(X). The fact that there is not a straight-line relationship between p(X) and X, and the fact that the rate of change in p(X) per unit change in X depends on the current value of X, can also be seen by inspection of the right-hand panel of Figure 4.2.

4.3.2 Estimating the Regression Coefficients

The coefficients β0 and β1 in (4.2) are unknown, and must be estimated based on the available training data. In Chapter 3, we used the least squares approach to estimate the unknown linear regression coefficients. Although we could use (non-linear) least squares to fit the model (4.4), the more general method of maximum likelihood is preferred, since it has better statistical properties. The basic intuition behind using maximum likelihood to fit a logistic regression model is as follows: we seek estimates for β0 and β1 such that the predicted probability p̂(xi) of default for each individual, using (4.2), corresponds as closely as possible to the individual's observed default status. In other words, we try to find β̂0 and β̂1 such that plugging these estimates into the model for p(X), given in (4.2), yields a number close to one for all individuals who defaulted, and a number close to zero for all individuals who did not. This intuition can be formalized using a mathematical equation called a likelihood function:

$$\ell(\beta_0, \beta_1) = \prod_{i : y_i = 1} p(x_i) \prod_{i' : y_{i'} = 0} \left(1 - p(x_{i'})\right). \qquad (4.5)$$

The estimates β̂0 and β̂1 are chosen to maximize this likelihood function.

Maximum likelihood is a very general approach that is used to fit many of the non-linear models that we examine throughout this book. In the linear regression setting, the least squares approach is in fact a special case of maximum likelihood. The mathematical details of maximum likelihood are beyond the scope of this book. However, in general, logistic regression and other models can be easily fit using a statistical software package such as R, and so we do not need to concern ourselves with the details of the maximum likelihood fitting procedure.

Table 4.1 shows the coefficient estimates and related information that result from fitting a logistic regression model on the Default data in order to predict the probability of default = Yes using balance. We see that β̂1 = 0.0055; this indicates that an increase in balance is associated with an increase in the probability of default. To be precise, a one-unit increase in balance is associated with an increase in the log odds of default by 0.0055 units.
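The following R sketch (not part of the text) shows how a fit of this kind might be obtained, and how predicted probabilities such as those discussed in Section 4.3.3 could be computed; it assumes the Default data are available from the ISLR package, and the numbers it prints are not reproduced here.

library(ISLR)
# Fit the logistic regression of default on balance (cf. Table 4.1)
fit <- glm(default ~ balance, data = Default, family = binomial)
summary(fit)$coefficients   # estimates, standard errors, z-statistics, p-values

# Predicted probabilities of default for balances of $1,000 and $2,000
predict(fit, newdata = data.frame(balance = c(1000, 2000)), type = "response")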

              Coefficient   Std. error   Z-statistic   P-value
Intercept      -10.6513     0.3612       -29.5         <0.0001
balance          0.0055     0.0002        24.9         <0.0001

TABLE 4.1. For the Default data, estimated coefficients of the logistic regression model that predicts the probability of default using balance. A one-unit increase in balance is associated with an increase in the log odds of default by 0.0055 units.

Many aspects of the logistic regression output shown in Table 4.1 are similar to the linear regression output of Chapter 3. For example, we can measure the accuracy of the coefficient estimates by computing their standard errors. The z-statistic in Table 4.1 plays the same role as the t-statistic in the linear regression output, for example in Table 3.1 on page 68. For instance, the z-statistic associated with β1 is equal to β̂1/SE(β̂1), and so a large (absolute) value of the z-statistic indicates evidence against the null hypothesis H0 : β1 = 0. This null hypothesis implies that p(X) = e^β0/(1 + e^β0); in other words, that the probability of default does not depend on balance. Since the p-value associated with balance in Table 4.1 is tiny, we can reject H0. In other words, we conclude that there is indeed an association between balance and the probability of default. The estimated intercept in Table 4.1 is typically not of interest; its main purpose is to adjust the average fitted probabilities to the proportion of ones in the data.

4.3.3 Making Predictions

Once the coefficients have been estimated, it is a simple matter to compute the probability of default for any given credit card balance. For example, using the coefficient estimates given in Table 4.1, we predict that the default probability for an individual with a balance of $1,000 is

$$\hat p(X) = \frac{e^{\hat\beta_0 + \hat\beta_1 X}}{1 + e^{\hat\beta_0 + \hat\beta_1 X}} = \frac{e^{-10.6513 + 0.0055 \times 1{,}000}}{1 + e^{-10.6513 + 0.0055 \times 1{,}000}} = 0.00576,$$

which is below 1 %. In contrast, the predicted probability of default for an individual with a balance of $2,000 is much higher, and equals 0.586 or 58.6 %.

One can use qualitative predictors with the logistic regression model using the dummy variable approach from Section 3.3.1. As an example, the Default data set contains the qualitative variable student. To fit the model we simply create a dummy variable that takes on a value of 1 for students and 0 for non-students. The logistic regression model that results from predicting probability of default from student status can be seen in Table 4.2. The coefficient associated with the dummy variable is positive,

and the associated p-value is statistically significant. This indicates that students tend to have higher default probabilities than non-students:

$$\widehat{\Pr}(\text{default} = \text{Yes} \mid \text{student} = \text{Yes}) = \frac{e^{-3.5041 + 0.4049 \times 1}}{1 + e^{-3.5041 + 0.4049 \times 1}} = 0.0431,$$

$$\widehat{\Pr}(\text{default} = \text{Yes} \mid \text{student} = \text{No}) = \frac{e^{-3.5041 + 0.4049 \times 0}}{1 + e^{-3.5041 + 0.4049 \times 0}} = 0.0292.$$

              Coefficient   Std. error   Z-statistic   P-value
Intercept       -3.5041     0.0707       -49.55        <0.0001
student[Yes]     0.4049     0.1150         3.52         0.0004

TABLE 4.2. For the Default data, estimated coefficients of the logistic regression model that predicts the probability of default using student status. Student status is encoded as a dummy variable, with a value of 1 for a student and a value of 0 for a non-student, and represented by the variable student[Yes] in the table.

4.3.4 Multiple Logistic Regression

We now consider the problem of predicting a binary response using multiple predictors. By analogy with the extension from simple to multiple linear regression in Chapter 3, we can generalize (4.4) as follows:

$$\log\left(\frac{p(X)}{1 - p(X)}\right) = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p, \qquad (4.6)$$

where X = (X1, ..., Xp) are p predictors. Equation 4.6 can be rewritten as

$$p(X) = \frac{e^{\beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p}}{1 + e^{\beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p}}. \qquad (4.7)$$

Just as in Section 4.3.2, we use the maximum likelihood method to estimate β0, β1, ..., βp.

Table 4.3 shows the coefficient estimates for a logistic regression model that uses balance, income (in thousands of dollars), and student status to predict probability of default. There is a surprising result here. The p-values associated with balance and the dummy variable for student status are very small, indicating that each of these variables is associated with the probability of default. However, the coefficient for the dummy variable is negative, indicating that students are less likely to default than non-students. In contrast, the coefficient for the dummy variable is positive in Table 4.2. How is it possible for student status to be associated with an increase in probability of default in Table 4.2 and a decrease in probability of default in Table 4.3? The left-hand panel of Figure 4.3 provides a graphical illustration of this apparent paradox. The orange and blue solid lines show the average default rates for students and non-students, respectively,

              Coefficient  Std. error  Z-statistic  P-value
Intercept        -10.8690      0.4923       -22.08  <0.0001
balance            0.0057      0.0002        24.74  <0.0001
income             0.0030      0.0082         0.37   0.7115
student[Yes]      -0.6468      0.2362        -2.74   0.0062

TABLE 4.3. For the Default data, estimated coefficients of the logistic regression model that predicts the probability of default using balance, income, and student status. Student status is encoded as a dummy variable student[Yes], with a value of 1 for a student and a value of 0 for a non-student. In fitting this model, income was measured in thousands of dollars.

The negative coefficient for student in the multiple logistic regression indicates that for a fixed value of balance and income, a student is less likely to default than a non-student. Indeed, we observe from the left-hand panel of Figure 4.3 that the student default rate is at or below that of the non-student default rate for every value of balance. But the horizontal broken lines near the base of the plot, which show the default rates for students and non-students averaged over all values of balance and income, suggest the opposite effect: the overall student default rate is higher than the non-student default rate. Consequently, there is a positive coefficient for student in the single-variable logistic regression output shown in Table 4.2.

The right-hand panel of Figure 4.3 provides an explanation for this discrepancy. The variables student and balance are correlated. Students tend to hold higher levels of debt, which is in turn associated with higher probability of default. In other words, students are more likely to have large credit card balances, which, as we know from the left-hand panel of Figure 4.3, tend to be associated with high default rates. Thus, even though an individual student with a given credit card balance will tend to have a lower probability of default than a non-student with the same credit card balance, the fact that students on the whole tend to have higher credit card balances means that overall, students tend to default at a higher rate than non-students. This is an important distinction for a credit card company that is trying to determine to whom it should offer credit. A student is riskier than a non-student if no information about the student's credit card balance is available. However, that student is less risky than a non-student with the same credit card balance!

This simple example illustrates the dangers and subtleties associated with performing regressions involving only a single predictor when other predictors may also be relevant. As in the linear regression setting, the results obtained using one predictor may be quite different from those obtained using multiple predictors, especially when there is correlation among the predictors. In general, the phenomenon seen in Figure 4.3 is known as confounding.
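As a hedged illustration of this sign flip (our own sketch, not from the text), one can fit both the single-predictor and the multiple logistic regression models to the Default data from the ISLR package and compare the student coefficients. Note that in that data set income is recorded in dollars, so the income coefficient will differ in scale from Table 4.3.

library(ISLR)

fit.single <- glm(default ~ student, data = Default, family = binomial)
fit.multi  <- glm(default ~ balance + income + student, data = Default,
                  family = binomial)

coef(fit.single)["studentYes"]   # positive, as in Table 4.2
coef(fit.multi)["studentYes"]    # negative, as in Table 4.3

# The source of the flip: students carry larger balances on average.
tapply(Default$balance, Default$student, mean)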

[FIGURE 4.3. Confounding in the Default data. Left: Default rates are shown for students (orange) and non-students (blue). The solid lines display the default rate as a function of balance, while the horizontal broken lines display the overall default rates. Right: Boxplots of balance for students (orange) and non-students (blue) are shown.]

By substituting estimates for the regression coefficients from Table 4.3 into (4.7), we can make predictions. For example, a student with a credit card balance of $1,500 and an income of $40,000 has an estimated probability of default of

    \hat{p}(X) = \frac{e^{-10.869 + 0.00574 \times 1{,}500 + 0.003 \times 40 - 0.6468 \times 1}}{1 + e^{-10.869 + 0.00574 \times 1{,}500 + 0.003 \times 40 - 0.6468 \times 1}} = 0.058.    (4.8)

A non-student with the same balance and income has an estimated probability of default of

    \hat{p}(X) = \frac{e^{-10.869 + 0.00574 \times 1{,}500 + 0.003 \times 40 - 0.6468 \times 0}}{1 + e^{-10.869 + 0.00574 \times 1{,}500 + 0.003 \times 40 - 0.6468 \times 0}} = 0.105.    (4.9)

(Here we multiply the income coefficient estimate from Table 4.3 by 40, rather than by 40,000, because in that table the model was fit with income measured in units of $1,000.)
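These two predictions can also be obtained with predict(). The sketch below is ours, and assumes the Default data from the ISLR package, in which income is stored in dollars (so we supply 40000 rather than 40).

library(ISLR)

fit <- glm(default ~ balance + income + student, data = Default,
           family = binomial)
new.obs <- data.frame(balance = 1500, income = 40000,
                      student = c("Yes", "No"))
predict(fit, new.obs, type = "response")   # roughly 0.058 and 0.105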

4.3.5 Logistic Regression for > 2 Response Classes

We sometimes wish to classify a response variable that has more than two classes. For example, in Section 4.2 we had three categories of medical condition in the emergency room: stroke, drug overdose, epileptic seizure. In this setting, we wish to model both Pr(Y = stroke | X) and Pr(Y = drug overdose | X), with the remaining Pr(Y = epileptic seizure | X) = 1 - Pr(Y = stroke | X) - Pr(Y = drug overdose | X). The two-class logistic regression models discussed in the previous sections have multiple-class extensions, but in practice they tend not to be used all that often. One of the reasons is that the method we discuss in the next section, discriminant analysis, is popular for multiple-class classification. So we do not go into the details of multiple-class logistic regression here, but simply note that such an approach is possible, and that software for it is available in R.

4.4 Linear Discriminant Analysis

Logistic regression involves directly modeling Pr(Y = k | X = x) using the logistic function, given by (4.7) for the case of two response classes. In statistical jargon, we model the conditional distribution of the response Y, given the predictor(s) X. We now consider an alternative and less direct approach to estimating these probabilities. In this alternative approach, we model the distribution of the predictors X separately in each of the response classes (i.e. given Y), and then use Bayes' theorem to flip these around into estimates for Pr(Y = k | X = x). When these distributions are assumed to be normal, it turns out that the model is very similar in form to logistic regression.

Why do we need another method, when we have logistic regression? There are several reasons:

• When the classes are well-separated, the parameter estimates for the logistic regression model are surprisingly unstable. Linear discriminant analysis does not suffer from this problem.

• If n is small and the distribution of the predictors X is approximately normal in each of the classes, the linear discriminant model is again more stable than the logistic regression model.

• As mentioned in Section 4.3.5, linear discriminant analysis is popular when we have more than two response classes.

4.4.1 Using Bayes' Theorem for Classification

Suppose that we wish to classify an observation into one of K classes, where K >= 2. In other words, the qualitative response variable Y can take on K possible distinct and unordered values. Let \pi_k represent the overall or prior probability that a randomly chosen observation comes from the kth class; this is the probability that a given observation is associated with the kth category of the response variable Y. Let f_k(X) \equiv Pr(X = x | Y = k) denote the density function of X for an observation that comes from the kth class. In other words, f_k(x) is relatively large if there is a high probability that an observation in the kth class has X \approx x, and f_k(x) is small if it is very unlikely that an observation in the kth class has X \approx x.

Then Bayes' theorem states that

    \Pr(Y = k \mid X = x) = \frac{\pi_k f_k(x)}{\sum_{l=1}^{K} \pi_l f_l(x)}.    (4.10)

In accordance with our earlier notation, we will use the abbreviation p_k(X) = Pr(Y = k | X). This suggests that instead of directly computing p_k(X) as in Section 4.3.1, we can simply plug in estimates of \pi_k and f_k(X) into (4.10). In general, estimating \pi_k is easy if we have a random sample of Ys from the population: we simply compute the fraction of the training observations that belong to the kth class. However, estimating f_k(X) tends to be more challenging, unless we assume some simple forms for these densities. We refer to p_k(x) as the posterior probability that an observation X = x belongs to the kth class. That is, it is the probability that the observation belongs to the kth class, given the predictor value for that observation.

We know from Chapter 2 that the Bayes classifier, which classifies an observation to the class for which p_k(X) is largest, has the lowest possible error rate out of all classifiers. (This is of course only true if the terms in (4.10) are all correctly specified.) Therefore, if we can find a way to estimate f_k(X), then we can develop a classifier that approximates the Bayes classifier. Such an approach is the topic of the following sections.

4.4.2 Linear Discriminant Analysis for p = 1

For now, assume that p = 1, that is, we have only one predictor. We would like to obtain an estimate for f_k(x) that we can plug into (4.10) in order to estimate p_k(x). We will then classify an observation to the class for which p_k(x) is greatest. In order to estimate f_k(x), we will first make some assumptions about its form.

Suppose we assume that f_k(x) is normal or Gaussian. In the one-dimensional setting, the normal density takes the form

    f_k(x) = \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\left(-\frac{1}{2\sigma_k^2}(x - \mu_k)^2\right),    (4.11)

where \mu_k and \sigma_k^2 are the mean and variance parameters for the kth class. For now, let us further assume that \sigma_1^2 = \cdots = \sigma_K^2: that is, there is a shared variance term across all K classes, which for simplicity we can denote by \sigma^2. Plugging (4.11) into (4.10), we find that

    p_k(x) = \frac{\pi_k \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{1}{2\sigma^2}(x - \mu_k)^2\right)}{\sum_{l=1}^{K} \pi_l \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{1}{2\sigma^2}(x - \mu_l)^2\right)}.    (4.12)

(Note that in (4.12), \pi_k denotes the prior probability that an observation belongs to the kth class, not to be confused with \pi \approx 3.14159, the mathematical constant.)

[FIGURE 4.4. Left: Two one-dimensional normal density functions are shown. The dashed vertical line represents the Bayes decision boundary. Right: 20 observations were drawn from each of the two classes, and are shown as histograms. The Bayes decision boundary is again shown as a dashed vertical line. The solid vertical line represents the LDA decision boundary estimated from the training data.]

The Bayes classifier involves assigning an observation X = x to the class for which (4.12) is largest. Taking the log of (4.12) and rearranging the terms, it is not hard to show that this is equivalent to assigning the observation to the class for which

    \delta_k(x) = x \cdot \frac{\mu_k}{\sigma^2} - \frac{\mu_k^2}{2\sigma^2} + \log(\pi_k)    (4.13)

is largest. For instance, if K = 2 and \pi_1 = \pi_2, then the Bayes classifier assigns an observation to class 1 if 2x(\mu_1 - \mu_2) > \mu_1^2 - \mu_2^2, and to class 2 otherwise. In this case, the Bayes decision boundary corresponds to the point where

    x = \frac{\mu_1^2 - \mu_2^2}{2(\mu_1 - \mu_2)} = \frac{\mu_1 + \mu_2}{2}.    (4.14)

An example is shown in the left-hand panel of Figure 4.4. The two normal density functions that are displayed, f_1(x) and f_2(x), represent two distinct classes. The mean and variance parameters for the two density functions are \mu_1 = -1.25, \mu_2 = 1.25, and \sigma_1^2 = \sigma_2^2 = 1. The two densities overlap, and so given that X = x, there is some uncertainty about the class to which the observation belongs. If we assume that an observation is equally likely to come from either class, that is, \pi_1 = \pi_2 = 0.5, then by inspection of (4.14), we see that the Bayes classifier assigns the observation to class 1 if x < 0 and to class 2 otherwise. Note that in this case, we can compute the Bayes classifier because we know that X is drawn from a Gaussian distribution within each class, and we know all of the parameters involved. In a real-life situation, we are not able to calculate the Bayes classifier.

In practice, even if we are quite certain of our assumption that X is drawn from a Gaussian distribution within each class, we still have to estimate the parameters \mu_1, \ldots, \mu_K, \pi_1, \ldots, \pi_K, and \sigma^2.
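The following small sketch (our own, not the book's) evaluates the discriminant functions in (4.13) for this example, with \mu_1 = -1.25, \mu_2 = 1.25, \sigma^2 = 1, and equal priors; the function name delta is a hypothetical choice.

delta <- function(x, mu, sigma2 = 1, prior = 0.5) {
  # Discriminant function (4.13) for one class.
  x * mu / sigma2 - mu^2 / (2 * sigma2) + log(prior)
}

x <- seq(-4, 4, by = 0.5)
# The Bayes rule assigns class 1 wherever delta_1(x) > delta_2(x);
# the boundary is (mu1 + mu2)/2 = 0, as in (4.14).
ifelse(delta(x, -1.25) > delta(x, 1.25), 1, 2)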

The linear discriminant analysis (LDA) method approximates the Bayes classifier by plugging estimates for \pi_k, \mu_k, and \sigma^2 into (4.13). In particular, the following estimates are used:

    \hat\mu_k = \frac{1}{n_k} \sum_{i: y_i = k} x_i, \qquad
    \hat\sigma^2 = \frac{1}{n - K} \sum_{k=1}^{K} \sum_{i: y_i = k} (x_i - \hat\mu_k)^2,    (4.15)

where n is the total number of training observations, and n_k is the number of training observations in the kth class. The estimate for \mu_k is simply the average of all the training observations from the kth class, while \hat\sigma^2 can be seen as a weighted average of the sample variances for each of the K classes. Sometimes we have knowledge of the class membership probabilities \pi_1, \ldots, \pi_K, which can be used directly. In the absence of any additional information, LDA estimates \pi_k using the proportion of the training observations that belong to the kth class. In other words,

    \hat\pi_k = n_k / n.    (4.16)

The LDA classifier plugs the estimates given in (4.15) and (4.16) into (4.13), and assigns an observation X = x to the class for which

    \hat\delta_k(x) = x \cdot \frac{\hat\mu_k}{\hat\sigma^2} - \frac{\hat\mu_k^2}{2\hat\sigma^2} + \log(\hat\pi_k)    (4.17)

is largest. The word linear in the classifier's name stems from the fact that the discriminant functions \hat\delta_k(x) in (4.17) are linear functions of x (as opposed to a more complex function of x).

The right-hand panel of Figure 4.4 displays a histogram of a random sample of 20 observations from each class. To implement LDA, we began by estimating \pi_k, \mu_k, and \sigma^2 using (4.15) and (4.16). We then computed the decision boundary, shown as a black solid line, that results from assigning an observation to the class for which (4.17) is largest. All points to the left of this line will be assigned to the green class, while points to the right of this line are assigned to the purple class. In this case, since n_1 = n_2 = 20, we have \hat\pi_1 = \hat\pi_2. As a result, the decision boundary corresponds to the midpoint between the sample means for the two classes, (\hat\mu_1 + \hat\mu_2)/2. The figure indicates that the LDA decision boundary is slightly to the left of the optimal Bayes decision boundary, which instead equals (\mu_1 + \mu_2)/2 = 0.

How well does the LDA classifier perform on this data? Since this is simulated data, we can generate a large number of test observations in order to compute the Bayes error rate and the LDA test error rate. These are 10.6% and 11.1%, respectively. In other words, the LDA classifier's error rate is only 0.5% above the smallest possible error rate! This indicates that LDA is performing pretty well on this data set.
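A hedged sketch of the plug-in estimates (4.15)-(4.17) on simulated data follows; the sample sizes, seed, and variable names are our own choices and this is not the simulation that produced Figure 4.4.

set.seed(1)
x <- c(rnorm(20, mean = -1.25), rnorm(20, mean = 1.25))
y <- rep(1:2, each = 20)

n <- length(y); K <- 2
mu.hat <- tapply(x, y, mean)                     # class means, (4.15)
s2.hat <- sum((x - mu.hat[y])^2) / (n - K)       # pooled variance, (4.15)
pi.hat <- as.numeric(table(y)) / n               # class proportions, (4.16)

# Estimated discriminant functions (4.17), one column per class.
delta.hat <- sapply(1:K, function(k)
  x * mu.hat[k] / s2.hat - mu.hat[k]^2 / (2 * s2.hat) + log(pi.hat[k]))
pred <- max.col(delta.hat)
table(pred, y)   # training confusion matrix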

[FIGURE 4.5. Two multivariate Gaussian density functions are shown, with p = 2. Left: The two predictors are uncorrelated. Right: The two variables have a correlation of 0.7.]

To reiterate, the LDA classifier results from assuming that the observations within each class come from a normal distribution with a class-specific mean vector and a common variance \sigma^2, and plugging estimates for these parameters into the Bayes classifier. In Section 4.4.4, we will consider a less stringent set of assumptions, by allowing the observations in the kth class to have a class-specific variance, \sigma_k^2.

4.4.3 Linear Discriminant Analysis for p > 1

We now extend the LDA classifier to the case of multiple predictors. To do this, we will assume that X = (X_1, X_2, \ldots, X_p) is drawn from a multivariate Gaussian (or multivariate normal) distribution, with a class-specific mean vector and a common covariance matrix. We begin with a brief review of such a distribution.

The multivariate Gaussian distribution assumes that each individual predictor follows a one-dimensional normal distribution, as in (4.11), with some correlation between each pair of predictors. Two examples of multivariate Gaussian distributions with p = 2 are shown in Figure 4.5. The height of the surface at any particular point represents the probability that both X_1 and X_2 fall in a small region around that point. In either panel, if the surface is cut along the X_1 axis or along the X_2 axis, the resulting cross-section will have the shape of a one-dimensional normal distribution. The left-hand panel of Figure 4.5 illustrates an example in which Var(X_1) = Var(X_2) and Cor(X_1, X_2) = 0; this surface has a characteristic bell shape. However, the bell shape will be distorted if the predictors are correlated or have unequal variances, as is illustrated in the right-hand panel of Figure 4.5. In this situation, the base of the bell will have an elliptical, rather than circular, shape.

To indicate that a p-dimensional random variable X has a multivariate Gaussian distribution, we write X \sim N(\mu, \Sigma). Here E(X) = \mu is the mean of X (a vector with p components), and Cov(X) = \Sigma is the p \times p covariance matrix of X. Formally, the multivariate Gaussian density is defined as

    f(x) = \frac{1}{(2\pi)^{p/2} |\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right).    (4.18)

In the case of p > 1 predictors, the LDA classifier assumes that the observations in the kth class are drawn from a multivariate Gaussian distribution N(\mu_k, \Sigma), where \mu_k is a class-specific mean vector, and \Sigma is a covariance matrix that is common to all K classes. Plugging the density function for the kth class, f_k(X = x), into (4.10) and performing a little bit of algebra reveals that the Bayes classifier assigns an observation X = x to the class for which

    \delta_k(x) = x^T \Sigma^{-1} \mu_k - \frac{1}{2} \mu_k^T \Sigma^{-1} \mu_k + \log \pi_k    (4.19)

is largest. This is the vector/matrix version of (4.13).
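As a small illustration (ours, not the book's), the sketch below evaluates the discriminant functions in (4.19) for p = 2 with two classes, equal priors, and a made-up common covariance matrix.

Sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)   # assumed common covariance
mu    <- list(c(-1, -1), c(1, 1))          # assumed class means
prior <- c(0.5, 0.5)
Sinv  <- solve(Sigma)

delta <- function(x, k) {
  # Linear discriminant (4.19) for class k.
  drop(t(x) %*% Sinv %*% mu[[k]]) -
    0.5 * drop(t(mu[[k]]) %*% Sinv %*% mu[[k]]) + log(prior[k])
}

x0 <- c(0.2, -0.3)
which.max(c(delta(x0, 1), delta(x0, 2)))   # Bayes class assignment at x0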

[FIGURE 4.6. An example with three classes. The observations from each class are drawn from a multivariate Gaussian distribution with p = 2, with a class-specific mean vector and a common covariance matrix. Left: Ellipses that contain 95% of the probability for each of the three classes are shown. The dashed lines are the Bayes decision boundaries. Right: 20 observations were generated from each class, and the corresponding LDA decision boundaries are indicated using solid black lines. The Bayes decision boundaries are once again shown as dashed lines.]

An example is shown in the left-hand panel of Figure 4.6. Three equally-sized Gaussian classes are shown with class-specific mean vectors and a common covariance matrix. The three ellipses represent regions that contain 95% of the probability for each of the three classes. The dashed lines are the Bayes decision boundaries. In other words, they represent the set of values x for which \delta_k(x) = \delta_l(x); i.e.

    x^T \Sigma^{-1} \mu_k - \frac{1}{2} \mu_k^T \Sigma^{-1} \mu_k = x^T \Sigma^{-1} \mu_l - \frac{1}{2} \mu_l^T \Sigma^{-1} \mu_l    (4.20)

for k \neq l. (The \log \pi_k term from (4.19) has disappeared because each of the three classes has the same number of training observations; i.e. \pi_k is the same for each class.) Note that there are three lines representing the Bayes decision boundaries because there are three pairs of classes among the three classes. That is, one Bayes decision boundary separates class 1 from class 2, one separates class 1 from class 3, and one separates class 2 from class 3. These three Bayes decision boundaries divide the predictor space into three regions. The Bayes classifier will classify an observation according to the region in which it is located.

Once again, we need to estimate the unknown parameters \mu_1, \ldots, \mu_K, \pi_1, \ldots, \pi_K, and \Sigma; the formulas are similar to those used in the one-dimensional case, given in (4.15). To assign a new observation X = x, LDA plugs these estimates into (4.19) and classifies to the class for which \hat\delta_k(x) is largest. Note that in (4.19) \delta_k(x) is a linear function of x; that is, the LDA decision rule depends on x only through a linear combination of its elements. Once again, this is the reason for the word linear in LDA.

In the right-hand panel of Figure 4.6, 20 observations drawn from each of the three classes are displayed, and the resulting LDA decision boundaries are shown as solid black lines. Overall, the LDA decision boundaries are pretty close to the Bayes decision boundaries, shown again as dashed lines. The test error rates for the Bayes and LDA classifiers are 0.0746 and 0.0770, respectively. This indicates that LDA is performing well on this data.

We can perform LDA on the Default data in order to predict whether or not an individual will default on the basis of credit card balance and student status. The LDA model fit to the 10,000 training samples results in a training error rate of 2.75%. This sounds like a low error rate, but two caveats must be noted.

• First of all, training error rates will usually be lower than test error rates, which are the real quantity of interest. In other words, we might expect this classifier to perform worse if we use it to predict whether or not a new set of individuals will default. The reason is that we specifically adjust the parameters of our model to do well on the training data. The higher the ratio of the number of parameters p to the number of samples n, the more we expect this overfitting to play a role. For these data we don't expect this to be a problem, since p = 4 and n = 10,000.

• Second, since only 3.33% of the individuals in the training sample defaulted, a simple but useless classifier that always predicts that each individual will not default, regardless of his or her credit card balance and student status, will result in an error rate of 3.33%. In other words, the trivial null classifier will achieve an error rate that is only a bit higher than the LDA training set error rate.

                              True default status
                               No     Yes    Total
  Predicted       No        9,644     252    9,896
  default status  Yes          23      81      104
                  Total     9,667     333   10,000

TABLE 4.4. A confusion matrix compares the LDA predictions to the true default statuses for the 10,000 training observations in the Default data set. Elements on the diagonal of the matrix represent individuals whose default statuses were correctly predicted, while off-diagonal elements represent individuals that were misclassified. LDA made incorrect predictions for 23 individuals who did not default and for 252 individuals who did default.

In practice, a binary classifier such as this one can make two types of errors: it can incorrectly assign an individual who defaults to the no default category, or it can incorrectly assign an individual who does not default to the default category. It is often of interest to determine which of these two types of errors are being made. A confusion matrix, shown for the Default data in Table 4.4, is a convenient way to display this information. The table reveals that LDA predicted that a total of 104 people would default. Of these people, 81 actually defaulted and 23 did not. Hence only 23 out of 9,667 of the individuals who did not default were incorrectly labeled. This looks like a pretty low error rate! However, of the 333 individuals who defaulted, 252 (or 75.7%) were missed by LDA. So while the overall error rate is low, the error rate among individuals who defaulted is very high. From the perspective of a credit card company that is trying to identify high-risk individuals, an error rate of 252/333 = 75.7% among individuals who default may well be unacceptable.

Class-specific performance is also important in medicine and biology, where the terms sensitivity and specificity characterize the performance of a classifier or screening test. In this case the sensitivity is the percentage of true defaulters that are identified, a low 24.3% in this case. The specificity is the percentage of non-defaulters that are correctly identified, here (1 - 23/9,667) x 100 = 99.8%.

Why does LDA do such a poor job of classifying the customers who default? In other words, why does it have such a low sensitivity? As we have seen, LDA is trying to approximate the Bayes classifier, which has the lowest total error rate out of all classifiers (if the Gaussian model is correct). That is, the Bayes classifier will yield the smallest possible total number of misclassified observations, irrespective of which class the errors come from.

                              True default status
                               No     Yes    Total
  Predicted       No        9,432     138    9,570
  default status  Yes         235     195      430
                  Total     9,667     333   10,000

TABLE 4.5. A confusion matrix compares the LDA predictions to the true default statuses for the 10,000 training observations in the Default data set, using a modified threshold value that predicts default for any individuals whose posterior default probability exceeds 20%.

That is, some misclassifications will result from incorrectly assigning a customer who does not default to the default class, and others will result from incorrectly assigning a customer who defaults to the non-default class. In contrast, a credit card company might particularly wish to avoid incorrectly classifying an individual who will default, whereas incorrectly classifying an individual who will not default, though still to be avoided, is less problematic. We will now see that it is possible to modify LDA in order to develop a classifier that better meets the credit card company's needs.

The Bayes classifier works by assigning an observation to the class for which the posterior probability p_k(X) is greatest. In the two-class case, this amounts to assigning an observation to the default class if

    \Pr(\texttt{default} = \texttt{Yes} \mid X = x) > 0.5.    (4.21)

Thus, the Bayes classifier, and by extension LDA, uses a threshold of 50% for the posterior probability of default in order to assign an observation to the default class. However, if we are concerned about incorrectly predicting the default status for individuals who default, then we can consider lowering this threshold. For instance, we might label any customer with a posterior probability of default above 20% to the default class. In other words, instead of assigning an observation to the default class if (4.21) holds, we could instead assign an observation to this class if

    \Pr(\texttt{default} = \texttt{Yes} \mid X = x) > 0.2.    (4.22)

The error rates that result from taking this approach are shown in Table 4.5. Now LDA predicts that 430 individuals will default. Of the 333 individuals who default, LDA now misses only 138, or 41.4%. This is a vast improvement over the error rate of 75.7% that resulted from using the threshold of 50%. However, this improvement comes at a cost: now 235 individuals who do not default are incorrectly classified. As a result, the overall error rate has increased slightly to 3.73%. But a credit card company may consider this slight increase in the total error rate to be a small price to pay for more accurate identification of individuals who do indeed default.

Figure 4.7 illustrates the trade-off that results from modifying the threshold value for the posterior probability of default.
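The modified threshold is easy to apply to LDA posterior probabilities in R. The sketch below is ours, not taken from the text; it assumes the Default data from the ISLR package and the lda() function from MASS, and the resulting tables should roughly match Tables 4.4 and 4.5.

library(ISLR)
library(MASS)

lda.fit  <- lda(default ~ balance + student, data = Default)
post.yes <- predict(lda.fit)$posterior[, "Yes"]

pred.50 <- ifelse(post.yes > 0.5, "Yes", "No")   # default 50% threshold
pred.20 <- ifelse(post.yes > 0.2, "Yes", "No")   # modified 20% threshold

table(pred.50, Default$default)   # compare with Table 4.4
table(pred.20, Default$default)   # compare with Table 4.5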

[FIGURE 4.7. For the Default data set, error rates are shown as a function of the threshold value for the posterior probability that is used to perform the assignment. The black solid line displays the overall error rate. The blue dashed line represents the fraction of defaulting customers that are incorrectly classified, and the orange dotted line indicates the fraction of errors among the non-defaulting customers.]

Various error rates are shown as a function of the threshold value. Using a threshold of 0.5, as in (4.21), minimizes the overall error rate, shown as a black solid line. This is to be expected, since the Bayes classifier uses a threshold of 0.5 and is known to have the lowest overall error rate. But when a threshold of 0.5 is used, the error rate among the individuals who default is quite high (blue dashed line). As the threshold is reduced, the error rate among individuals who default decreases steadily, but the error rate among the individuals who do not default increases. How can we decide which threshold value is best? Such a decision must be based on domain knowledge, such as detailed information about the costs associated with default.

The ROC curve is a popular graphic for simultaneously displaying the two types of errors for all possible thresholds. The name "ROC" is historic, and comes from communications theory. It is an acronym for receiver operating characteristics. Figure 4.8 displays the ROC curve for the LDA classifier on the training data. The overall performance of a classifier, summarized over all possible thresholds, is given by the area under the (ROC) curve (AUC). An ideal ROC curve will hug the top left corner, so the larger the AUC the better the classifier. For this data the AUC is 0.95, which is close to the maximum of one, and so would be considered very good. We expect a classifier that performs no better than chance to have an AUC of 0.5 (when evaluated on an independent test set not used in model training). ROC curves are useful for comparing different classifiers, since they take into account all possible thresholds. It turns out that the ROC curve for the logistic regression model of Section 4.3.4 fit to these data is virtually indistinguishable from this one for the LDA model, so we do not display it here.

As we have seen above, varying the classifier threshold changes its true positive and false positive rate. These are also called the sensitivity and one minus the specificity of our classifier.
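An ROC curve and AUC can be computed by hand, as in the hedged sketch below (our own code; packages such as ROCR or pROC provide similar functionality). The LDA model from the previous sketch is refit here so that the block is self-contained.

library(ISLR)
library(MASS)

lda.fit  <- lda(default ~ balance + student, data = Default)
post.yes <- predict(lda.fit)$posterior[, "Yes"]
is.yes   <- Default$default == "Yes"

# Sweep over thresholds, from strict (high) to permissive (low).
th  <- sort(unique(post.yes), decreasing = TRUE)
tpr <- sapply(th, function(t) mean(post.yes[is.yes]  > t))
fpr <- sapply(th, function(t) mean(post.yes[!is.yes] > t))

plot(fpr, tpr, type = "l",
     xlab = "False positive rate", ylab = "True positive rate")

# Trapezoidal approximation to the area under the curve.
xx <- c(0, fpr, 1); yy <- c(0, tpr, 1)
sum(diff(xx) * (head(yy, -1) + tail(yy, -1)) / 2)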

[FIGURE 4.8. A ROC curve for the LDA classifier on the Default data. It traces out two types of error as we vary the threshold value for the posterior probability of default. The actual thresholds are not shown. The true positive rate is the sensitivity: the fraction of defaulters that are correctly identified, using a given threshold value. The false positive rate is 1 - specificity: the fraction of non-defaulters that we classify incorrectly as defaulters, using that same threshold value. The ideal ROC curve hugs the top left corner, indicating a high true positive rate and a low false positive rate. The dotted line represents the "no information" classifier; this is what we would expect if student status and credit card balance are not associated with probability of default.]

Since there is an almost bewildering array of terms used in this context, we now give a summary. Table 4.6 shows the possible results when applying a classifier (or diagnostic test) to a population. To make the connection with the epidemiology literature, we think of "+" as the "disease" that we are trying to detect, and "-" as the "non-disease" state. To make the connection to the classical hypothesis testing literature, we think of "-" as the null hypothesis and "+" as the alternative (non-null) hypothesis. In the context of the Default data, "+" indicates an individual who defaults, and "-" indicates one who does not.

                                       Predicted class
                            - or Null        + or Non-null     Total
  True    - or Null       True Neg. (TN)     False Pos. (FP)     N
  class   + or Non-null   False Neg. (FN)    True Pos. (TP)      P
          Total                N*                 P*

TABLE 4.6. Possible results when applying a classifier or diagnostic test to a population.

  Name               Definition   Synonyms
  False Pos. rate    FP/N         Type I error, 1 - Specificity
  True Pos. rate     TP/P         1 - Type II error, power, sensitivity, recall
  Pos. Pred. value   TP/P*        Precision, 1 - false discovery proportion
  Neg. Pred. value   TN/N*

TABLE 4.7. Important measures for classification and diagnostic testing, derived from quantities in Table 4.6.

Table 4.7 lists many of the popular performance measures that are used in this context. The denominators for the false positive and true positive rates are the actual population counts in each class. In contrast, the denominators for the positive predictive value and the negative predictive value are the total predicted counts for each class.

4.4.4 Quadratic Discriminant Analysis

As we have discussed, LDA assumes that the observations within each class are drawn from a multivariate Gaussian distribution with a class-specific mean vector and a covariance matrix that is common to all K classes. Quadratic discriminant analysis (QDA) provides an alternative approach. Like LDA, the QDA classifier results from assuming that the observations from each class are drawn from a Gaussian distribution, and plugging estimates for the parameters into Bayes' theorem in order to perform prediction. However, unlike LDA, QDA assumes that each class has its own covariance matrix. That is, it assumes that an observation from the kth class is of the form X \sim N(\mu_k, \Sigma_k), where \Sigma_k is a covariance matrix for the kth class. Under this assumption, the Bayes classifier assigns an observation X = x to the class for which

    \delta_k(x) = -\frac{1}{2}(x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) - \frac{1}{2}\log|\Sigma_k| + \log \pi_k
                = -\frac{1}{2} x^T \Sigma_k^{-1} x + x^T \Sigma_k^{-1} \mu_k - \frac{1}{2} \mu_k^T \Sigma_k^{-1} \mu_k - \frac{1}{2}\log|\Sigma_k| + \log \pi_k    (4.23)

is largest. (Because the covariance matrices now differ across classes, the -\frac{1}{2}\log|\Sigma_k| term no longer cancels and must be retained.) So the QDA classifier involves plugging estimates for \mu_k, \Sigma_k, and \pi_k into (4.23), and then assigning an observation X = x to the class for which this quantity is largest. Unlike in (4.19), the quantity x appears as a quadratic function in (4.23). This is where QDA gets its name.
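QDA is available through the qda() function in the MASS package. The following is a rough sketch of fitting it to the Default data; the choice of example and predictors is our own and not from the text.

library(ISLR)
library(MASS)

qda.fit  <- qda(default ~ balance + student, data = Default)
qda.pred <- predict(qda.fit)$class
mean(qda.pred != Default$default)   # training error rate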

[FIGURE 4.9. Left: The Bayes (purple dashed), LDA (black dotted), and QDA (green solid) decision boundaries for a two-class problem with \Sigma_1 = \Sigma_2. The shading indicates the QDA decision rule. Since the Bayes decision boundary is linear, it is more accurately approximated by LDA than by QDA. Right: Details are as given in the left-hand panel, except that \Sigma_1 \neq \Sigma_2. Since the Bayes decision boundary is non-linear, it is more accurately approximated by QDA than by LDA.]

Why does it matter whether or not we assume that the K classes share a common covariance matrix? In other words, why would one prefer LDA to QDA, or vice-versa? The answer lies in the bias-variance trade-off. When there are p predictors, then estimating a covariance matrix requires estimating p(p+1)/2 parameters. QDA estimates a separate covariance matrix for each class, for a total of Kp(p+1)/2 parameters. With 50 predictors this is some multiple of 1,275, which is a lot of parameters. By instead assuming that the K classes share a common covariance matrix, the LDA model becomes linear in x, which means there are Kp linear coefficients to estimate. Consequently, LDA is a much less flexible classifier than QDA, and so has substantially lower variance. This can potentially lead to improved prediction performance. But there is a trade-off: if LDA's assumption that the K classes share a common covariance matrix is badly off, then LDA can suffer from high bias. Roughly speaking, LDA tends to be a better bet than QDA if there are relatively few training observations and so reducing variance is crucial. In contrast, QDA is recommended if the training set is very large, so that the variance of the classifier is not a major concern, or if the assumption of a common covariance matrix for the K classes is clearly untenable.

Figure 4.9 illustrates the performances of LDA and QDA in two scenarios. In the left-hand panel, the two Gaussian classes have a common correlation of 0.7 between X_1 and X_2. As a result, the Bayes decision boundary is linear and is accurately approximated by the LDA decision boundary. The QDA decision boundary is inferior, because it suffers from higher variance without a corresponding decrease in bias. In contrast, the right-hand panel displays a situation in which the orange class has a correlation of 0.7 between the variables and the blue class has a correlation of -0.7. Now the Bayes decision boundary is quadratic, and so QDA more accurately approximates this boundary than does LDA.
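A rough simulation in the spirit of the right-hand panel of Figure 4.9 is sketched below (our own code; the book's simulation settings are not given here, so the means, sample sizes, and seed are assumptions). It illustrates that when the class covariance matrices differ, QDA will typically achieve a lower test error than LDA.

library(MASS)
set.seed(1)

gen <- function(n) {
  S1 <- matrix(c(1,  0.7,  0.7, 1), 2, 2)   # class 1: correlation +0.7
  S2 <- matrix(c(1, -0.7, -0.7, 1), 2, 2)   # class 2: correlation -0.7
  X  <- rbind(mvrnorm(n, c(-1, -1), S1), mvrnorm(n, c(1, 1), S2))
  data.frame(X1 = X[, 1], X2 = X[, 2], y = factor(rep(1:2, each = n)))
}

train <- gen(100)
test  <- gen(1000)
mean(predict(lda(y ~ X1 + X2, data = train), test)$class != test$y)  # LDA test error
mean(predict(qda(y ~ X1 + X2, data = train), test)$class != test$y)  # QDA test error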

4.5 A Comparison of Classification Methods

In this chapter, we have considered three different classification approaches: logistic regression, LDA, and QDA. In Chapter 2, we also discussed the K-nearest neighbors (KNN) method. We now consider the types of scenarios in which one approach might dominate the others.

Though their motivations differ, the logistic regression and LDA methods are closely connected. Consider the two-class setting with p = 1 predictor, and let p_1(x) and p_2(x) = 1 - p_1(x) be the probabilities that the observation X = x belongs to class 1 and class 2, respectively. In the LDA framework, we can see from (4.12) to (4.13) (and a bit of simple algebra) that the log odds is given by

    \log\left(\frac{p_1(x)}{1 - p_1(x)}\right) = \log\left(\frac{p_1(x)}{p_2(x)}\right) = c_0 + c_1 x,    (4.24)

where c_0 and c_1 are functions of \mu_1, \mu_2, and \sigma^2. From (4.4), we know that in logistic regression,

    \log\left(\frac{p_1}{1 - p_1}\right) = \beta_0 + \beta_1 x.    (4.25)

Both (4.24) and (4.25) are linear functions of x. Hence, both logistic regression and LDA produce linear decision boundaries. The only difference between the two approaches lies in the fact that \beta_0 and \beta_1 are estimated using maximum likelihood, whereas c_0 and c_1 are computed using the estimated mean and variance from a normal distribution. This same connection between LDA and logistic regression also holds for multidimensional data with p > 1.

Since logistic regression and LDA differ only in their fitting procedures, one might expect the two approaches to give similar results. This is often, but not always, the case. LDA assumes that the observations are drawn from a Gaussian distribution with a common covariance matrix in each class, and so can provide some improvements over logistic regression when this assumption approximately holds. Conversely, logistic regression can outperform LDA if these Gaussian assumptions are not met.

Recall from Chapter 2 that KNN takes a completely different approach from the classifiers seen in this chapter. In order to make a prediction for an observation X = x, the K training observations that are closest to x are identified. Then X is assigned to the class to which the plurality of these observations belong. Hence KNN is a completely non-parametric approach: no assumptions are made about the shape of the decision boundary. Therefore, we can expect this approach to dominate LDA and logistic regression when the decision boundary is highly non-linear. On the other hand, KNN does not tell us which predictors are important; we don't get a table of coefficients as in Table 4.3.
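For completeness, here is a small hedged sketch of KNN using the knn() function from the class package (the lab at the end of this chapter covers KNN in more detail); the simulated data and the choice K = 5 are our own.

library(class)
set.seed(1)

# Two-class Gaussian data; class "B" is shifted by 1.5 in each coordinate.
make.data <- function(n) {
  X <- matrix(rnorm(2 * n * 2), ncol = 2)
  y <- factor(rep(c("A", "B"), each = n))
  X[y == "B", ] <- X[y == "B", ] + 1.5
  list(X = X, y = y)
}

train <- make.data(50)
test  <- make.data(50)

knn.pred <- knn(train$X, test$X, train$y, k = 5)
mean(knn.pred != test$y)   # test error rate for K = 5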

[FIGURE 4.10. Boxplots of the test error rates for each of the linear scenarios described in the main text.]

[FIGURE 4.11. Boxplots of the test error rates for each of the non-linear scenarios described in the main text.]

Finally, QDA serves as a compromise between the non-parametric KNN method and the linear LDA and logistic regression approaches. Since QDA assumes a quadratic decision boundary, it can accurately model a wider range of problems than can the linear methods. Though not as flexible as KNN, QDA can perform better in the presence of a limited number of training observations because it does make some assumptions about the form of the decision boundary.

To illustrate the performances of these four classification approaches, we generated data from six different scenarios. In three of the scenarios, the Bayes decision boundary is linear, and in the remaining scenarios it is non-linear. For each scenario, we produced 100 random training data sets. On each of these training sets, we fit each method to the data and computed the resulting test error rate on a large test set. Results for the linear scenarios are shown in Figure 4.10, and the results for the non-linear scenarios are in Figure 4.11. The KNN method requires selection of K, the number of neighbors.

We performed KNN with two values of K: K = 1, and a value of K that was chosen automatically using an approach called cross-validation, which we discuss further in Chapter 5.

In each of the six scenarios, there were p = 2 predictors. The scenarios were as follows:

Scenario 1: There were 20 training observations in each of two classes. The observations within each class were uncorrelated random normal variables with a different mean in each class. The left-hand panel of Figure 4.10 shows that LDA performed well in this setting, as one would expect since this is the model assumed by LDA. KNN performed poorly because it paid a price in terms of variance that was not offset by a reduction in bias. QDA also performed worse than LDA, since it fit a more flexible classifier than necessary. Since logistic regression assumes a linear decision boundary, its results were only slightly inferior to those of LDA.

Scenario 2: Details are as in Scenario 1, except that within each class, the two predictors had a correlation of -0.5. The center panel of Figure 4.10 indicates little change in the relative performances of the methods as compared to the previous scenario.

Scenario 3: We generated X_1 and X_2 from the t-distribution, with 50 observations per class. The t-distribution has a similar shape to the normal distribution, but it has a tendency to yield more extreme points, that is, more points that are far from the mean. In this setting, the decision boundary was still linear, and so fit into the logistic regression framework. The set-up violated the assumptions of LDA, since the observations were not drawn from a normal distribution. The right-hand panel of Figure 4.10 shows that logistic regression outperformed LDA, though both methods were superior to the other approaches. In particular, the QDA results deteriorated considerably as a consequence of non-normality.

Scenario 4: The data were generated from a normal distribution, with a correlation of 0.5 between the predictors in the first class, and a correlation of -0.5 between the predictors in the second class. This setup corresponded to the QDA assumption, and resulted in quadratic decision boundaries. The left-hand panel of Figure 4.11 shows that QDA outperformed all of the other approaches.

Scenario 5: Within each class, the observations were generated from a normal distribution with uncorrelated predictors. However, the responses were sampled from the logistic function using X_1^2, X_2^2, and X_1 x X_2 as predictors. Consequently, there is a quadratic decision boundary. The center panel of Figure 4.11 indicates that QDA once again performed best, followed closely by KNN-CV. The linear methods had poor performance.

Scenario 6: Details are as in the previous scenario, but the responses were sampled from a more complicated non-linear function. As a result, even the quadratic decision boundaries of QDA could not adequately model the data. The right-hand panel of Figure 4.11 shows that QDA gave slightly better results than the linear methods, while the much more flexible KNN-CV method gave the best results. But KNN with K = 1 gave the worst results out of all methods. This highlights the fact that even when the data exhibits a complex non-linear relationship, a non-parametric method such as KNN can still give poor results if the level of smoothness is not chosen correctly.

These six examples illustrate that no one method will dominate the others in every situation. When the true decision boundaries are linear, then the LDA and logistic regression approaches will tend to perform well. When the boundaries are moderately non-linear, QDA may give better results. Finally, for much more complicated decision boundaries, a non-parametric approach such as KNN can be superior. But the level of smoothness for a non-parametric approach must be chosen carefully. In the next chapter we examine a number of approaches for choosing the correct level of smoothness and, in general, for selecting the best overall method.

Finally, recall from Chapter 3 that in the regression setting we can accommodate a non-linear relationship between the predictors and the response by performing regression using transformations of the predictors. A similar approach could be taken in the classification setting. For instance, we could create a more flexible version of logistic regression by including X^2, X^3, and even X^4 as predictors. This may or may not improve logistic regression's performance, depending on whether the increase in variance due to the added flexibility is offset by a sufficiently large reduction in bias. We could do the same for LDA. If we added all possible quadratic terms and cross-products to LDA, the form of the model would be the same as the QDA model, although the parameter estimates would be different. This device allows us to move somewhere between an LDA and a QDA model.

4.6 Lab: Logistic Regression, LDA, QDA, and KNN

4.6.1 The Stock Market Data

We will begin by examining some numerical and graphical summaries of the Smarket data, which is part of the ISLR library. This data set consists of percentage returns for the S&P 500 stock index over 1,250 days, from the beginning of 2001 until the end of 2005. For each date, we have recorded the percentage returns for each of the five previous trading days, Lag1 through Lag5.

We have also recorded Volume (the number of shares traded on the previous day, in billions), Today (the percentage return on the date in question), and Direction (whether the market was Up or Down on this date).

> library(ISLR)
> names(Smarket)
[1] "Year"      "Lag1"      "Lag2"      "Lag3"      "Lag4"
[6] "Lag5"      "Volume"    "Today"     "Direction"
> dim(Smarket)
[1] 1250    9
> summary(Smarket)
      Year           Lag1               Lag2
 Min.   :2001   Min.   :-4.92200   Min.   :-4.92200
 1st Qu.:2002   1st Qu.:-0.63950   1st Qu.:-0.63950
 Median :2003   Median : 0.03900   Median : 0.03900
 Mean   :2003   Mean   : 0.00383   Mean   : 0.00392
 3rd Qu.:2004   3rd Qu.: 0.59675   3rd Qu.: 0.59675
 Max.   :2005   Max.   : 5.73300   Max.   : 5.73300
      Lag3               Lag4               Lag5
 Min.   :-4.92200   Min.   :-4.92200   Min.   :-4.92200
 1st Qu.:-0.64000   1st Qu.:-0.64000   1st Qu.:-0.64000
 Median : 0.03850   Median : 0.03850   Median : 0.03850
 Mean   : 0.00172   Mean   : 0.00164   Mean   : 0.00561
 3rd Qu.: 0.59675   3rd Qu.: 0.59675   3rd Qu.: 0.59700
 Max.   : 5.73300   Max.   : 5.73300   Max.   : 5.73300
     Volume          Today           Direction
 Min.   :0.356   Min.   :-4.92200   Down:602
 1st Qu.:1.257   1st Qu.:-0.63950   Up  :648
 Median :1.423   Median : 0.03850
 Mean   :1.478   Mean   : 0.00314
 3rd Qu.:1.642   3rd Qu.: 0.59675
 Max.   :3.152   Max.   : 5.73300
> pairs(Smarket)

The cor() function produces a matrix that contains all of the pairwise correlations among the predictors in a data set. The first command below gives an error message because the Direction variable is qualitative.

> cor(Smarket)
Error in cor(Smarket) : 'x' must be numeric
> cor(Smarket[,-9])
         Year     Lag1     Lag2     Lag3     Lag4     Lag5
Year   1.0000  0.02970  0.03060  0.03319  0.03569  0.02979
Lag1   0.0297  1.00000 -0.02629 -0.01080 -0.00299 -0.00567
Lag2   0.0306 -0.02629  1.00000 -0.02590 -0.01085 -0.00356
Lag3   0.0332 -0.01080 -0.02590  1.00000 -0.02405 -0.01881
Lag4   0.0357 -0.00299 -0.01085 -0.02405  1.00000 -0.02708
Lag5   0.0298 -0.00567 -0.00356 -0.01881 -0.02708  1.00000
Volume 0.5390  0.04091 -0.04338 -0.04182 -0.04841 -0.02200
Today  0.0301 -0.02616 -0.01025 -0.00245 -0.00690 -0.03486
        Volume    Today
Year    0.5390  0.03010

Lag1    0.0409 -0.02616
Lag2   -0.0434 -0.01025
Lag3   -0.0418 -0.00245
Lag4   -0.0484 -0.00690
Lag5   -0.0220 -0.03486
Volume  1.0000  0.01459
Today   0.0146  1.00000

As one would expect, the correlations between the lag variables and today's returns are close to zero. In other words, there appears to be little correlation between today's returns and previous days' returns. The only substantial correlation is between Year and Volume. By plotting the data we see that Volume is increasing over time. In other words, the average number of shares traded daily increased from 2001 to 2005.

> attach(Smarket)
> plot(Volume)

4.6.2 Logistic Regression

Next, we will fit a logistic regression model in order to predict Direction using Lag1 through Lag5 and Volume. The glm() function fits generalized linear models, a class of models that includes logistic regression. The syntax of the glm() function is similar to that of lm(), except that we must pass in the argument family=binomial in order to tell R to run a logistic regression rather than some other type of generalized linear model.

> glm.fit=glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume,
    data=Smarket,family=binomial)
> summary(glm.fit)

Call:
glm(formula = Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5
    + Volume, family = binomial, data = Smarket)

Deviance Residuals:
   Min     1Q Median     3Q    Max
 -1.45  -1.20   1.07   1.15   1.33

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.12600    0.24074   -0.52     0.60
Lag1        -0.07307    0.05017   -1.46     0.15
Lag2        -0.04230    0.05009   -0.84     0.40
Lag3         0.01109    0.04994    0.22     0.82
Lag4         0.00936    0.04997    0.19     0.85
Lag5         0.01031    0.04951    0.21     0.83
Volume       0.13544    0.15836    0.86     0.39

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1731.2  on 1249  degrees of freedom
Residual deviance: 1727.6  on 1243  degrees of freedom
AIC: 1742

Number of Fisher Scoring iterations: 3

The smallest p-value here is associated with Lag1. The negative coefficient for this predictor suggests that if the market had a positive return yesterday, then it is less likely to go up today. However, at a value of 0.15, the p-value is still relatively large, and so there is no clear evidence of a real association between Lag1 and Direction.

We use the coef() function in order to access just the coefficients for this fitted model. We can also use the summary() function to access particular aspects of the fitted model, such as the p-values for the coefficients.

> coef(glm.fit)
(Intercept)        Lag1        Lag2        Lag3        Lag4
   -0.12600    -0.07307    -0.04230     0.01109     0.00936
       Lag5      Volume
    0.01031     0.13544
> summary(glm.fit)$coef
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.12600     0.2407  -0.523    0.601
Lag1        -0.07307     0.0502  -1.457    0.145
Lag2        -0.04230     0.0501  -0.845    0.398
Lag3         0.01109     0.0499   0.222    0.824
Lag4         0.00936     0.0500   0.187    0.851
Lag5         0.01031     0.0495   0.208    0.835
Volume       0.13544     0.1584   0.855    0.392
> summary(glm.fit)$coef[,4]
(Intercept)        Lag1        Lag2        Lag3        Lag4
      0.601       0.145       0.398       0.824       0.851
       Lag5      Volume
      0.835       0.392

The predict() function can be used to predict the probability that the market will go up, given values of the predictors. The type="response" option tells R to output probabilities of the form P(Y = 1|X), as opposed to other information such as the logit. If no data set is supplied to the predict() function, then the probabilities are computed for the training data that was used to fit the logistic regression model. Here we have printed only the first ten probabilities. We know that these values correspond to the probability of the market going up, rather than down, because the contrasts() function indicates that R has created a dummy variable with a 1 for Up.

> glm.probs=predict(glm.fit,type="response")
> glm.probs[1:10]
    1     2     3     4     5     6     7     8     9    10
0.507 0.481 0.481 0.515 0.511 0.507 0.493 0.509 0.518 0.489

> contrasts(Direction)
     Up
Down  0
Up    1

In order to make a prediction as to whether the market will go up or down on a particular day, we must convert these predicted probabilities into class labels, Up or Down. The following two commands create a vector of class predictions based on whether the predicted probability of a market increase is greater than or less than 0.5.

> glm.pred=rep("Down",1250)
> glm.pred[glm.probs>.5]="Up"

The first command creates a vector of 1,250 Down elements. The second line transforms to Up all of the elements for which the predicted probability of a market increase exceeds 0.5. Given these predictions, the table() function can be used to produce a confusion matrix in order to determine how many observations were correctly or incorrectly classified.

> table(glm.pred,Direction)
        Direction
glm.pred Down  Up
    Down  145 141
    Up    457 507
> (507+145)/1250
[1] 0.5216
> mean(glm.pred==Direction)
[1] 0.5216

The diagonal elements of the confusion matrix indicate correct predictions, while the off-diagonals represent incorrect predictions. Hence our model correctly predicted that the market would go up on 507 days and that it would go down on 145 days, for a total of 507 + 145 = 652 correct predictions. The mean() function can be used to compute the fraction of days for which the prediction was correct. In this case, logistic regression correctly predicted the movement of the market 52.2% of the time.

At first glance, it appears that the logistic regression model is working a little better than random guessing. However, this result is misleading because we trained and tested the model on the same set of 1,250 observations. In other words, 100 - 52.2 = 47.8% is the training error rate. As we have seen previously, the training error rate is often overly optimistic: it tends to underestimate the test error rate. In order to better assess the accuracy of the logistic regression model in this setting, we can fit the model using part of the data, and then examine how well it predicts the held out data. This will yield a more realistic error rate, in the sense that in practice we will be interested in our model's performance not on the data that we used to fit the model, but rather on days in the future for which the market's movements are unknown.

To implement this strategy, we will first create a vector corresponding to the observations from 2001 through 2004. We will then use this vector to create a held out data set of observations from 2005.

> train=(Year<2005)
> Smarket.2005=Smarket[!train,]
> dim(Smarket.2005)
[1] 252   9
> Direction.2005=Direction[!train]

The object train is a vector of 1,250 elements, corresponding to the observations in our data set. The elements of the vector that correspond to observations that occurred before 2005 are set to TRUE, whereas those that correspond to observations in 2005 are set to FALSE. The object train is a Boolean vector, since its elements are TRUE and FALSE. Boolean vectors can be used to obtain a subset of the rows or columns of a matrix. For instance, the command Smarket[train,] would pick out a submatrix of the stock market data set, corresponding only to the dates before 2005, since those are the ones for which the elements of train are TRUE. The ! symbol can be used to reverse all of the elements of a Boolean vector. That is, !train is a vector similar to train, except that the elements that are TRUE in train get swapped to FALSE in !train, and the elements that are FALSE in train get swapped to TRUE in !train. Therefore, Smarket[!train,] yields a submatrix of the stock market data containing only the observations for which train is FALSE, that is, the observations with dates in 2005. The output above indicates that there are 252 such observations.

We now fit a logistic regression model using only the subset of the observations that correspond to dates before 2005, using the subset argument. We then obtain predicted probabilities of the stock market going up for each of the days in our test set, that is, for the days in 2005.

> glm.fit=glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume,
    data=Smarket,family=binomial,subset=train)
> glm.probs=predict(glm.fit,Smarket.2005,type="response")

Notice that we have trained and tested our model on two completely separate data sets: training was performed using only the dates before 2005, and testing was performed using only the dates in 2005. Finally, we compute the predictions for 2005 and compare them to the actual movements of the market over that time period.

> glm.pred=rep("Down",252)
> glm.pred[glm.probs>.5]="Up"
> table(glm.pred,Direction.2005)
        Direction.2005
glm.pred Down Up
    Down   77 97
    Up     34 44
> mean(glm.pred==Direction.2005)

175 160 4. Classification [1] 0.48 > mean(glm.pred!=Direction .2005) [1] 0.52 notation means , and so the last command computes The not equal to != the test set error rate. The results are r ather disappointing: the test error rate is 52 %, which is worse than random guessing! Of course this result is not all that surprising, given that one would not generally expect to be able to use previous days’ returns to predict future market performance. (After all, if it were possible to do so, then the authors of this book would be out striking it rich rather than writing a statistics textbook.) We recall that the logistic regression model had very underwhelming p- ictors, and that the smallest p-value, values associated with all of the pred Lag1 . Perhaps by removing the though not very small, corresponded to Direction ,wecan variables that appear not to be helpful in predicting obtain a more effective model. After all, using predictors that have no relationship with the response tends to cause a deterioration in the test error rate (since such predictors cause an increase in variance without a corresponding decrease in bias), and so removing such predictors may in turn yield an improvement. Below we have refit the logistic regression using Lag1 and Lag2 , which seemed to have the highest predictive power in just the original logistic regression model. > glm.fit=glm(Direction ∼ Lag1+Lag2,data=Smarket , family=binomial, subset=train) > glm.probs=predict(glm.fit,Smarket.2005,type="response") > glm.pred=rep("Down",252) > glm.pred[glm.probs >.5]="Up" > table(glm.pred,Direction .2005) Direction .2005 glm.pred Down Up Down 35 35 Up 76 106 > mean(glm.pred==Direction .2005) [1] 0.56 > 106/(106+76) [1] 0.582 Now the results appear to be more promising: 56 % of the daily movements have been correctly predicted. The con fusion matrix suggests that on days when logistic regression predicts that the market will decline, it is only correct 50 % of the time. However, on days when it predicts an increase in the market, it has a 58 % accuracy rate. Suppose that we want to predict the returns associated with particular Lag1 and Lag2 . In particular, we want to predict Direction on a values of day when Lag1 and Lag2 equal 1.2 and 1.1, respectively, and on a day when predict() function. they equal 1.5 and − 0.8. We do this using the

> predict(glm.fit, newdata=data.frame(Lag1=c(1.2,1.5),
    Lag2=c(1.1,-0.8)), type="response")
     1      2
0.4791 0.4961

4.6.3 Linear Discriminant Analysis

Now we will perform LDA on the Smarket data. In R, we fit an LDA model using the lda() function, which is part of the MASS library. Notice that the syntax for lda() is identical to that of lm(), and to that of glm() except for the absence of the family option. We fit the model using only the observations before 2005.

> library(MASS)
> lda.fit=lda(Direction~Lag1+Lag2, data=Smarket, subset=train)
> lda.fit
Call:
lda(Direction ~ Lag1 + Lag2, data = Smarket, subset = train)

Prior probabilities of groups:
 Down    Up
0.492 0.508

Group means:
        Lag1    Lag2
Down  0.0428  0.0339
Up   -0.0395 -0.0313

Coefficients of linear discriminants:
        LD1
Lag1 -0.642
Lag2 -0.514
> plot(lda.fit)

The LDA output indicates that π̂1 = 0.492 and π̂2 = 0.508; in other words, 49.2% of the training observations correspond to days during which the market went down. It also provides the group means; these are the average of each predictor within each class, and are used by LDA as estimates of μk. They suggest that there is a tendency for the previous 2 days' returns to be negative on days when the market increases, and a tendency for the previous days' returns to be positive on days when the market declines. The coefficients of linear discriminants output provides the linear combination of Lag1 and Lag2 that is used to form the LDA decision rule. In other words, these are the multipliers of the elements of X = x in (4.19). If −0.642 × Lag1 − 0.514 × Lag2 is large, then the LDA classifier will predict a market increase, and if it is small, then the LDA classifier will predict a market decline. The plot() function produces plots of the linear discriminants, obtained by computing −0.642 × Lag1 − 0.514 × Lag2 for each of the training observations.

The predict() function returns a list with three elements. The first element, class, contains LDA's predictions about the movement of the market. The second element, posterior, is a matrix whose kth column contains the posterior probability that the corresponding observation belongs to the kth class, computed from (4.10). Finally, x contains the linear discriminants, described earlier.

> lda.pred=predict(lda.fit, Smarket.2005)
> names(lda.pred)
[1] "class"     "posterior" "x"

As we observed in Section 4.5, the LDA and logistic regression predictions are almost identical.

> lda.class=lda.pred$class
> table(lda.class,Direction.2005)
         Direction.2005
lda.class Down  Up
     Down   35  35
     Up     76 106
> mean(lda.class==Direction.2005)
[1] 0.56

Applying a 50% threshold to the posterior probabilities allows us to recreate the predictions contained in lda.pred$class.

> sum(lda.pred$posterior[,1]>=.5)
[1] 70
> sum(lda.pred$posterior[,1]<.5)
[1] 182

Notice that the posterior probability output by the model corresponds to the probability that the market will decrease:

> lda.pred$posterior[1:20,1]
> lda.class[1:20]

If we wanted to use a posterior probability threshold other than 50% in order to make predictions, then we could easily do so. For instance, suppose that we wish to predict a market decrease only if we are very certain that the market will indeed decrease on that day, say, if the posterior probability is at least 90%.

> sum(lda.pred$posterior[,1]>.9)
[1] 0

No days in 2005 meet that threshold! In fact, the greatest posterior probability of decrease in all of 2005 was 52.02%.
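More generally, class labels under any cut-off can be built directly from the posterior matrix. The following is only a small sketch of the 90% rule just described, using the objects created above.

> lda.class.90 <- ifelse(lda.pred$posterior[,1] >= .9, "Down", "Up")
> table(lda.class.90, Direction.2005)

Since no posterior probability of a decrease in 2005 reaches 0.9, every prediction produced in this way is Up.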

4.6.4 Quadratic Discriminant Analysis

We will now fit a QDA model to the Smarket data. QDA is implemented in R using the qda() function, which is also part of the MASS library. The syntax is identical to that of lda().

> qda.fit=qda(Direction~Lag1+Lag2, data=Smarket, subset=train)
> qda.fit
Call:
qda(Direction ~ Lag1 + Lag2, data = Smarket, subset = train)

Prior probabilities of groups:
 Down    Up
0.492 0.508

Group means:
        Lag1    Lag2
Down  0.0428  0.0339
Up   -0.0395 -0.0313

The output contains the group means. But it does not contain the coefficients of the linear discriminants, because the QDA classifier involves a quadratic, rather than a linear, function of the predictors. The predict() function works in exactly the same fashion as for LDA.

> qda.class=predict(qda.fit, Smarket.2005)$class
> table(qda.class,Direction.2005)
         Direction.2005
qda.class Down  Up
     Down   30  20
     Up     81 121
> mean(qda.class==Direction.2005)
[1] 0.599

Interestingly, the QDA predictions are accurate almost 60% of the time, even though the 2005 data was not used to fit the model. This level of accuracy is quite impressive for stock market data, which is known to be quite hard to model accurately. It suggests that the quadratic form assumed by QDA may capture the true relationship more accurately than the linear forms assumed by LDA and logistic regression. However, we recommend evaluating this method's performance on a larger test set before betting that this approach will consistently beat the market!

4.6.5 K-Nearest Neighbors

We will now perform KNN using the knn() function, which is part of the class library. This function works rather differently from the other model-fitting functions that we have encountered thus far. Rather than a two-step approach in which we first fit the model and then use the model to make predictions, knn() forms predictions using a single command. The function requires four inputs.
1. A matrix containing the predictors associated with the training data, labeled train.X below.
2. A matrix containing the predictors associated with the data for which we wish to make predictions, labeled test.X below.
3. A vector containing the class labels for the training observations, labeled train.Direction below.
4. A value for K, the number of nearest neighbors to be used by the classifier.
We use the cbind() function, short for column bind, to bind the Lag1 and Lag2 variables together into two matrices, one for the training set and the other for the test set.

> library(class)
> train.X=cbind(Lag1,Lag2)[train,]
> test.X=cbind(Lag1,Lag2)[!train,]
> train.Direction=Direction[train]

Now the knn() function can be used to predict the market's movement for the dates in 2005. We set a random seed before we apply knn() because if several observations are tied as nearest neighbors, then R will randomly break the tie. A seed must therefore be set in order to ensure reproducibility of results.

> set.seed(1)
> knn.pred=knn(train.X,test.X,train.Direction,k=1)
> table(knn.pred,Direction.2005)
        Direction.2005
knn.pred Down Up
    Down   43 58
    Up     68 83
> (83+43)/252
[1] 0.5

The results using K = 1 are not very good, since only 50% of the observations are correctly predicted. Of course, it may be that K = 1 results in an overly flexible fit to the data. Below, we repeat the analysis using K = 3.

> knn.pred=knn(train.X,test.X,train.Direction,k=3)
> table(knn.pred,Direction.2005)
        Direction.2005
knn.pred Down Up
    Down   48 54
    Up     63 87
> mean(knn.pred==Direction.2005)
[1] 0.536

The results have improved slightly. But increasing K further turns out to provide no further improvements. It appears that for this data, QDA provides the best results of the methods that we have examined so far.

4.6.6 An Application to Caravan Insurance Data

Finally, we will apply the KNN approach to the Caravan data set, which is part of the ISLR library. This data set includes 85 predictors that measure demographic characteristics for 5,822 individuals. The response variable is Purchase, which indicates whether or not a given individual purchases a caravan insurance policy. In this data set, only 6% of people purchased caravan insurance.

> dim(Caravan)
[1] 5822   86
> attach(Caravan)
> summary(Purchase)
  No  Yes
5474  348
> 348/5822
[1] 0.0598

Because the KNN classifier predicts the class of a given test observation by identifying the observations that are nearest to it, the scale of the variables matters. Any variables that are on a large scale will have a much larger effect on the distance between the observations, and hence on the KNN classifier, than variables that are on a small scale. For instance, imagine a data set that contains two variables, salary and age (measured in dollars and years, respectively). As far as KNN is concerned, a difference of $1,000 in salary is enormous compared to a difference of 50 years in age. Consequently, salary will drive the KNN classification results, and age will have almost no effect. This is contrary to our intuition that a salary difference of $1,000 is quite small compared to an age difference of 50 years. Furthermore, the importance of scale to the KNN classifier leads to another issue: if we measured salary in Japanese yen, or if we measured age in minutes, then we'd get quite different classification results from what we get if these two variables are measured in dollars and years.
A good way to handle this problem is to standardize the data so that all variables are given a mean of zero and a standard deviation of one. Then all variables will be on a comparable scale. The scale() function does just this. In standardizing the data, we exclude column 86, because that is the qualitative Purchase variable.

> standardized.X=scale(Caravan[,-86])
> var(Caravan[,1])
[1] 165
> var(Caravan[,2])
[1] 0.165
> var(standardized.X[,1])
[1] 1
> var(standardized.X[,2])
[1] 1

Now every column of standardized.X has a standard deviation of one and a mean of zero.
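As a quick check, the column means and standard deviations of the standardized matrix can be inspected directly; this short sketch looks at the first two columns only.

> apply(standardized.X[,1:2], 2, mean)   # essentially zero, up to rounding
> apply(standardized.X[,1:2], 2, sd)     # exactly one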

We now split the observations into a test set, containing the first 1,000 observations, and a training set, containing the remaining observations. We fit a KNN model on the training data using K = 1, and evaluate its performance on the test data.

> test=1:1000
> train.X=standardized.X[-test,]
> test.X=standardized.X[test,]
> train.Y=Purchase[-test]
> test.Y=Purchase[test]
> set.seed(1)
> knn.pred=knn(train.X,test.X,train.Y,k=1)
> mean(test.Y!=knn.pred)
[1] 0.118
> mean(test.Y!="No")
[1] 0.059

The vector test is numeric, with values from 1 through 1,000. Typing standardized.X[test,] yields the submatrix of the data containing the observations whose indices range from 1 to 1,000, whereas typing standardized.X[-test,] yields the submatrix containing the observations whose indices do not range from 1 to 1,000. The KNN error rate on the 1,000 test observations is just under 12%. At first glance, this may appear to be fairly good. However, since only 6% of customers purchased insurance, we could get the error rate down to 6% by always predicting No regardless of the values of the predictors!
Suppose that there is some non-trivial cost to trying to sell insurance to a given individual. For instance, perhaps a salesperson must visit each potential customer. If the company tries to sell insurance to a random selection of customers, then the success rate will be only 6%, which may be far too low given the costs involved. Instead, the company would like to try to sell insurance only to customers who are likely to buy it. So the overall error rate is not of interest; instead, the fraction of individuals that are correctly predicted to buy insurance is of interest.
It turns out that KNN with K = 1 does far better than random guessing among the customers that are predicted to buy insurance. Among 77 such customers, 9, or 11.7%, actually do purchase insurance. This is double the rate that one would obtain from random guessing.

> table(knn.pred,test.Y)
        test.Y
knn.pred  No Yes
     No  873  50
     Yes  68   9
> 9/(68+9)
[1] 0.117

Using K = 3, the success rate increases to 19%, and with K = 5 the rate is 26.7%. This is over four times the rate that results from random guessing. It appears that KNN is finding some real patterns in a difficult data set!

> knn.pred=knn(train.X,test.X,train.Y,k=3)
> table(knn.pred,test.Y)
        test.Y
knn.pred  No Yes
     No  920  54
     Yes  21   5
> 5/26
[1] 0.192
> knn.pred=knn(train.X,test.X,train.Y,k=5)
> table(knn.pred,test.Y)
        test.Y
knn.pred  No Yes
     No  930  55
     Yes  11   4
> 4/15
[1] 0.267

As a comparison, we can also fit a logistic regression model to the data. If we use 0.5 as the predicted probability cut-off for the classifier, then we have a problem: only seven of the test observations are predicted to purchase insurance. Even worse, we are wrong about all of these! However, we are not required to use a cut-off of 0.5. If we instead predict a purchase any time the predicted probability of purchase exceeds 0.25, we get much better results: we predict that 33 people will purchase insurance, and we are correct for about 33% of these people. This is over five times better than random guessing!

> glm.fit=glm(Purchase~., data=Caravan, family=binomial,
    subset=-test)
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
> glm.probs=predict(glm.fit, Caravan[test,], type="response")
> glm.pred=rep("No",1000)
> glm.pred[glm.probs>.5]="Yes"
> table(glm.pred,test.Y)
        test.Y
glm.pred  No Yes
     No  934  59
     Yes   7   0
> glm.pred=rep("No",1000)
> glm.pred[glm.probs>.25]="Yes"
> table(glm.pred,test.Y)
        test.Y
glm.pred  No Yes
     No  919  48
     Yes  22  11
> 11/(22+11)
[1] 0.333
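Before leaving this data set, one might want to compare several values of K at once. The following loop is offered only as a sketch; it reuses the training and test objects created above and reports, for each K, the fraction of predicted buyers who actually purchase insurance.

> for (k in c(1, 3, 5, 7)) {
+   set.seed(1)
+   pred <- knn(train.X, test.X, train.Y, k = k)
+   # NB: if no customers are predicted to buy, the ratio below is NaN
+   hit <- sum(pred == "Yes" & test.Y == "Yes") / sum(pred == "Yes")
+   cat("K =", k, " fraction of predicted buyers who buy:", round(hit, 3), "\n")
+ }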

4.7 Exercises

Conceptual

1. Using a little bit of algebra, prove that (4.2) is equivalent to (4.3). In other words, the logistic function representation and logit representation for the logistic regression model are equivalent.
2. It was stated in the text that classifying an observation to the class for which (4.12) is largest is equivalent to classifying an observation to the class for which (4.13) is largest. Prove that this is the case. In other words, under the assumption that the observations in the kth class are drawn from a N(μk, σ²) distribution, the Bayes classifier assigns an observation to the class for which the discriminant function is maximized.
3. This problem relates to the QDA model, in which the observations within each class are drawn from a normal distribution with a class-specific mean vector and a class-specific covariance matrix. We consider the simple case where p = 1; i.e. there is only one feature.
Suppose that we have K classes, and that if an observation belongs to the kth class then X comes from a one-dimensional normal distribution, X ~ N(μk, σk²). Recall that the density function for the one-dimensional normal distribution is given in (4.11). Prove that in this case, the Bayes classifier is not linear. Argue that it is in fact quadratic.
Hint: For this problem, you should follow the arguments laid out in Section 4.4.2, but without making the assumption that σ1² = ... = σK².
4. When the number of features p is large, there tends to be a deterioration in the performance of KNN and other local approaches that perform prediction using only observations that are near the test observation for which a prediction must be made. This phenomenon is known as the curse of dimensionality, and it ties into the fact that non-parametric approaches often perform poorly when p is large. We will now investigate this curse.
(a) Suppose that we have a set of observations, each with measurements on p = 1 feature, X. We assume that X is uniformly (evenly) distributed on [0, 1]. Associated with each observation is a response value. Suppose that we wish to predict a test observation's response using only observations that are within 10% of the range of X closest to that test observation. For instance, in order to predict the response for a test observation with X = 0.6, we will use observations in the range [0.55, 0.65]. On average, what fraction of the available observations will we use to make the prediction?
(b) Now suppose that we have a set of observations, each with measurements on p = 2 features, X1 and X2. We assume that (X1, X2) are uniformly distributed on [0, 1] × [0, 1]. We wish to predict a test observation's response using only observations that are within 10% of the range of X1 and within 10% of the range of X2 closest to that test observation. For instance, in order to predict the response for a test observation with X1 = 0.6 and X2 = 0.35, we will use observations in the range [0.55, 0.65] for X1 and in the range [0.3, 0.4] for X2. On average, what fraction of the available observations will we use to make the prediction?
(c) Now suppose that we have a set of observations on p = 100 features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation's response using observations within the 10% of each feature's range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?
(d) Using your answers to parts (a)–(c), argue that a drawback of KNN when p is large is that there are very few training observations "near" any given test observation.
(e) Now suppose that we wish to make a prediction for a test observation by creating a p-dimensional hypercube centered around the test observation that contains, on average, 10% of the training observations. For p = 1, 2, and 100, what is the length of each side of the hypercube? Comment on your answer.
Note: A hypercube is a generalization of a cube to an arbitrary number of dimensions. When p = 1, a hypercube is simply a line segment, when p = 2 it is a square, and when p = 100 it is a 100-dimensional cube.
5. We now examine the differences between LDA and QDA.
(a) If the Bayes decision boundary is linear, do we expect LDA or QDA to perform better on the training set? On the test set?
(b) If the Bayes decision boundary is non-linear, do we expect LDA or QDA to perform better on the training set? On the test set?
(c) In general, as the sample size n increases, do we expect the test prediction accuracy of QDA relative to LDA to improve, decline, or be unchanged? Why?

(d) True or False: Even if the Bayes decision boundary for a given problem is linear, we will probably achieve a superior test error rate using QDA rather than LDA because QDA is flexible enough to model a linear decision boundary. Justify your answer.
6. Suppose we collect data for a group of students in a statistics class with variables X1 = hours studied, X2 = undergrad GPA, and Y = receive an A. We fit a logistic regression and produce estimated coefficients β̂0 = −6, β̂1 = 0.05, β̂2 = 1.
(a) Estimate the probability that a student who studies for 40 h and has an undergrad GPA of 3.5 gets an A in the class.
(b) How many hours would the student in part (a) need to study to have a 50% chance of getting an A in the class?
7. Suppose that we wish to predict whether a given stock will issue a dividend this year ("Yes" or "No") based on X, last year's percent profit. We examine a large number of companies and discover that the mean value of X for companies that issued a dividend was X̄ = 10, while the mean for those that didn't was X̄ = 0. In addition, the variance of X for these two sets of companies was σ̂² = 36. Finally, 80% of companies issued dividends. Assuming that X follows a normal distribution, predict the probability that a company will issue a dividend this year given that its percentage profit was X = 4 last year.
Hint: Recall that the density function for a normal random variable is $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x-\mu)^2/(2\sigma^2)}$. You will need to use Bayes' theorem.
8. Suppose that we take a data set, divide it into equally-sized training and test sets, and then try out two different classification procedures. First we use logistic regression and get an error rate of 20% on the training data and 30% on the test data. Next we use 1-nearest neighbors (i.e. K = 1) and get an average error rate (averaged over both test and training data sets) of 18%. Based on these results, which method should we prefer to use for classification of new observations? Why?
9. This problem has to do with odds.
(a) On average, what fraction of people with an odds of 0.37 of defaulting on their credit card payment will in fact default?
(b) Suppose that an individual has a 16% chance of defaulting on her credit card payment. What are the odds that she will default?

Applied

10. This question should be answered using the Weekly data set, which is part of the ISLR package. This data is similar in nature to the Smarket data from this chapter's lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
(a) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
(b) Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
(c) Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
(d) Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag2 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held-out data (that is, the data from 2009 and 2010).
(e) Repeat (d) using LDA.
(f) Repeat (d) using QDA.
(g) Repeat (d) using KNN with K = 1.
(h) Which of these methods appears to provide the best results on this data?
(i) Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held-out data. Note that you should also experiment with values for K in the KNN classifier.
11. In this problem, you will develop a model to predict whether a given car gets high or low gas mileage based on the Auto data set.
(a) Create a binary variable, mpg01, that contains a 1 if mpg contains a value above its median, and a 0 if mpg contains a value below its median. You can compute the median using the median() function. Note you may find it helpful to use the data.frame() function to create a single data set containing both mpg01 and the other Auto variables.

(b) Explore the data graphically in order to investigate the association between mpg01 and the other features. Which of the other features seem most likely to be useful in predicting mpg01? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.
(c) Split the data into a training set and a test set.
(d) Perform LDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
(e) Perform QDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
(f) Perform logistic regression on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
(g) Perform KNN on the training data, with several values of K, in order to predict mpg01. Use only the variables that seemed most associated with mpg01 in (b). What test errors do you obtain? Which value of K seems to perform the best on this data set?
12. This problem involves writing functions.
(a) Write a function, Power(), that prints out the result of raising 2 to the 3rd power. In other words, your function should compute 2^3 and print out the results.
Hint: Recall that x^a raises x to the power a. Use the print() function to output the result.
(b) Create a new function, Power2(), that allows you to pass any two numbers, x and a, and prints out the value of x^a. You can do this by beginning your function with the line
> Power2=function(x,a){
You should be able to call your function by entering, for instance,
> Power2(3,8)
on the command line. This should output the value of 3^8, namely, 6,561.
(c) Using the Power2() function that you just wrote, compute 10^3, 8^17, and 131^3.
(d) Now create a new function, Power3(), that actually returns the result x^a as an R object, rather than simply printing it to the screen. That is, if you store the value x^a in an object called result within your function, then you can simply return() this result, using the following line:
return(result)
The line above should be the last line in your function, before the } symbol.
(e) Now using the Power3() function, create a plot of f(x) = x². The x-axis should display a range of integers from 1 to 10, and the y-axis should display x². Label the axes appropriately, and use an appropriate title for the figure. Consider displaying either the x-axis, the y-axis, or both on the log-scale. You can do this by using log="x", log="y", or log="xy" as arguments to the plot() function.
(f) Create a function, PlotPower(), that allows you to create a plot of x against x^a for a fixed a and for a range of values of x. For instance, if you call
> PlotPower(1:10,3)
then a plot should be created with an x-axis taking on values 1, 2, ..., 10, and a y-axis taking on values 1³, 2³, ..., 10³.
13. Using the Boston data set, fit classification models in order to predict whether a given suburb has a crime rate above or below the median. Explore logistic regression, LDA, and KNN models using various subsets of the predictors. Describe your findings.


5
Resampling Methods

Resampling methods are an indispensable tool in modern statistics. They involve repeatedly drawing samples from a training set and refitting a model of interest on each sample in order to obtain additional information about the fitted model. For example, in order to estimate the variability of a linear regression fit, we can repeatedly draw different samples from the training data, fit a linear regression to each new sample, and then examine the extent to which the resulting fits differ. Such an approach may allow us to obtain information that would not be available from fitting the model only once using the original training sample.
Resampling approaches can be computationally expensive, because they involve fitting the same statistical method multiple times using different subsets of the training data. However, due to recent advances in computing power, the computational requirements of resampling methods generally are not prohibitive. In this chapter, we discuss two of the most commonly used resampling methods, cross-validation and the bootstrap. Both methods are important tools in the practical application of many statistical learning procedures. For example, cross-validation can be used to estimate the test error associated with a given statistical learning method in order to evaluate its performance, or to select the appropriate level of flexibility. The process of evaluating a model's performance is known as model assessment, whereas the process of selecting the proper level of flexibility for a model is known as model selection. The bootstrap is used in several contexts, most commonly to provide a measure of accuracy of a parameter estimate or of a given statistical learning method.

5.1 Cross-Validation

In Chapter 2 we discuss the distinction between the test error rate and the training error rate. The test error is the average error that results from using a statistical learning method to predict the response on a new observation, that is, a measurement that was not used in training the method. Given a data set, the use of a particular statistical learning method is warranted if it results in a low test error. The test error can be easily calculated if a designated test set is available. Unfortunately, this is usually not the case. In contrast, the training error can be easily calculated by applying the statistical learning method to the observations used in its training. But as we saw in Chapter 2, the training error rate often is quite different from the test error rate, and in particular the former can dramatically underestimate the latter.
In the absence of a very large designated test set that can be used to directly estimate the test error rate, a number of techniques can be used to estimate this quantity using the available training data. Some methods make a mathematical adjustment to the training error rate in order to estimate the test error rate. Such approaches are discussed in Chapter 6. In this section, we instead consider a class of methods that estimate the test error rate by holding out a subset of the training observations from the fitting process, and then applying the statistical learning method to those held-out observations.
In Sections 5.1.1–5.1.4, for simplicity we assume that we are interested in performing regression with a quantitative response. In Section 5.1.5 we consider the case of classification with a qualitative response. As we will see, the key concepts remain the same regardless of whether the response is quantitative or qualitative.

5.1.1 The Validation Set Approach

Suppose that we would like to estimate the test error associated with fitting a particular statistical learning method on a set of observations. The validation set approach, displayed in Figure 5.1, is a very simple strategy for this task. It involves randomly dividing the available set of observations into two parts, a training set and a validation set or hold-out set. The model is fit on the training set, and the fitted model is used to predict the responses for the observations in the validation set. The resulting validation set error rate, typically assessed using MSE in the case of a quantitative response, provides an estimate of the test error rate.

FIGURE 5.1. A schematic display of the validation set approach. A set of n observations are randomly split into a training set (shown in blue, containing observations 7, 22, and 13, among others) and a validation set (shown in beige, and containing observation 91, among others). The statistical learning method is fit on the training set, and its performance is evaluated on the validation set.

We illustrate the validation set approach on the Auto data set. Recall from Chapter 3 that there appears to be a non-linear relationship between mpg and horsepower, and that a model that predicts mpg using horsepower and horsepower² gives better results than a model that uses only a linear term. It is natural to wonder whether a cubic or higher-order fit might provide even better results. We answer this question in Chapter 3 by looking at the p-values associated with a cubic term and higher-order polynomial terms in a linear regression. But we could also answer this question using the validation method. We randomly split the 392 observations into two sets, a training set containing 196 of the data points, and a validation set containing the remaining 196 observations.
The validation set error rates that result from fitting various regression models on the training sample and evaluating their performance on the validation sample, using MSE as a measure of validation set error, are shown in the left-hand panel of Figure 5.2. The validation set MSE for the quadratic fit is considerably smaller than for the linear fit. However, the validation set MSE for the cubic fit is actually slightly larger than for the quadratic fit. This implies that including a cubic term in the regression does not lead to better prediction than simply using a quadratic term.
Recall that in order to create the left-hand panel of Figure 5.2, we randomly divided the data set into two parts, a training set and a validation set. If we repeat the process of randomly splitting the sample set into two parts, we will get a somewhat different estimate for the test MSE. As an illustration, the right-hand panel of Figure 5.2 displays ten different validation set MSE curves from the Auto data set, produced using ten different random splits of the observations into training and validation sets. All ten curves indicate that the model with a quadratic term has a dramatically smaller validation set MSE than the model with only a linear term. Furthermore, all ten curves indicate that there is not much benefit in including cubic or higher-order polynomial terms in the model. But it is worth noting that each of the ten curves results in a different test MSE estimate for each of the ten regression models considered. And there is no consensus among the curves as to which model results in the smallest validation set MSE. Based on the variability among these curves, all that we can conclude with any confidence is that the linear fit is not adequate for this data.
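A minimal sketch of this computation in R is given below. It randomly splits the 392 Auto observations in half, fits polynomial regressions of degrees 1 through 5 on the training half, and records each model's validation MSE; the exact numbers obtained will of course depend on the random split.

> library(ISLR)
> set.seed(1)
> train <- sample(392, 196)          # indices of the training half
> val.mse <- rep(0, 5)
> for (d in 1:5) {
+   fit <- lm(mpg ~ poly(horsepower, d), data = Auto, subset = train)
+   # squared prediction errors on the held-out half only
+   val.mse[d] <- mean((Auto$mpg - predict(fit, Auto))[-train]^2)
+ }
> val.mse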

FIGURE 5.2. The validation set approach was used on the Auto data set in order to estimate the test error that results from predicting mpg using polynomial functions of horsepower. Left: Validation error estimates for a single split into training and validation data sets. Right: The validation method was repeated ten times, each time using a different random split of the observations into a training set and a validation set. This illustrates the variability in the estimated test MSE that results from this approach.

The validation set approach is conceptually simple and is easy to implement. But it has two potential drawbacks:
1. As is shown in the right-hand panel of Figure 5.2, the validation estimate of the test error rate can be highly variable, depending on precisely which observations are included in the training set and which observations are included in the validation set.
2. In the validation approach, only a subset of the observations (those that are included in the training set rather than in the validation set) are used to fit the model. Since statistical methods tend to perform worse when trained on fewer observations, this suggests that the validation set error rate may tend to overestimate the test error rate for the model fit on the entire data set.
In the coming subsections, we will present cross-validation, a refinement of the validation set approach that addresses these two issues.

5.1.2 Leave-One-Out Cross-Validation

Leave-one-out cross-validation (LOOCV) is closely related to the validation set approach of Section 5.1.1, but it attempts to address that method's drawbacks.
Like the validation set approach, LOOCV involves splitting the set of observations into two parts. However, instead of creating two subsets of comparable size, a single observation (x1, y1) is used for the validation set, and the remaining observations {(x2, y2), ..., (xn, yn)} make up the training set. The statistical learning method is fit on the n − 1 training observations, and a prediction ŷ1 is made for the excluded observation, using its value x1.

Since (x1, y1) was not used in the fitting process, MSE1 = (y1 − ŷ1)² provides an approximately unbiased estimate for the test error. But even though MSE1 is unbiased for the test error, it is a poor estimate because it is highly variable, since it is based upon a single observation (x1, y1).

FIGURE 5.3. A schematic display of LOOCV. A set of n data points is repeatedly split into a training set (shown in blue) containing all but one observation, and a validation set that contains only that observation (shown in beige). The test error is then estimated by averaging the n resulting MSE's. The first training set contains all but observation 1, the second training set contains all but observation 2, and so forth.

We can repeat the procedure by selecting (x2, y2) for the validation data, training the statistical learning procedure on the n − 1 observations {(x1, y1), (x3, y3), ..., (xn, yn)}, and computing MSE2 = (y2 − ŷ2)². Repeating this approach n times produces n squared errors, MSE1, ..., MSEn. The LOOCV estimate for the test MSE is the average of these n test error estimates:

$$\mathrm{CV}_{(n)} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{MSE}_i. \qquad (5.1)$$

A schematic of the LOOCV approach is illustrated in Figure 5.3.
LOOCV has a couple of major advantages over the validation set approach. First, it has far less bias. In LOOCV, we repeatedly fit the statistical learning method using training sets that contain n − 1 observations, almost as many as are in the entire data set. This is in contrast to the validation set approach, in which the training set is typically around half the size of the original data set. Consequently, the LOOCV approach tends not to overestimate the test error rate as much as the validation set approach does. Second, in contrast to the validation set approach, which will yield different results when applied repeatedly due to randomness in the training/validation set splits, performing LOOCV multiple times will always yield the same results: there is no randomness in the training/validation set splits.
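Formula (5.1) can be computed directly with a short loop. The sketch below, offered only as an illustration of the definition, applies it to a quadratic regression of mpg on horsepower in the Auto data (loaded via the ISLR package as in the earlier sketch); for least squares fits the shortcut formula given below is far cheaper.

> n <- nrow(Auto)
> errs <- rep(0, n)
> for (i in 1:n) {
+   fit <- lm(mpg ~ poly(horsepower, 2), data = Auto[-i, ])  # drop observation i
+   errs[i] <- (Auto$mpg[i] - predict(fit, newdata = Auto[i, ]))^2
+ }
> mean(errs)   # the LOOCV estimate CV_(n)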

FIGURE 5.4. Cross-validation was used on the Auto data set in order to estimate the test error that results from predicting mpg using polynomial functions of horsepower. Left: The LOOCV error curve. Right: 10-fold CV was run nine separate times, each with a different random split of the data into ten parts. The figure shows the nine slightly different CV error curves.

We used LOOCV on the Auto data set in order to obtain an estimate of the test set MSE that results from fitting a linear regression model to predict mpg using polynomial functions of horsepower. The results are shown in the left-hand panel of Figure 5.4.
LOOCV has the potential to be expensive to implement, since the model has to be fit n times. This can be very time consuming if n is large, and if each individual model is slow to fit. With least squares linear or polynomial regression, an amazing shortcut makes the cost of LOOCV the same as that of a single model fit! The following formula holds:

$$\mathrm{CV}_{(n)} = \frac{1}{n}\sum_{i=1}^{n} \left(\frac{y_i - \hat y_i}{1 - h_i}\right)^2, \qquad (5.2)$$

where ŷi is the ith fitted value from the original least squares fit, and hi is the leverage defined in (3.37) on page 98. This is like the ordinary MSE, except that the ith residual is divided by 1 − hi. The leverage lies between 1/n and 1, and reflects the amount that an observation influences its own fit. Hence the residuals for high-leverage points are inflated in this formula by exactly the right amount for this equality to hold.
LOOCV is a very general method, and can be used with any kind of predictive modeling. For example, we could use it with logistic regression or linear discriminant analysis, or any of the methods discussed in later chapters. The magic formula (5.2) does not hold in general, in which case the model has to be refit n times.

5.1.3 k-Fold Cross-Validation

An alternative to LOOCV is k-fold CV. This approach involves randomly dividing the set of observations into k groups, or folds, of approximately equal size. The first fold is treated as a validation set, and the method is fit on the remaining k − 1 folds. The mean squared error, MSE1, is then computed on the observations in the held-out fold. This procedure is repeated k times; each time, a different group of observations is treated as a validation set. This process results in k estimates of the test error, MSE1, MSE2, ..., MSEk. The k-fold CV estimate is computed by averaging these values,

$$\mathrm{CV}_{(k)} = \frac{1}{k}\sum_{i=1}^{k} \mathrm{MSE}_i. \qquad (5.3)$$

Figure 5.5 illustrates the k-fold CV approach.

FIGURE 5.5. A schematic display of 5-fold CV. A set of n observations is randomly split into five non-overlapping groups. Each of these fifths acts as a validation set (shown in beige), and the remainder as a training set (shown in blue). The test error is estimated by averaging the five resulting MSE estimates.

It is not hard to see that LOOCV is a special case of k-fold CV in which k is set to equal n. In practice, one typically performs k-fold CV using k = 5 or k = 10. What is the advantage of using k = 5 or k = 10 rather than k = n? The most obvious advantage is computational. LOOCV requires fitting the statistical learning method n times. This has the potential to be computationally expensive (except for linear models fit by least squares, in which case formula (5.2) can be used). But cross-validation is a very general approach that can be applied to almost any statistical learning method. Some statistical learning methods have computationally intensive fitting procedures, and so performing LOOCV may pose computational problems, especially if n is extremely large. In contrast, performing 10-fold CV requires fitting the learning procedure only ten times, which may be much more feasible. As we see in Section 5.1.4, there also can be other, non-computational advantages to performing 5-fold or 10-fold CV, which involve the bias-variance trade-off.

The right-hand panel of Figure 5.4 displays nine different 10-fold CV estimates for the Auto data set, each resulting from a different random split of the observations into ten folds. As we can see from the figure, there is some variability in the CV estimates as a result of the variability in how the observations are divided into ten folds. But this variability is typically much lower than the variability in the test error estimates that results from the validation set approach (right-hand panel of Figure 5.2).

FIGURE 5.6. True and estimated test MSE for the simulated data sets in Figures 2.9 (left), 2.10 (center), and 2.11 (right). The true test MSE is shown in blue, the LOOCV estimate is shown as a black dashed line, and the 10-fold CV estimate is shown in orange. The crosses indicate the minimum of each of the MSE curves.

When we examine real data, we do not know the true test MSE, and so it is difficult to determine the accuracy of the cross-validation estimate. However, if we examine simulated data, then we can compute the true test MSE, and can thereby evaluate the accuracy of our cross-validation results. In Figure 5.6, we plot the cross-validation estimates and true test error rates that result from applying smoothing splines to the simulated data sets illustrated in Figures 2.9–2.11 of Chapter 2. The true test MSE is displayed in blue. The black dashed and orange solid lines respectively show the estimated LOOCV and 10-fold CV estimates. In all three plots, the two cross-validation estimates are very similar. In the right-hand panel of Figure 5.6, the true test MSE and the cross-validation curves are almost identical. In the center panel of Figure 5.6, the two sets of curves are similar at the lower degrees of flexibility, while the CV curves overestimate the test set MSE for higher degrees of flexibility. In the left-hand panel of Figure 5.6, the CV curves have the correct general shape, but they underestimate the true test MSE.
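Ten-fold CV curves like the one in the right-hand panel of Figure 5.4 can be produced with a few lines of R. The sketch below, included purely as an illustration, uses the cv.glm() function from the boot package on the Auto data; the first element of the delta component is the CV estimate of test MSE.

> library(boot)
> set.seed(1)
> cv.err <- rep(0, 5)
> for (d in 1:5) {
+   fit <- glm(mpg ~ poly(horsepower, d), data = Auto)   # gaussian glm = linear regression
+   cv.err[d] <- cv.glm(Auto, fit, K = 10)$delta[1]
+ }
> cv.err   # 10-fold CV estimates of test MSE for degrees 1 through 5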

When we perform cross-validation, our goal might be to determine how well a given statistical learning procedure can be expected to perform on independent data; in this case, the actual estimate of the test MSE is of interest. But at other times we are interested only in the location of the minimum point in the estimated test MSE curve. This is because we might be performing cross-validation on a number of statistical learning methods, or on a single method using different levels of flexibility, in order to identify the method that results in the lowest test error. For this purpose, the location of the minimum point in the estimated test MSE curve is important, but the actual value of the estimated test MSE is not. We find in Figure 5.6 that despite the fact that they sometimes underestimate the true test MSE, all of the CV curves come close to identifying the correct level of flexibility, that is, the flexibility level corresponding to the smallest test MSE.

5.1.4 Bias-Variance Trade-Off for k-Fold Cross-Validation

We mentioned in Section 5.1.3 that k-fold CV with k < n has a computational advantage over LOOCV. But putting computational issues aside, a less obvious but potentially more important advantage of k-fold CV is that it often gives more accurate estimates of the test error rate than does LOOCV. This has to do with a bias-variance trade-off. From the perspective of bias, LOOCV is preferable: each of its training sets contains n − 1 observations, nearly the whole data set, so its test error estimates are approximately unbiased, whereas k-fold CV with k = 5 or k = 10 trains on roughly (k − 1)n/k observations and therefore yields estimates with an intermediate amount of bias. Variance, however, works in the opposite direction. LOOCV averages the outputs of n fitted models that are trained on nearly identical sets of observations and are therefore highly positively correlated with one another, while k-fold CV with k < n averages k fitted models whose training sets overlap less and whose outputs are less correlated.

Since the mean of many highly correlated quantities has higher variance than does the mean of many quantities that are not as highly correlated, the test error estimate resulting from LOOCV tends to have higher variance than does the test error estimate resulting from k-fold CV.
To summarize, there is a bias-variance trade-off associated with the choice of k in k-fold cross-validation. Typically, given these considerations, one performs k-fold cross-validation using k = 5 or k = 10, as these values have been shown empirically to yield test error rate estimates that suffer neither from excessively high bias nor from very high variance.

5.1.5 Cross-Validation on Classification Problems

In this chapter so far, we have illustrated the use of cross-validation in the regression setting where the outcome Y is quantitative, and so have used MSE to quantify test error. But cross-validation can also be a very useful approach in the classification setting when Y is qualitative. In this setting, cross-validation works just as described earlier in this chapter, except that rather than using MSE to quantify test error, we instead use the number of misclassified observations. For instance, in the classification setting, the LOOCV error rate takes the form

$$\mathrm{CV}_{(n)} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{Err}_i, \qquad (5.4)$$

where Err_i = I(y_i ≠ ŷ_i). The k-fold CV error rate and validation set error rates are defined analogously.
As an example, we fit various logistic regression models on the two-dimensional classification data displayed in Figure 2.13. In the top-left panel of Figure 5.7, the black solid line shows the estimated decision boundary resulting from fitting a standard logistic regression model to this data set. Since this is simulated data, we can compute the true test error rate, which takes a value of 0.201 and so is substantially larger than the Bayes error rate of 0.133. Clearly logistic regression does not have enough flexibility to model the Bayes decision boundary in this setting. We can easily extend logistic regression to obtain a non-linear decision boundary by using polynomial functions of the predictors, as we did in the regression setting in Section 3.3.2. For example, we can fit a quadratic logistic regression model, given by

$$\log\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 X_1 + \beta_2 X_1^2 + \beta_3 X_2 + \beta_4 X_2^2. \qquad (5.5)$$

The top-right panel of Figure 5.7 displays the resulting decision boundary, which is now curved. However, the test error rate has improved only slightly, to 0.197.

FIGURE 5.7. Logistic regression fits on the two-dimensional classification data displayed in Figure 2.13. The Bayes decision boundary is represented using a purple dashed line. Estimated decision boundaries from linear, quadratic, cubic, and quartic (degrees 1–4) logistic regressions are displayed in black. The test error rates for the four logistic regression fits are respectively 0.201, 0.197, 0.160, and 0.162, while the Bayes error rate is 0.133.

A much larger improvement is apparent in the bottom-left panel of Figure 5.7, in which we have fit a logistic regression model involving cubic polynomials of the predictors. Now the test error rate has decreased to 0.160. Going to a quartic polynomial (bottom-right) slightly increases the test error.
In practice, for real data, the Bayes decision boundary and the test error rates are unknown. So how might we decide between the four logistic regression models displayed in Figure 5.7? We can use cross-validation in order to make this decision.

The left-hand panel of Figure 5.8 displays, in black, the 10-fold CV error rates that result from fitting ten logistic regression models to the data, using polynomial functions of the predictors up to tenth order. The true test errors are shown in brown, and the training errors are shown in blue.

FIGURE 5.8. Test error (brown), training error (blue), and 10-fold CV error (black) on the two-dimensional classification data displayed in Figure 5.7. Left: Logistic regression using polynomial functions of the predictors. The order of the polynomials used is displayed on the x-axis. Right: The KNN classifier with different values of K, the number of neighbors used in the KNN classifier.

As we have seen previously, the training error tends to decrease as the flexibility of the fit increases. (The figure indicates that though the training error rate doesn't quite decrease monotonically, it tends to decrease on the whole as the model complexity increases.) In contrast, the test error displays a characteristic U-shape. The 10-fold CV error rate provides a pretty good approximation to the test error rate. While it somewhat underestimates the error rate, it reaches a minimum when fourth-order polynomials are used, which is very close to the minimum of the test curve, which occurs when third-order polynomials are used. In fact, using fourth-order polynomials would likely lead to good test set performance, as the true test error rate is approximately the same for third, fourth, fifth, and sixth-order polynomials.
The right-hand panel of Figure 5.8 displays the same three curves using the KNN approach for classification, as a function of the value of K (which in this context indicates the number of neighbors used in the KNN classifier, rather than the number of CV folds used). Again the training error rate declines as the method becomes more flexible, and so we see that the training error rate cannot be used to select the optimal value for K. Though the cross-validation error curve slightly underestimates the test error rate, it takes on a minimum very close to the best value for K.
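One way such a CV curve for KNN could be computed is sketched below, using the knn.cv() function from the class library, which produces leave-one-out predictions for a training set. Here X stands for a numeric matrix of predictors and y for the corresponding factor of class labels; both are placeholders for whatever data are at hand, not objects defined in the text.

> library(class)
> K.grid <- 1:20
> cv.err <- sapply(K.grid, function(k) mean(knn.cv(X, y, k = k) != y))
> K.grid[which.min(cv.err)]   # value of K with the smallest LOOCV error rate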

5.2 The Bootstrap

The bootstrap is a widely applicable and extremely powerful statistical tool that can be used to quantify the uncertainty associated with a given estimator or statistical learning method. As a simple example, the bootstrap can be used to estimate the standard errors of the coefficients from a linear regression fit. In the specific case of linear regression, this is not particularly useful, since we saw in Chapter 3 that standard statistical software such as R outputs such standard errors automatically. However, the power of the bootstrap lies in the fact that it can be easily applied to a wide range of statistical learning methods, including some for which a measure of variability is otherwise difficult to obtain and is not automatically output by statistical software.
In this section we illustrate the bootstrap on a toy example in which we wish to determine the best investment allocation under a simple model. In Section 5.3 we explore the use of the bootstrap to assess the variability associated with the regression coefficients in a linear model fit.
Suppose that we wish to invest a fixed sum of money in two financial assets that yield returns of X and Y, respectively, where X and Y are random quantities. We will invest a fraction α of our money in X, and will invest the remaining 1 − α in Y. Since there is variability associated with the returns on these two assets, we wish to choose α to minimize the total risk, or variance, of our investment. In other words, we want to minimize Var(αX + (1 − α)Y). One can show that the value that minimizes the risk is given by

$$\alpha = \frac{\sigma_Y^2 - \sigma_{XY}}{\sigma_X^2 + \sigma_Y^2 - 2\sigma_{XY}}, \qquad (5.6)$$

where σ_X² = Var(X), σ_Y² = Var(Y), and σ_XY = Cov(X, Y).
In reality, the quantities σ_X², σ_Y², and σ_XY are unknown. We can compute estimates for these quantities, σ̂_X², σ̂_Y², and σ̂_XY, using a data set that contains past measurements for X and Y. We can then estimate the value of α that minimizes the variance of our investment using

$$\hat\alpha = \frac{\hat\sigma_Y^2 - \hat\sigma_{XY}}{\hat\sigma_X^2 + \hat\sigma_Y^2 - 2\hat\sigma_{XY}}. \qquad (5.7)$$

Figure 5.9 illustrates this approach for estimating α on a simulated data set. In each panel, we simulated 100 pairs of returns for the investments X and Y. We used these returns to estimate σ_X², σ_Y², and σ_XY, which we then substituted into (5.7) in order to obtain estimates for α. The value of α̂ resulting from each simulated data set ranges from 0.532 to 0.657.
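Given numeric vectors x and y of past returns (placeholders for whatever data are available), the plug-in estimate (5.7) takes a single line of R; the following is only a sketch.

> alpha.hat <- function(x, y) {
+   # sample variances and covariance substituted into (5.7)
+   (var(y) - cov(x, y)) / (var(x) + var(y) - 2 * cov(x, y))
+ }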

FIGURE 5.9. Each panel displays 100 simulated returns for investments X and Y. From left to right and top to bottom, the resulting estimates for α are 0.576, 0.532, 0.657, and 0.651.

It is natural to wish to quantify the accuracy of our estimate of α. To estimate the standard deviation of α̂, we repeated the process of simulating 100 paired observations of X and Y and estimating α using (5.7) 1,000 times. We thereby obtained 1,000 estimates for α, which we can call α̂_1, α̂_2, ..., α̂_1,000. The left-hand panel of Figure 5.10 displays a histogram of the resulting estimates. For these simulations the parameters were set to σ_X^2 = 1, σ_Y^2 = 1.25, and σ_XY = 0.5, and so we know that the true value of α is 0.6. We indicated this value using a solid vertical line on the histogram. The mean over all 1,000 estimates for α is

    ᾱ = (1/1,000) Σ_{r=1}^{1,000} α̂_r = 0.5996,

very close to α = 0.6, and the standard deviation of the estimates is

    sqrt( (1/(1,000 − 1)) Σ_{r=1}^{1,000} (α̂_r − ᾱ)^2 ) = 0.083.

This gives us a very good idea of the accuracy of α̂: SE(α̂) ≈ 0.083. So roughly speaking, for a random sample from the population, we would expect α̂ to differ from α by approximately 0.08, on average.
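The repeated-simulation calculation just described can be sketched in a few lines. The parameters σ_X^2 = 1, σ_Y^2 = 1.25, and σ_XY = 0.5 come from the text; the use of a bivariate Gaussian via MASS::mvrnorm() and the object names are assumptions of ours, since the text does not specify how its returns were generated.

> library(MASS)    # for mvrnorm()
> set.seed(2)
> Sigma=matrix(c(1,0.5,0.5,1.25),2,2)   # Var(X)=1, Cov(X,Y)=0.5, Var(Y)=1.25
> alpha.hat=rep(0,1000)
> for (r in 1:1000){
+  returns=mvrnorm(100,mu=c(0,0),Sigma=Sigma)
+  X=returns[,1]; Y=returns[,2]
+  alpha.hat[r]=(var(Y)-cov(X,Y))/(var(X)+var(Y)-2*cov(X,Y))
+ }
> mean(alpha.hat)   # should be close to the true value of 0.6
> sd(alpha.hat)     # an estimate of SE(alpha-hat)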

FIGURE 5.10. Left: A histogram of the estimates of α obtained by generating 1,000 simulated data sets from the true population. Center: A histogram of the estimates of α obtained from 1,000 bootstrap samples from a single data set. Right: The estimates of α displayed in the left and center panels are shown as boxplots. In each panel, the pink line indicates the true value of α.

In practice, however, the procedure for estimating SE(α̂) outlined above cannot be applied, because for real data we cannot generate new samples from the original population. However, the bootstrap approach allows us to use a computer to emulate the process of obtaining new sample sets, so that we can estimate the variability of α̂ without generating additional samples. Rather than repeatedly obtaining independent data sets from the population, we instead obtain distinct data sets by repeatedly sampling observations from the original data set.

This approach is illustrated in Figure 5.11 on a simple data set, which we call Z, that contains only n = 3 observations. We randomly select n observations from the data set in order to produce a bootstrap data set, Z*1. The sampling is performed with replacement, which means that the same observation can occur more than once in the bootstrap data set. In this example, Z*1 contains the third observation twice, the first observation once, and no instances of the second observation. Note that if an observation is contained in Z*1, then both its X and Y values are included. We can use Z*1 to produce a new bootstrap estimate for α, which we call α̂*1. This procedure is repeated B times for some large value of B, in order to produce B different bootstrap data sets, Z*1, Z*2, ..., Z*B, and B corresponding α estimates, α̂*1, α̂*2, ..., α̂*B. We can compute the standard error of these bootstrap estimates using the formula

    SE_B(α̂) = sqrt( (1/(B − 1)) Σ_{r=1}^{B} ( α̂*r − ᾱ* )^2 ),  where ᾱ* = (1/B) Σ_{r'=1}^{B} α̂*r'.        (5.8)

This serves as an estimate of the standard error of α̂ estimated from the original data set.

The bootstrap approach is illustrated in the center panel of Figure 5.10, which displays a histogram of 1,000 bootstrap estimates of α, each computed using a distinct bootstrap data set. This panel was constructed on the basis of a single data set, and hence could be created using real data.
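Formula (5.8) is straightforward to implement directly. The sketch below resamples the rows of a hypothetical two-column data frame Z (with columns X and Y, as in Figure 5.11, but with more observations) B = 1,000 times; the object names are ours, and the boot() function used in the lab of Section 5.3.4 automates the same computation.

> set.seed(4)
> Z=data.frame(X=rnorm(100),Y=rnorm(100))   # stand-in for an observed data set
> B=1000
> alpha.star=rep(0,B)
> for (r in 1:B){
+  idx=sample(nrow(Z),nrow(Z),replace=TRUE)   # sample n rows with replacement
+  Xb=Z$X[idx]; Yb=Z$Y[idx]
+  alpha.star[r]=(var(Yb)-cov(Xb,Yb))/(var(Xb)+var(Yb)-2*cov(Xb,Yb))
+ }
> sd(alpha.star)   # bootstrap estimate of SE(alpha-hat), as in (5.8)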

FIGURE 5.11. A graphical illustration of the bootstrap approach on a small sample containing n = 3 observations. Each bootstrap data set contains n observations, sampled with replacement from the original data set Z. Each bootstrap data set is used to obtain an estimate of α.

Note that the histogram looks very similar to the left-hand panel, which displays the idealized histogram of the estimates of α obtained by generating 1,000 simulated data sets from the true population. In particular the bootstrap estimate SE(α̂) from (5.8) is 0.087, very close to the estimate of 0.083 obtained using 1,000 simulated data sets. The right-hand panel displays the information in the center and left panels in a different way, via boxplots of the estimates for α obtained by generating 1,000 simulated data sets from the true population and using the bootstrap approach. Again, the boxplots are quite similar to each other, indicating that the bootstrap approach can be used to effectively estimate the variability associated with α̂.

5.3 Lab: Cross-Validation and the Bootstrap

In this lab, we explore the resampling techniques covered in this chapter. Some of the commands in this lab may take a while to run on your computer.

5.3.1 The Validation Set Approach

We explore the use of the validation set approach in order to estimate the test error rates that result from fitting various linear models on the Auto data set.

Before we begin, we use the set.seed() function in order to set a seed for R's random number generator, so that the reader of this book will obtain precisely the same results as those shown below. It is generally a good idea to set a random seed when performing an analysis such as cross-validation that contains an element of randomness, so that the results obtained can be reproduced precisely at a later time.

We begin by using the sample() function to split the set of observations into two halves, by selecting a random subset of 196 observations out of the original 392 observations. We refer to these observations as the training set.

> library(ISLR)
> set.seed(1)
> train=sample(392,196)

(Here we use a shortcut in the sample command; see ?sample for details.) We then use the subset option in lm() to fit a linear regression using only the observations corresponding to the training set.

> lm.fit=lm(mpg~horsepower,data=Auto,subset=train)

We now use the predict() function to estimate the response for all 392 observations, and we use the mean() function to calculate the MSE of the 196 observations in the validation set. Note that the -train index below selects only the observations that are not in the training set.

> attach(Auto)
> mean((mpg-predict(lm.fit,Auto))[-train]^2)
[1] 26.14

Therefore, the estimated test MSE for the linear regression fit is 26.14. We can use the poly() function to estimate the test error for the quadratic and cubic regressions.

> lm.fit2=lm(mpg~poly(horsepower,2),data=Auto,subset=train)
> mean((mpg-predict(lm.fit2,Auto))[-train]^2)
[1] 19.82
> lm.fit3=lm(mpg~poly(horsepower,3),data=Auto,subset=train)
> mean((mpg-predict(lm.fit3,Auto))[-train]^2)
[1] 19.78

These error rates are 19.82 and 19.78, respectively. If we choose a different training set instead, then we will obtain somewhat different errors on the validation set.

> set.seed(2)
> train=sample(392,196)
> lm.fit=lm(mpg~horsepower,subset=train)

> mean((mpg-predict(lm.fit,Auto))[-train]^2)
[1] 23.30
> lm.fit2=lm(mpg~poly(horsepower,2),data=Auto,subset=train)
> mean((mpg-predict(lm.fit2,Auto))[-train]^2)
[1] 18.90
> lm.fit3=lm(mpg~poly(horsepower,3),data=Auto,subset=train)
> mean((mpg-predict(lm.fit3,Auto))[-train]^2)
[1] 19.26

Using this split of the observations into a training set and a validation set, we find that the validation set error rates for the models with linear, quadratic, and cubic terms are 23.30, 18.90, and 19.26, respectively.

These results are consistent with our previous findings: a model that predicts mpg using a quadratic function of horsepower performs better than a model that involves only a linear function of horsepower, and there is little evidence in favor of a model that uses a cubic function of horsepower.

5.3.2 Leave-One-Out Cross-Validation

The LOOCV estimate can be automatically computed for any generalized linear model using the glm() and cv.glm() functions. In the lab for Chapter 4, we used the glm() function to perform logistic regression by passing in the family="binomial" argument. But if we use glm() to fit a model without passing in the family argument, then it performs linear regression, just like the lm() function. So for instance,

> glm.fit=glm(mpg~horsepower,data=Auto)
> coef(glm.fit)
(Intercept)  horsepower
     39.936      -0.158

and

> lm.fit=lm(mpg~horsepower,data=Auto)
> coef(lm.fit)
(Intercept)  horsepower
     39.936      -0.158

yield identical linear regression models. In this lab, we will perform linear regression using the glm() function rather than the lm() function because the former can be used together with cv.glm(). The cv.glm() function is part of the boot library.

> library(boot)
> glm.fit=glm(mpg~horsepower,data=Auto)
> cv.err=cv.glm(Auto,glm.fit)
> cv.err$delta
[1] 24.23 24.23

The cv.glm() function produces a list with several components. The two numbers in the delta vector contain the cross-validation results. In this

case the numbers are identical (up to two decimal places) and correspond to the LOOCV statistic given in (5.1). Below, we discuss a situation in which the two numbers differ. Our cross-validation estimate for the test error is approximately 24.23.

We can repeat this procedure for increasingly complex polynomial fits. To automate the process, we use the for() function to initiate a for loop which iteratively fits polynomial regressions for polynomials of order i = 1 to i = 5, computes the associated cross-validation error, and stores it in the ith element of the vector cv.error. We begin by initializing the vector. This command will likely take a couple of minutes to run.

> cv.error=rep(0,5)
> for (i in 1:5){
+  glm.fit=glm(mpg~poly(horsepower,i),data=Auto)
+  cv.error[i]=cv.glm(Auto,glm.fit)$delta[1]
+ }
> cv.error
[1] 24.23 19.25 19.33 19.42 19.03

As in Figure 5.4, we see a sharp drop in the estimated test MSE between the linear and quadratic fits, but then no clear improvement from using higher-order polynomials.

5.3.3 k-Fold Cross-Validation

The cv.glm() function can also be used to implement k-fold CV. Below we use k = 10, a common choice for k, on the Auto data set. We once again set a random seed and initialize a vector in which we will store the CV errors corresponding to the polynomial fits of orders one to ten.

> set.seed(17)
> cv.error.10=rep(0,10)
> for (i in 1:10){
+  glm.fit=glm(mpg~poly(horsepower,i),data=Auto)
+  cv.error.10[i]=cv.glm(Auto,glm.fit,K=10)$delta[1]
+ }
> cv.error.10
[1] 24.21 19.19 19.31 19.34 18.88 19.02 18.90 19.71 18.95 19.50

Notice that the computation time is much shorter than that of LOOCV. (In principle, the computation time for LOOCV for a least squares linear model should be faster than for k-fold CV, due to the availability of the formula (5.2) for LOOCV; however, unfortunately the cv.glm() function does not make use of this formula.) We still see little evidence that using cubic or higher-order polynomial terms leads to lower test error than simply using a quadratic fit.

We saw in Section 5.3.2 that the two numbers associated with delta are essentially the same when LOOCV is performed. When we instead perform k-fold CV, then the two numbers associated with delta differ slightly. The

first is the standard k-fold CV estimate, as in (5.3). The second is a bias-corrected version. On this data set, the two estimates are very similar to each other.

5.3.4 The Bootstrap

We illustrate the use of the bootstrap in the simple example of Section 5.2, as well as on an example involving estimating the accuracy of the linear regression model on the Auto data set.

Estimating the Accuracy of a Statistic of Interest

One of the great advantages of the bootstrap approach is that it can be applied in almost all situations. No complicated mathematical calculations are required. Performing a bootstrap analysis in R entails only two steps. First, we must create a function that computes the statistic of interest. Second, we use the boot() function, which is part of the boot library, to perform the bootstrap by repeatedly sampling observations from the data set with replacement.

The Portfolio data set in the ISLR package is described in Section 5.2. To illustrate the use of the bootstrap on this data, we must first create a function, alpha.fn(), which takes as input the (X, Y) data as well as a vector indicating which observations should be used to estimate α. The function then outputs the estimate for α based on the selected observations.

> alpha.fn=function(data,index){
+  X=data$X[index]
+  Y=data$Y[index]
+  return((var(Y)-cov(X,Y))/(var(X)+var(Y)-2*cov(X,Y)))
+ }

This function returns, or outputs, an estimate for α based on applying (5.7) to the observations indexed by the argument index. For instance, the following command tells R to estimate α using all 100 observations.

> alpha.fn(Portfolio,1:100)
[1] 0.576

The next command uses the sample() function to randomly select 100 observations from the range 1 to 100, with replacement. This is equivalent to constructing a new bootstrap data set and recomputing α̂ based on the new data set.

> set.seed(1)
> alpha.fn(Portfolio,sample(100,100,replace=T))
[1] 0.596

We can implement a bootstrap analysis by performing this command many times, recording all of the corresponding estimates for α, and computing

the resulting standard deviation. However, the boot() function automates this approach. Below we produce R = 1,000 bootstrap estimates for α.

> boot(Portfolio,alpha.fn,R=1000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Call:
boot(data = Portfolio, statistic = alpha.fn, R = 1000)

Bootstrap Statistics :
     original        bias    std. error
t1*    0.5758    -7.315e-05      0.0886

The final output shows that using the original data, α̂ = 0.5758, and that the bootstrap estimate for SE(α̂) is 0.0886.

Estimating the Accuracy of a Linear Regression Model

The bootstrap approach can be used to assess the variability of the coefficient estimates and predictions from a statistical learning method. Here we use the bootstrap approach in order to assess the variability of the estimates for β_0 and β_1, the intercept and slope terms for the linear regression model that uses horsepower to predict mpg in the Auto data set. We will compare the estimates obtained using the bootstrap to those obtained using the formulas for SE(β̂_0) and SE(β̂_1) described in Section 3.1.2.

We first create a simple function, boot.fn(), which takes in the Auto data set as well as a set of indices for the observations, and returns the intercept and slope estimates for the linear regression model. We then apply this function to the full set of 392 observations in order to compute the estimates of β_0 and β_1 on the entire data set using the usual linear regression coefficient estimate formulas from Chapter 3. Note that we do not need the { and } at the beginning and end of the function because it is only one line long.

> boot.fn=function(data,index)
+  return(coef(lm(mpg~horsepower,data=data,subset=index)))
> boot.fn(Auto,1:392)
(Intercept)  horsepower
     39.936      -0.158

The boot.fn() function can also be used in order to create bootstrap estimates for the intercept and slope terms by randomly sampling from among the observations with replacement. Here we give two examples.

> set.seed(1)
> boot.fn(Auto,sample(392,392,replace=T))
(Intercept)  horsepower
     38.739      -0.148
> boot.fn(Auto,sample(392,392,replace=T))
(Intercept)  horsepower
     40.038      -0.160

Next, we use the boot() function to compute the standard errors of 1,000 bootstrap estimates for the intercept and slope terms.

> boot(Auto,boot.fn,1000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Call:
boot(data = Auto, statistic = boot.fn, R = 1000)

Bootstrap Statistics :
     original      bias    std. error
t1*    39.936    0.0297        0.8600
t2*    -0.158   -0.0003        0.0074

This indicates that the bootstrap estimate for SE(β̂_0) is 0.86, and that the bootstrap estimate for SE(β̂_1) is 0.0074. As discussed in Section 3.1.2, standard formulas can be used to compute the standard errors for the regression coefficients in a linear model. These can be obtained using the summary() function.

> summary(lm(mpg~horsepower,data=Auto))$coef
             Estimate  Std. Error  t value   Pr(>|t|)
(Intercept)    39.936     0.71750     55.7  1.22e-187
horsepower     -0.158     0.00645    -24.5   7.03e-81

The standard error estimates for β̂_0 and β̂_1 obtained using the formulas from Section 3.1.2 are 0.717 for the intercept and 0.0064 for the slope. Interestingly, these are somewhat different from the estimates obtained using the bootstrap. Does this indicate a problem with the bootstrap? In fact, it suggests the opposite. Recall that the standard formulas given in Equation 3.8 on page 66 rely on certain assumptions. For example, they depend on the unknown parameter σ^2, the noise variance. We then estimate σ^2 using the RSS. Now although the formulas for the standard errors do not rely on the linear model being correct, the estimate for σ^2 does. We see in Figure 3.8 on page 91 that there is a non-linear relationship in the data, and so the residuals from a linear fit will be inflated, and so will σ̂^2. Secondly, the standard formulas assume (somewhat unrealistically) that the x_i are fixed, and all the variability comes from the variation in the errors ε_i. The bootstrap approach does not rely on any of these assumptions, and so it is likely giving a more accurate estimate of the standard errors of β̂_0 and β̂_1 than is the summary() function.

Below we compute the bootstrap standard error estimates and the standard linear regression estimates that result from fitting the quadratic model to the data. Since this model provides a good fit to the data (Figure 3.8), there is now a better correspondence between the bootstrap estimates and the standard estimates of SE(β̂_0), SE(β̂_1), and SE(β̂_2).

> boot.fn=function(data,index)
+  coefficients(lm(mpg~horsepower+I(horsepower^2),data=data,subset=index))
> set.seed(1)
> boot(Auto,boot.fn,1000)

ORDINARY NONPARAMETRIC BOOTSTRAP

Call:
boot(data = Auto, statistic = boot.fn, R = 1000)

Bootstrap Statistics :
     original        bias    std. error
t1*    56.900     6.098e-03      2.0945
t2*    -0.466    -1.777e-04      0.0334
t3*     0.001     1.324e-06      0.0001

> summary(lm(mpg~horsepower+I(horsepower^2),data=Auto))$coef
                  Estimate  Std. Error  t value   Pr(>|t|)
(Intercept)        56.9001     1.80043       32   1.7e-109
horsepower         -0.4662     0.03112      -15    2.3e-40
I(horsepower^2)     0.0012     0.00012       10    2.2e-21

5.4 Exercises

Conceptual

1. Using basic statistical properties of the variance, as well as single-variable calculus, derive (5.6). In other words, prove that α given by (5.6) does indeed minimize Var(αX + (1 − α)Y).

2. We will now derive the probability that a given observation is part of a bootstrap sample. Suppose that we obtain a bootstrap sample from a set of n observations.

 (a) What is the probability that the first bootstrap observation is not the jth observation from the original sample? Justify your answer.

 (b) What is the probability that the second bootstrap observation is not the jth observation from the original sample?

 (c) Argue that the probability that the jth observation is not in the bootstrap sample is (1 − 1/n)^n.

 (d) When n = 5, what is the probability that the jth observation is in the bootstrap sample?

 (e) When n = 100, what is the probability that the jth observation is in the bootstrap sample?

213 198 5. Resampling Methods n =10 000, what is the probability that the j th observa- (f) When , tion is in the bootstrap sample? n (g) Create a plot that displays, for each integer value of from 1 000, the probability that the to 100 th observation is in the , j bootstrap sample. Comment on what you observe. (h) We will now investigate numerically the probability that a boot- strap sample of size = 100 contains the j th observation. Here n j = 4. We repeatedly create bootstrap samples, and each time h observation is contained in we record whether or not the fourt the bootstrap sample. > store=rep(NA, 10000) > for(i in 1:10000){ store[i]=sum(sample(1:100, rep=TRUE)==4)>0 } > mean(store) Comment on the results obtained. 3. We now review k -fold cross-validation. (a) Explain how k -fold cross-validation is implemented. (b) What are the advantages and disadvantages of k -fold cross- validation relative to: i. The validation set approach? ii. LOOCV? 4. Suppose that we use some statistical learning method to make a pre- Y for a particular value of the predictor diction for the response . X Carefully describe how we might estimate the standard deviation of our prediction. Applied 5. In Chapter 4, we used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis. to income and balance (a) Fit a logistic regression model that uses predict default . (b) Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps: i. Split the sample set into a training set and a validation set.

214 5.4 Exercises 199 ii. Fit a multiple logistic regression model using only the train- ing observations. iii. Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to category if the posterior probability is greater default the than 0.5. iv. Compute the validation set error, which is the fraction of the observations in the validation set that are misclassified. (c) Repeat the process in (b) three t imes, using three different splits of the observations into a training set and a validation set. Com- ment on the results obtained. (d) Now consider a logistic regression model that predicts the prob- ability of default using income , balance , and a dummy variable for . Estimate the test error for this model using the val- student idation set approach. Comment on whether or not including a leads to a reduction in the test error student dummy variable for rate. 6. We continue to consider the use of a logistic regression model to predict the probability of using income and default on the balance Default data set. In particular, we will now compute estimates for income and balance logistic regression co- the standard errors of the efficients in two different ways: (1) using the bootstrap, and (2) using the standard formula for computing the standard errors in the glm() function. Do not forget to set a random seed before beginning your analysis. and glm() functions, determine the esti- summary() (a) Using the income mated standard errors for the coefficients associated with balance and in a multiple logistic regression model that uses both predictors. (b) Write a function, boot.fn() , that takes as input the Default data set as well as an index of the observations, and that outputs the coefficient estimates for income and balance in the multiple logistic regression model. boot() function together with your boot.fn() function to (c) Use the estimate the standard errors of th e logistic regression coefficients . income and balance for (d) Comment on the estimated standard errors obtained using the glm() function and using your bootstrap function. function can be cv.glm() 7. In Sections 5.3.2 and 5.3.3, we saw that the used in order to compute the LOOCV test error estimate. Alterna- glm() and tively, one could compute those quantities using just the

215 200 5. Resampling Methods functions, and a for loop. You will now take this ap- predict.glm() proach in order to compute the LOOCV error for a simple logistic regression model on the data set. Recall that in the context Weekly of classification problems, the LOOCV error is given in (5.4). Direction using Lag1 (a) Fit a logistic regression model that predicts . and Lag2 using Lag1 Direction (b) Fit a logistic regression model that predicts and Lag2 using all but the first observation . (c) Use the model from (b) to predict the direction of the first obser- vation. You can do this by predicting that the first observation P will go up if ( | 5.Wasthisob- , Lag2 ) > 0 . Direction="Up" Lag1 servation correctly classified? i =1to i = n ,where n is the number of (d) Write a for loop from observations in the data set, that performs each of the following steps: i. Fit a logistic regression model using all but the th obser- i Direction Lag1 and Lag2 . using vation to predict ii. Compute the posterior probability of the market moving up i th observation. for the i th observation in order iii. Use the posterior probability for the to predict whether or not the market moves up. iv. Determine whether or not an error was made in predicting the direction for the i th observation. If an error was made, then indicate this as a 1, and otherwise indicate it as a 0. (e) Take the average of the n numbers obtained in (d)iv in order to obtain the LOOCV estimate for the test error. Comment on the results. 8. We will now perform cross-validation on a simulated data set. (a) Generate a simulated data set as follows: > set.seed(1) > y=rnorm(100) > x=rnorm(100) > y=x-2*x^2+rnorm(100) In this data set, what is n and what is p ? Write out the model used to generate the data in equation form. (b) Create a scatterplot of X against Y . Comment on what you find. (c) Set a random seed, and then compute the LOOCV errors that result from fitting the following four models using least squares:

   i. Y = β_0 + β_1 X + ε
   ii. Y = β_0 + β_1 X + β_2 X^2 + ε
   iii. Y = β_0 + β_1 X + β_2 X^2 + β_3 X^3 + ε
   iv. Y = β_0 + β_1 X + β_2 X^2 + β_3 X^3 + β_4 X^4 + ε

 Note you may find it helpful to use the data.frame() function to create a single data set containing both X and Y.

 (d) Repeat (c) using another random seed, and report your results. Are your results the same as what you got in (c)? Why?

 (e) Which of the models in (c) had the smallest LOOCV error? Is this what you expected? Explain your answer.

 (f) Comment on the statistical significance of the coefficient estimates that results from fitting each of the models in (c) using least squares. Do these results agree with the conclusions drawn based on the cross-validation results?

9. We will now consider the Boston housing data set, from the MASS library.

 (a) Based on this data set, provide an estimate for the population mean of medv. Call this estimate μ̂.

 (b) Provide an estimate of the standard error of μ̂. Interpret this result.
 Hint: We can compute the standard error of the sample mean by dividing the sample standard deviation by the square root of the number of observations.

 (c) Now estimate the standard error of μ̂ using the bootstrap. How does this compare to your answer from (b)?

 (d) Based on your bootstrap estimate from (c), provide a 95% confidence interval for the mean of medv. Compare it to the results obtained using t.test(Boston$medv).
 Hint: You can approximate a 95% confidence interval using the formula [μ̂ − 2 SE(μ̂), μ̂ + 2 SE(μ̂)].

 (e) Based on this data set, provide an estimate, μ̂_med, for the median value of medv in the population.

 (f) We now would like to estimate the standard error of μ̂_med. Unfortunately, there is no simple formula for computing the standard error of the median. Instead, estimate the standard error of the median using the bootstrap. Comment on your findings.

 (g) Based on this data set, provide an estimate for the tenth percentile of medv in Boston suburbs. Call this quantity μ̂_0.1. (You can use the quantile() function.)

 (h) Use the bootstrap to estimate the standard error of μ̂_0.1. Comment on your findings.


6
Linear Model Selection and Regularization

In the regression setting, the standard linear model

    Y = β_0 + β_1 X_1 + ··· + β_p X_p + ε        (6.1)

is commonly used to describe the relationship between a response Y and a set of variables X_1, X_2, ..., X_p. We have seen in Chapter 3 that one typically fits this model using least squares.

In the chapters that follow, we consider some approaches for extending the linear model framework. In Chapter 7 we generalize (6.1) in order to accommodate non-linear, but still additive, relationships, while in Chapter 8 we consider even more general non-linear models. However, the linear model has distinct advantages in terms of inference and, on real-world problems, is often surprisingly competitive in relation to non-linear methods. Hence, before moving to the non-linear world, we discuss in this chapter some ways in which the simple linear model can be improved, by replacing plain least squares fitting with some alternative fitting procedures.

Why might we want to use another fitting procedure instead of least squares? As we will see, alternative fitting procedures can yield better prediction accuracy and model interpretability.

• Prediction Accuracy: Provided that the true relationship between the response and the predictors is approximately linear, the least squares estimates will have low bias. If n ≫ p, that is, if n, the number of observations, is much larger than p, the number of variables, then the least squares estimates tend to also have low variance, and hence will perform well on test observations. However, if n is not much larger

than p, then there can be a lot of variability in the least squares fit, resulting in overfitting and consequently poor predictions on future observations not used in model training. And if p > n, then there is no longer a unique least squares coefficient estimate: the variance is infinite so the method cannot be used at all. By constraining or shrinking the estimated coefficients, we can often substantially reduce the variance at the cost of a negligible increase in bias. This can lead to substantial improvements in the accuracy with which we can predict the response for observations not used in model training.

• Model Interpretability: It is often the case that some or many of the variables used in a multiple regression model are in fact not associated with the response. Including such irrelevant variables leads to unnecessary complexity in the resulting model. By removing these variables, that is, by setting the corresponding coefficient estimates to zero, we can obtain a model that is more easily interpreted. Now least squares is extremely unlikely to yield any coefficient estimates that are exactly zero. In this chapter, we see some approaches for automatically performing feature selection or variable selection, that is, for excluding irrelevant variables from a multiple regression model.

There are many alternatives, both classical and modern, to using least squares to fit (6.1). In this chapter, we discuss three important classes of methods.

• Subset Selection. This approach involves identifying a subset of the p predictors that we believe to be related to the response. We then fit a model using least squares on the reduced set of variables.

• Shrinkage. This approach involves fitting a model involving all p predictors. However, the estimated coefficients are shrunken towards zero relative to the least squares estimates. This shrinkage (also known as regularization) has the effect of reducing variance. Depending on what type of shrinkage is performed, some of the coefficients may be estimated to be exactly zero. Hence, shrinkage methods can also perform variable selection.

• Dimension Reduction. This approach involves projecting the p predictors into an M-dimensional subspace, where M < p. This is achieved by computing M different linear combinations, or projections, of the variables, which are then used as predictors to fit a linear regression model by least squares.

In the sections that follow, we describe each of these approaches in greater detail, along with their advantages and disadvantages.

6.1 Subset Selection

In this section we consider some methods for selecting subsets of predictors. These include best subset and stepwise model selection procedures.

6.1.1 Best Subset Selection

To perform best subset selection, we fit a separate least squares regression for each possible combination of the p predictors. That is, we fit all p models that contain exactly one predictor, all (p choose 2) = p(p − 1)/2 models that contain exactly two predictors, and so forth. We then look at all of the resulting models, with the goal of identifying the one that is best.

The problem of selecting the best model from among the 2^p possibilities considered by best subset selection is not trivial. This is usually broken up into two stages, as described in Algorithm 6.1.

Algorithm 6.1  Best subset selection

1. Let M_0 denote the null model, which contains no predictors. This model simply predicts the sample mean for each observation.

2. For k = 1, 2, ..., p:
   (a) Fit all (p choose k) models that contain exactly k predictors.
   (b) Pick the best among these (p choose k) models, and call it M_k. Here best is defined as having the smallest RSS, or equivalently largest R^2.

3. Select a single best model from among M_0, ..., M_p using cross-validated prediction error, C_p (AIC), BIC, or adjusted R^2.

In Algorithm 6.1, Step 2 identifies the best model (on the training data) for each subset size, in order to reduce the problem from one of 2^p possible models to one of p + 1 possible models. In Figure 6.1, these models form the lower frontier depicted in red.

Now in order to select a single best model, we must simply choose among these p + 1 options. This task must be performed with care, because the RSS of these p + 1 models decreases monotonically, and the R^2 increases monotonically, as the number of features included in the models increases. Therefore, if we use these statistics to select the best model, then we will always end up with a model involving all of the variables. The problem is that a low RSS or a high R^2 indicates a model with a low training error, whereas we wish to choose a model that has a low test error. (As shown in Chapter 2 in Figures 2.9–2.11, training error tends to be quite a bit smaller than test error, and a low training error by no means guarantees a low test error.) Therefore, in Step 3, we use cross-validated prediction error, C_p (AIC), BIC, or adjusted R^2 in order to select among M_0, M_1, ..., M_p.
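In R, best subset selection as described in Algorithm 6.1 can be carried out with the regsubsets() function from the leaps package, which appears in the lab later in this chapter. The sketch below, applied to the Credit data from the ISLR package, is only an outline; the object names are ours, and we assume the data set's ID column is its first column.

> library(ISLR)
> library(leaps)
> # Step 2 of Algorithm 6.1: best model of each size, ranked by RSS
> regfit.full=regsubsets(Balance~.,data=Credit[,-1],nvmax=11)   # [,-1] drops the assumed ID column
> reg.summary=summary(regfit.full)
> reg.summary$rss   # RSS of the best model of each size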

FIGURE 6.1. For each possible model containing a subset of the ten predictors in the Credit data set, the RSS and R^2 are displayed. The red frontier tracks the best model for a given number of predictors, according to RSS and R^2. Though the data set contains only ten predictors, the x-axis ranges from 1 to 11, since one of the variables is categorical and takes on three values, leading to the creation of two dummy variables.

These approaches are discussed in Section 6.1.3.

An application of best subset selection is shown in Figure 6.1. Each plotted point corresponds to a least squares regression model fit using a different subset of the 11 predictors in the Credit data set, discussed in Chapter 3. Here the variable ethnicity is a three-level qualitative variable, and so is represented by two dummy variables, which are selected separately in this case. We have plotted the RSS and R^2 statistics for each model, as a function of the number of variables. The red curves connect the best models for each model size, according to RSS or R^2. The figure shows that, as expected, these quantities improve as the number of variables increases; however, from the three-variable model on, there is little improvement in RSS and R^2 as a result of including additional predictors.

Although we have presented best subset selection here for least squares regression, the same ideas apply to other types of models, such as logistic regression. In the case of logistic regression, instead of ordering models by RSS in Step 2 of Algorithm 6.1, we instead use the deviance, a measure that plays the role of RSS for a broader class of models. The deviance is negative two times the maximized log-likelihood; the smaller the deviance, the better the fit.

While best subset selection is a simple and conceptually appealing approach, it suffers from computational limitations. The number of possible models that must be considered grows rapidly as p increases. In general, there are 2^p models that involve subsets of p predictors. So if p = 10, then there are approximately 1,000 possible models to be considered, and if p = 20, then there are over one million possibilities!

Consequently, best subset selection becomes computationally infeasible for values of p greater than around 40, even with extremely fast modern computers. There are computational shortcuts (so-called branch-and-bound techniques) for eliminating some choices, but these have their limitations as p gets large. They also only work for least squares linear regression. We present computationally efficient alternatives to best subset selection next.

6.1.2 Stepwise Selection

For computational reasons, best subset selection cannot be applied with very large p. Best subset selection may also suffer from statistical problems when p is large. The larger the search space, the higher the chance of finding models that look good on the training data, even though they might not have any predictive power on future data. Thus an enormous search space can lead to overfitting and high variance of the coefficient estimates.

For both of these reasons, stepwise methods, which explore a far more restricted set of models, are attractive alternatives to best subset selection.

Forward Stepwise Selection

Forward stepwise selection is a computationally efficient alternative to best subset selection. While the best subset selection procedure considers all 2^p possible models containing subsets of the p predictors, forward stepwise considers a much smaller set of models. Forward stepwise selection begins with a model containing no predictors, and then adds predictors to the model, one-at-a-time, until all of the predictors are in the model. In particular, at each step the variable that gives the greatest additional improvement to the fit is added to the model. More formally, the forward stepwise selection procedure is given in Algorithm 6.2; a short R sketch follows the algorithm.

Algorithm 6.2  Forward stepwise selection

1. Let M_0 denote the null model, which contains no predictors.

2. For k = 0, ..., p − 1:
   (a) Consider all p − k models that augment the predictors in M_k with one additional predictor.
   (b) Choose the best among these p − k models, and call it M_{k+1}. Here best is defined as having smallest RSS or highest R^2.

3. Select a single best model from among M_0, ..., M_p using cross-validated prediction error, C_p (AIC), BIC, or adjusted R^2.
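As with best subset selection, the leaps package can be used here: regsubsets() accepts a method argument that carries out Algorithm 6.2. This is only a sketch, again assuming the Credit data from the ISLR package with its ID column in first position; the object name is ours.

> library(ISLR); library(leaps)
> regfit.fwd=regsubsets(Balance~.,data=Credit[,-1],nvmax=11,method="forward")
> summary(regfit.fwd)   # shows which variable enters at each step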

Unlike best subset selection, which involved fitting 2^p models, forward stepwise selection involves fitting one null model, along with p − k models in the kth iteration, for k = 0, ..., p − 1. This amounts to a total of 1 + Σ_{k=0}^{p−1} (p − k) = 1 + p(p + 1)/2 models. This is a substantial difference: when p = 20, best subset selection requires fitting 1,048,576 models, whereas forward stepwise selection requires fitting only 211 models.

In Step 2(b) of Algorithm 6.2, we must identify the best model from among those p − k that augment M_k with one additional predictor. We can do this by simply choosing the model with the lowest RSS or the highest R^2. However, in Step 3, we must identify the best model among a set of models with different numbers of variables. This is more challenging, and is discussed in Section 6.1.3.

Forward stepwise selection's computational advantage over best subset selection is clear. Though forward stepwise tends to do well in practice, it is not guaranteed to find the best possible model out of all 2^p models containing subsets of the p predictors. For instance, suppose that in a given data set with p = 3 predictors, the best possible one-variable model contains X_1, and the best possible two-variable model instead contains X_2 and X_3. Then forward stepwise selection will fail to select the best possible two-variable model, because M_1 will contain X_1, so M_2 must also contain X_1 together with one additional variable.

Table 6.1, which shows the first four selected models for best subset and forward stepwise selection on the Credit data set, illustrates this phenomenon. Both best subset selection and forward stepwise selection choose rating for the best one-variable model and then include income and student for the two- and three-variable models. However, best subset selection replaces rating by cards in the four-variable model, while forward stepwise selection must maintain rating in its four-variable model. In this example, Figure 6.1 indicates that there is not much difference between the three- and four-variable models in terms of RSS, so either of the four-variable models will likely be adequate.

Forward stepwise selection can be applied even in the high-dimensional setting where n < p; however, in this case, it is possible to construct submodels M_0, ..., M_{n−1} only, since each submodel is fit using least squares, which will not yield a unique solution if p ≥ n.

TABLE 6.1. The first four selected models for best subset selection and forward stepwise selection on the Credit data set. The first three models are identical but the fourth models differ.

# Variables   Best subset                        Forward stepwise
One           rating                             rating
Two           rating, income                     rating, income
Three         rating, income, student            rating, income, student
Four          cards, income, student, limit      rating, income, student, limit

Backward Stepwise Selection

Like forward stepwise selection, backward stepwise selection provides an efficient alternative to best subset selection. However, unlike forward stepwise selection, it begins with the full least squares model containing all p predictors, and then iteratively removes the least useful predictor, one-at-a-time. Details are given in Algorithm 6.3.

Algorithm 6.3  Backward stepwise selection

1. Let M_p denote the full model, which contains all p predictors.

2. For k = p, p − 1, ..., 1:
   (a) Consider all k models that contain all but one of the predictors in M_k, for a total of k − 1 predictors.
   (b) Choose the best among these k models, and call it M_{k−1}. Here best is defined as having smallest RSS or highest R^2.

3. Select a single best model from among M_0, ..., M_p using cross-validated prediction error, C_p (AIC), BIC, or adjusted R^2.

Like forward stepwise selection, the backward selection approach searches through only 1 + p(p + 1)/2 models, and so can be applied in settings where p is too large to apply best subset selection. Also like forward stepwise selection, backward stepwise selection is not guaranteed to yield the best model containing a subset of the p predictors.

Backward selection requires that the number of samples n is larger than the number of variables p (so that the full model can be fit). In contrast, forward stepwise can be used even when n < p. A sketch of backward stepwise selection in R is given below.
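The leaps package's regsubsets() function also implements backward stepwise selection through its method argument. This sketch continues the hypothetical regfit.full and regfit.fwd objects from the earlier sketches and is illustrative rather than a reproduction of Table 6.1.

> regfit.bwd=regsubsets(Balance~.,data=Credit[,-1],nvmax=11,method="backward")
> # compare the four-variable models chosen by each approach
> coef(regfit.full,4)   # best subset
> coef(regfit.fwd,4)    # forward stepwise
> coef(regfit.bwd,4)    # backward stepwise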

225 210 6. Linear Model Selection and Regularization Hybrid Approaches The best subset, forward stepwise, a nd backward stepwise selection ap- proaches generally give similar but not identical models. As another al- ternative, hybrid versions of forward and backward stepwise selection are available, in which variables are added to the model sequentially, in analogy to forward selection. However, after adding each new variable, the method may also remove any variables that no longer provide an improvement in the model fit. Such an approach attempts to more closely mimic best sub- set selection while retaining the computational advantages of forward and backward stepwise selection. 6.1.3 Choosing the Optimal Model Best subset selection, forward select ion, and backward selection result in p pre- the creation of a set of models, each of which contains a subset of the dictors. In order to implement these methods, we need a way to determine which of these models is . As we discussed in Section 6.1.1, the model best containing all of the predictors will always have the smallest RSS and the 2 , since these quantities are related to the training error. Instead, largest R we wish to choose a model with a low test error. As is evident here, and as we show in Chapter 2, the training error can be a poor estimate of the test 2 are not suitable for selecting the best model error. Therefore, RSS and R among a collection of models with d ifferent numbers of predictors. In order to select the best model with respect to test error, we need to estimate this test error. There are two common approaches: adjustment 1. We can indirectly estimate test error by making an to the training error to account for the bias due to overfitting. 2. We can directly estimate the test error, using either a validation set approach or a cross-validation approach, as discussed in Chapter 5. We consider both of these approaches below. 2 ,AIC,BIC,andAdjusted R C p We show in Chapter 2 that the training set MSE is generally an under- estimate of the test MSE. (Recall that MSE = RSS /n .) This is because when we fit a model to the training data using least squares, we specifi- ents such that the training RSS (but cally estimate the regression coeffici not the test RSS) is as small as possible. In particular, the training error will decrease as more variables are included in the model, but the test error 2 cannot be used R may not. Therefore, training set RSS and training set to select from among a set of models with different numbers of variables. However, a number of techniques for adjusting the training error for the model size are available. These approaches can be used to select among a set

FIGURE 6.2. C_p, BIC, and adjusted R^2 are shown for the best models of each size for the Credit data set (the lower frontier in Figure 6.1). C_p and BIC are estimates of test MSE. In the middle plot we see that the BIC estimate of test error shows an increase after four variables are selected. The other two plots are rather flat after four variables are included.

of models with different numbers of variables. We now consider four such approaches: C_p, Akaike information criterion (AIC), Bayesian information criterion (BIC), and adjusted R^2. Figure 6.2 displays C_p, BIC, and adjusted R^2 for the best model of each size produced by best subset selection on the Credit data set.

For a fitted least squares model containing d predictors, the C_p estimate of test MSE is computed using the equation

    C_p = (1/n) (RSS + 2 d σ̂^2),        (6.2)

where σ̂^2 is an estimate of the variance of the error ε associated with each response measurement in (6.1). Essentially, the C_p statistic adds a penalty of 2 d σ̂^2 to the training RSS in order to adjust for the fact that the training error tends to underestimate the test error. Clearly, the penalty increases as the number of predictors in the model increases; this is intended to adjust for the corresponding decrease in training RSS. Though it is beyond the scope of this book, one can show that if σ̂^2 is an unbiased estimate of σ^2 in (6.2), then C_p is an unbiased estimate of test MSE. As a consequence, the C_p statistic tends to take on a small value for models with a low test error, so when determining which of a set of models is best, we choose the model with the lowest C_p value. In Figure 6.2, C_p selects the six-variable model containing the predictors income, limit, rating, cards, age and student. (Mallow's C_p is sometimes defined as C'_p = RSS/σ̂^2 + 2d − n. This is equivalent to the definition given above in the sense that C_p = (1/n) σ̂^2 (C'_p + n), and so the model with smallest C_p also has smallest C'_p.)
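For models fit with regsubsets(), the summary object already reports these quantities, so the model minimizing C_p or BIC, or maximizing adjusted R^2, can be picked out directly. The sketch below continues the hypothetical regfit.full object from the earlier best subset sketch; cp, bic, and adjr2 are the component names returned by the leaps package.

> reg.summary=summary(regfit.full)
> which.min(reg.summary$cp)      # model size minimizing C_p
> which.min(reg.summary$bic)     # model size minimizing BIC
> which.max(reg.summary$adjr2)   # model size maximizing adjusted R^2
> coef(regfit.full,which.min(reg.summary$bic))   # coefficients of the BIC-selected model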

The AIC criterion is defined for a large class of models fit by maximum likelihood. In the case of the model (6.1) with Gaussian errors, maximum likelihood and least squares are the same thing. In this case AIC is given by

    AIC = (1/(n σ̂^2)) (RSS + 2 d σ̂^2),

where, for simplicity, we have omitted an additive constant. Hence for least squares models, C_p and AIC are proportional to each other, and so only C_p is displayed in Figure 6.2.

BIC is derived from a Bayesian point of view, but ends up looking similar to C_p (and AIC) as well. For the least squares model with d predictors, the BIC is, up to irrelevant constants, given by

    BIC = (1/n) (RSS + log(n) d σ̂^2).        (6.3)

Like C_p, the BIC will tend to take on a small value for a model with a low test error, and so generally we select the model that has the lowest BIC value. Notice that BIC replaces the 2 d σ̂^2 used by C_p with a log(n) d σ̂^2 term, where n is the number of observations. Since log n > 2 for any n > 7, the BIC statistic generally places a heavier penalty on models with many variables, and hence results in the selection of smaller models than C_p. In Figure 6.2, we see that this is indeed the case for the Credit data set; BIC chooses a model that contains only the four predictors income, limit, cards, and student. In this case the curves are very flat and so there does not appear to be much difference in accuracy between the four-variable and six-variable models.

The adjusted R^2 statistic is another popular approach for selecting among a set of models that contain different numbers of variables. Recall from Chapter 3 that the usual R^2 is defined as 1 − RSS/TSS, where TSS = Σ (y_i − ȳ)^2 is the total sum of squares for the response. Since RSS always decreases as more variables are added to the model, the R^2 always increases as more variables are added. For a least squares model with d variables, the adjusted R^2 statistic is calculated as

    Adjusted R^2 = 1 − ( RSS/(n − d − 1) ) / ( TSS/(n − 1) ).        (6.4)

Unlike C_p, AIC, and BIC, for which a small value indicates a model with a low test error, a large value of adjusted R^2 indicates a model with a small test error. Maximizing the adjusted R^2 is equivalent to minimizing RSS/(n − d − 1). While RSS always decreases as the number of variables in the model increases, RSS/(n − d − 1) may increase or decrease, due to the presence of d in the denominator.

The intuition behind the adjusted R^2 is that once all of the correct variables have been included in the model, adding additional noise variables

will lead to only a very small decrease in RSS. Since adding noise variables leads to an increase in d, such variables will lead to an increase in RSS/(n − d − 1), and consequently a decrease in the adjusted R^2. Therefore, in theory, the model with the largest adjusted R^2 will have only correct variables and no noise variables. Unlike the R^2 statistic, the adjusted R^2 statistic pays a price for the inclusion of unnecessary variables in the model. Figure 6.2 displays the adjusted R^2 for the Credit data set. Using this statistic results in the selection of a model that contains seven variables, adding gender to the model selected by C_p and AIC.

C_p, AIC, and BIC all have rigorous theoretical justifications that are beyond the scope of this book. These justifications rely on asymptotic arguments (scenarios where the sample size n is very large). Despite its popularity, and even though it is quite intuitive, the adjusted R^2 is not as well motivated in statistical theory as AIC, BIC, and C_p. All of these measures are simple to use and compute. Here we have presented the formulas for AIC, BIC, and C_p in the case of a linear model fit using least squares; however, these quantities can also be defined for more general types of models.

Validation and Cross-Validation

As an alternative to the approaches just discussed, we can directly estimate the test error using the validation set and cross-validation methods discussed in Chapter 5. We can compute the validation set error or the cross-validation error for each model under consideration, and then select the model for which the resulting estimated test error is smallest. This procedure has an advantage relative to AIC, BIC, C_p, and adjusted R^2, in that it provides a direct estimate of the test error, and makes fewer assumptions about the true underlying model. It can also be used in a wider range of model selection tasks, even in cases where it is hard to pinpoint the model degrees of freedom (e.g. the number of predictors in the model) or hard to estimate the error variance σ^2.

In the past, performing cross-validation was computationally prohibitive for many problems with large p and/or large n, and so AIC, BIC, C_p, and adjusted R^2 were more attractive approaches for choosing among a set of models. However, nowadays with fast computers, the computations required to perform cross-validation are hardly ever an issue. Thus, cross-validation is a very attractive approach for selecting from among a number of models under consideration.

Figure 6.3 displays, as a function of d, the BIC, validation set errors, and cross-validation errors on the Credit data, for the best d-variable model. The validation errors were calculated by randomly selecting three-quarters of the observations as the training set, and the remainder as the validation set. The cross-validation errors were computed using k = 10 folds. In this case, the validation and cross-validation methods both result in a

FIGURE 6.3. For the Credit data set, three quantities are displayed for the best model containing d predictors, for d ranging from 1 to 11. The overall best model, based on each of these quantities, is shown as a blue cross. Left: Square root of BIC. Center: Validation set errors. Right: Cross-validation errors.

six-variable model. However, all three approaches suggest that the four-, five-, and six-variable models are roughly equivalent in terms of their test errors.

In fact, the estimated test error curves displayed in the center and right-hand panels of Figure 6.3 are quite flat. While a three-variable model clearly has lower estimated test error than a two-variable model, the estimated test errors of the 3- to 11-variable models are quite similar. Furthermore, if we repeated the validation set approach using a different split of the data into a training set and a validation set, or if we repeated cross-validation using a different set of cross-validation folds, then the precise model with the lowest estimated test error would surely change. In this setting, we can select a model using the one-standard-error rule. We first calculate the standard error of the estimated test MSE for each model size, and then select the smallest model for which the estimated test error is within one standard error of the lowest point on the curve. The rationale here is that if a set of models appear to be more or less equally good, then we might as well choose the simplest model, that is, the model with the smallest number of predictors. In this case, applying the one-standard-error rule to the validation set or cross-validation approach leads to selection of the three-variable model.

6.2 Shrinkage Methods

The subset selection methods described in Section 6.1 involve using least squares to fit a linear model that contains a subset of the predictors. As an alternative, we can fit a model containing all p predictors using a technique that constrains or regularizes the coefficient estimates, or equivalently, that shrinks the coefficient estimates towards zero.
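The one-standard-error rule is easy to apply once per-size CV errors and their standard errors are available. The sketch below assumes two hypothetical vectors, cv.err and cv.se, containing the estimated test MSE and its standard error for models of size 1 to 11; both names are ours.

> best=which.min(cv.err)                        # size with the lowest estimated test error
> threshold=cv.err[best]+cv.se[best]            # lowest point plus one standard error
> one.se.choice=min(which(cv.err<=threshold))   # smallest model within one SE of the minimum
> one.se.choice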
obvious why such a constraint should improve the fit, but it turns out that shrinking the coefficient estimates can significantly reduce their variance. The two best-known techniques for shrinking the regression coefficients towards zero are ridge regression and the lasso.

6.2.1 Ridge Regression

Recall from Chapter 3 that the least squares fitting procedure estimates β0, β1, ..., βp using the values that minimize

RSS = \sum_{i=1}^n \left( y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij} \right)^2.

Ridge regression is very similar to least squares, except that the coefficients are estimated by minimizing a slightly different quantity. In particular, the ridge regression coefficient estimates β̂^R are the values that minimize

\sum_{i=1}^n \left( y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^p \beta_j^2 = \mathrm{RSS} + \lambda \sum_{j=1}^p \beta_j^2,     (6.5)

where λ ≥ 0 is a tuning parameter, to be determined separately. Equation 6.5 trades off two different criteria. As with least squares, ridge regression seeks coefficient estimates that fit the data well, by making the RSS small. However, the second term, λ Σj βj², called a shrinkage penalty, is small when β1, ..., βp are close to zero, and so it has the effect of shrinking the estimates of βj towards zero. The tuning parameter λ serves to control the relative impact of these two terms on the regression coefficient estimates. When λ = 0, the penalty term has no effect, and ridge regression will produce the least squares estimates. However, as λ → ∞, the impact of the shrinkage penalty grows, and the ridge regression coefficient estimates will approach zero. Unlike least squares, which generates only one set of coefficient estimates, ridge regression will produce a different set of coefficient estimates, β̂^R_λ, for each value of λ. Selecting a good value for λ is critical; we defer this discussion to Section 6.2.3, where we use cross-validation.

Note that in (6.5), the shrinkage penalty is applied to β1, ..., βp, but not to the intercept β0. We want to shrink the estimated association of each variable with the response; however, we do not want to shrink the intercept, which is simply a measure of the mean value of the response when x_{i1} = x_{i2} = ... = x_{ip} = 0. If we assume that the variables, that is, the columns of the data matrix X, have been centered to have mean zero before ridge regression is performed, then the estimated intercept will take the form β̂0 = ȳ = (1/n) Σ_{i=1}^n y_i.
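A minimal sketch of fitting ridge regression in R follows, assuming the glmnet package is installed; the simulated data, variable names, and grid of λ values are illustrative and are not taken from the text.

library(glmnet)

set.seed(1)
n <- 100; p <- 10
x <- matrix(rnorm(n * p), n, p)            # simulated predictors
y <- 3 * x[, 1] - 2 * x[, 2] + rnorm(n)    # simulated response

grid <- 10^seq(4, -2, length = 100)        # grid of candidate lambda values
# alpha = 0 requests the ridge penalty; by default glmnet also standardizes
# the predictors, in the spirit of (6.6).
ridge.fit <- glmnet(x, y, alpha = 0, lambda = grid)

coef(ridge.fit, s = 1e4)    # large lambda: estimates shrunken close to zero
coef(ridge.fit, s = 0.01)   # small lambda: close to the least squares fit

As the text emphasizes, the ridge estimates shrink towards zero as λ grows but are never set exactly to zero.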

FIGURE 6.4. The standardized ridge regression coefficients are displayed for the Credit data set, as a function of λ and ‖β̂^R_λ‖₂ / ‖β̂‖₂.

An Application to the Credit Data

In Figure 6.4, the ridge regression coefficient estimates for the Credit data set are displayed. In the left-hand panel, each curve corresponds to the ridge regression coefficient estimate for one of the ten variables, plotted as a function of λ. For example, the black solid line represents the ridge regression estimate for the income coefficient, as λ is varied. At the extreme left-hand side of the plot, λ is essentially zero, and so the corresponding ridge coefficient estimates are the same as the usual least squares estimates. But as λ increases, the ridge coefficient estimates shrink towards zero. When λ is extremely large, then all of the ridge coefficient estimates are basically zero; this corresponds to the null model that contains no predictors. In this plot, the income, limit, rating, and student variables are displayed in distinct colors, since these variables tend to have by far the largest coefficient estimates. While the ridge coefficient estimates tend to decrease in aggregate as λ increases, individual coefficients, such as rating and income, may occasionally increase as λ increases.

The right-hand panel of Figure 6.4 displays the same ridge coefficient estimates as the left-hand panel, but instead of displaying λ on the x-axis, we now display ‖β̂^R_λ‖₂ / ‖β̂‖₂, where β̂ denotes the vector of least squares coefficient estimates. The notation ‖β‖₂ denotes the ℓ₂ norm (pronounced "ell 2") of a vector, and is defined as ‖β‖₂ = \sqrt{\sum_{j=1}^p \beta_j^2}. It measures the distance of β from zero. As λ increases, the ℓ₂ norm of β̂^R_λ will always decrease, and so will ‖β̂^R_λ‖₂ / ‖β̂‖₂. The latter quantity ranges from 1 (when λ = 0, in which case the ridge regression coefficient estimate is the same as the least squares estimate, and so their ℓ₂ norms are the same) to 0 (when λ = ∞, in which case the ridge regression coefficient estimate is a vector of zeros, with ℓ₂ norm equal to zero). Therefore, we can think of the x-axis in the right-hand panel of Figure 6.4 as the amount that the ridge
regression coefficient estimates have been shrunken towards zero; a small value indicates that they have been shrunken very close to zero.

The standard least squares coefficient estimates discussed in Chapter 3 are scale invariant: multiplying Xj by a constant c simply leads to a scaling of the least squares coefficient estimates by a factor of 1/c. In other words, regardless of how the jth predictor is scaled, Xj β̂j will remain the same. In contrast, the ridge regression coefficient estimates can change substantially when multiplying a given predictor by a constant. For instance, consider the income variable, which is measured in dollars. One could reasonably have measured income in thousands of dollars, which would result in a reduction in the observed values of income by a factor of 1,000. Now due to the sum of squared coefficients term in the ridge regression formulation (6.5), such a change in scale will not simply cause the ridge regression coefficient estimate for income to change by a factor of 1,000. In other words, Xj β̂^R_{j,λ} will depend not only on the value of λ, but also on the scaling of the jth predictor. In fact, the value of Xj β̂^R_{j,λ} may even depend on the scaling of the other predictors! Therefore, it is best to apply ridge regression after standardizing the predictors, using the formula

\tilde{x}_{ij} = \frac{x_{ij}}{\sqrt{\frac{1}{n}\sum_{i=1}^n (x_{ij} - \bar{x}_j)^2}},     (6.6)

so that they are all on the same scale. In (6.6), the denominator is the estimated standard deviation of the jth predictor. Consequently, all of the standardized predictors will have a standard deviation of one. As a result the final fit will not depend on the scale on which the predictors are measured. In Figure 6.4, the y-axis displays the standardized ridge regression coefficient estimates, that is, the coefficient estimates that result from performing ridge regression using standardized predictors.

Why Does Ridge Regression Improve Over Least Squares?

Ridge regression's advantage over least squares is rooted in the bias-variance trade-off. As λ increases, the flexibility of the ridge regression fit decreases, leading to decreased variance but increased bias. This is illustrated in the left-hand panel of Figure 6.5, using a simulated data set containing p = 45 predictors and n = 50 observations. The green curve in the left-hand panel of Figure 6.5 displays the variance of the ridge regression predictions as a function of λ. At the least squares coefficient estimates, which correspond to ridge regression with λ = 0, the variance is high but there is no bias. But as λ increases, the shrinkage of the ridge coefficient estimates leads to a substantial reduction in the variance of the predictions, at the expense of a slight increase in bias. Recall that the test mean squared error (MSE), plotted in purple, is a function of the variance plus the squared bias. For values
of λ up to about 10, the variance decreases rapidly, with very little increase in bias, plotted in black. Consequently, the MSE drops considerably as λ increases from 0 to 10. Beyond this point, the decrease in variance due to increasing λ slows, and the shrinkage on the coefficients causes them to be significantly underestimated, resulting in a large increase in the bias. The minimum MSE is achieved at approximately λ = 30. Interestingly, because of its high variance, the MSE associated with the least squares fit, when λ = 0, is almost as high as that of the null model for which all coefficient estimates are zero, when λ = ∞. However, for an intermediate value of λ, the MSE is considerably lower.

FIGURE 6.5. Squared bias (black), variance (green), and test mean squared error (purple) for the ridge regression predictions on a simulated data set, as a function of λ and ‖β̂^R_λ‖₂ / ‖β̂‖₂. The horizontal dashed lines indicate the minimum possible MSE. The purple crosses indicate the ridge regression models for which the MSE is smallest.

The right-hand panel of Figure 6.5 displays the same curves as the left-hand panel, this time plotted against the ℓ₂ norm of the ridge regression coefficient estimates divided by the ℓ₂ norm of the least squares estimates. Now as we move from left to right, the fits become more flexible, and so the bias decreases and the variance increases.

In general, in situations where the relationship between the response and the predictors is close to linear, the least squares estimates will have low bias but may have high variance. This means that a small change in the training data can cause a large change in the least squares coefficient estimates. In particular, when the number of variables p is almost as large as the number of observations n, as in the example in Figure 6.5, the least squares estimates will be extremely variable. And if p > n, then the least squares estimates do not even have a unique solution, whereas ridge regression can still perform well by trading off a small increase in bias for a large decrease in variance. Hence, ridge regression works best in situations where the least squares estimates have high variance.

Ridge regression also has substantial computational advantages over best subset selection, which requires searching through 2^p models. As we
discussed previously, even for moderate values of p, such a search can be computationally infeasible. In contrast, for any fixed value of λ, ridge regression only fits a single model, and the model-fitting procedure can be performed quite quickly. In fact, one can show that the computations required to solve (6.5), simultaneously for all values of λ, are almost identical to those for fitting a model using least squares.

6.2.2 The Lasso

Ridge regression does have one obvious disadvantage. Unlike best subset, forward stepwise, and backward stepwise selection, which will generally select models that involve just a subset of the variables, ridge regression will include all p predictors in the final model. The penalty λ Σ βj² in (6.5) will shrink all of the coefficients towards zero, but it will not set any of them exactly to zero (unless λ = ∞). This may not be a problem for prediction accuracy, but it can create a challenge in model interpretation in settings in which the number of variables p is quite large. For example, in the Credit data set, it appears that the most important variables are income, limit, rating, and student. So we might wish to build a model including just these predictors. However, ridge regression will always generate a model involving all ten predictors. Increasing the value of λ will tend to reduce the magnitudes of the coefficients, but will not result in exclusion of any of the variables.

The lasso is a relatively recent alternative to ridge regression that overcomes this disadvantage. The lasso coefficients, β̂^L_λ, minimize the quantity

\sum_{i=1}^n \left( y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^p |\beta_j| = \mathrm{RSS} + \lambda \sum_{j=1}^p |\beta_j|.     (6.7)

Comparing (6.7) to (6.5), we see that the lasso and ridge regression have similar formulations. The only difference is that the βj² term in the ridge regression penalty (6.5) has been replaced by |βj| in the lasso penalty (6.7). In statistical parlance, the lasso uses an ℓ₁ (pronounced "ell 1") penalty instead of an ℓ₂ penalty. The ℓ₁ norm of a coefficient vector β is given by ‖β‖₁ = Σ |βj|.

As with ridge regression, the lasso shrinks the coefficient estimates towards zero. However, in the case of the lasso, the ℓ₁ penalty has the effect of forcing some of the coefficient estimates to be exactly equal to zero when the tuning parameter λ is sufficiently large. Hence, much like best subset selection, the lasso performs variable selection. As a result, models generated from the lasso are generally much easier to interpret than those produced by ridge regression. We say that the lasso yields sparse models, that is, models that involve only a subset of the variables. As in ridge regression, selecting a good value of λ for the lasso is critical; we defer this discussion to Section 6.2.3, where we use cross-validation.
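A matching sketch for the lasso, again assuming the glmnet package; the simulated x and y reuse the illustrative setup from the ridge sketch above and are not data from the text.

library(glmnet)

set.seed(1)
n <- 100; p <- 10
x <- matrix(rnorm(n * p), n, p)
y <- 3 * x[, 1] - 2 * x[, 2] + rnorm(n)

# alpha = 1 requests the lasso (ell_1) penalty.
lasso.fit <- glmnet(x, y, alpha = 1)

# For a moderately large lambda, many of the estimates are exactly zero,
# illustrating the variable selection property of the ell_1 penalty.
coef(lasso.fit, s = 0.5)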

FIGURE 6.6. The standardized lasso coefficients on the Credit data set are shown as a function of λ and ‖β̂^L_λ‖₁ / ‖β̂‖₁.

As an example, consider the coefficient plots in Figure 6.6, which are generated from applying the lasso to the Credit data set. When λ = 0, then the lasso simply gives the least squares fit, and when λ becomes sufficiently large, the lasso gives the null model in which all coefficient estimates equal zero. However, in between these two extremes, the ridge regression and lasso models are quite different from each other. Moving from left to right in the right-hand panel of Figure 6.6, we observe that at first the lasso results in a model that contains only the rating predictor. Then student and limit enter the model almost simultaneously, shortly followed by income. Eventually, the remaining variables enter the model. Hence, depending on the value of λ, the lasso can produce a model involving any number of variables. In contrast, ridge regression will always include all of the variables in the model, although the magnitude of the coefficient estimates will depend on λ.

Another Formulation for Ridge Regression and the Lasso

One can show that the lasso and ridge regression coefficient estimates solve the problems

\underset{\beta}{\mathrm{minimize}} \left\{ \sum_{i=1}^n \left( y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij} \right)^2 \right\} \quad \text{subject to} \quad \sum_{j=1}^p |\beta_j| \le s     (6.8)

and

\underset{\beta}{\mathrm{minimize}} \left\{ \sum_{i=1}^n \left( y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij} \right)^2 \right\} \quad \text{subject to} \quad \sum_{j=1}^p \beta_j^2 \le s,     (6.9)
respectively. In other words, for every value of λ, there is some s such that the Equations (6.7) and (6.8) will give the same lasso coefficient estimates. Similarly, for every value of λ there is a corresponding s such that Equations (6.5) and (6.9) will give the same ridge regression coefficient estimates. When p = 2, then (6.8) indicates that the lasso coefficient estimates have the smallest RSS out of all points that lie within the diamond defined by |β1| + |β2| ≤ s. Similarly, the ridge regression estimates have the smallest RSS out of all points that lie within the circle defined by β1² + β2² ≤ s.

We can think of (6.8) as follows. When we perform the lasso we are trying to find the set of coefficient estimates that lead to the smallest RSS, subject to the constraint that there is a budget s for how large Σ_{j=1}^p |βj| can be. When s is extremely large, then this budget is not very restrictive, and so the coefficient estimates can be large. In fact, if s is large enough that the least squares solution falls within the budget, then (6.8) will simply yield the least squares solution. In contrast, if s is small, then Σ_{j=1}^p |βj| must be small in order to avoid violating the budget. Similarly, (6.9) indicates that when we perform ridge regression, we seek a set of coefficient estimates such that the RSS is as small as possible, subject to the requirement that Σ_{j=1}^p βj² not exceed the budget s.

The formulations (6.8) and (6.9) reveal a close connection between the lasso, ridge regression, and best subset selection. Consider the problem

\underset{\beta}{\mathrm{minimize}} \left\{ \sum_{i=1}^n \left( y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij} \right)^2 \right\} \quad \text{subject to} \quad \sum_{j=1}^p I(\beta_j \neq 0) \le s.     (6.10)

Here I(βj ≠ 0) is an indicator variable: it takes on a value of 1 if βj ≠ 0, and equals zero otherwise. Then (6.10) amounts to finding a set of coefficient estimates such that RSS is as small as possible, subject to the constraint that no more than s coefficients can be nonzero. The problem (6.10) is equivalent to best subset selection. Unfortunately, solving (6.10) is computationally infeasible when p is large, since it requires considering all \binom{p}{s} models containing s predictors. Therefore, we can interpret ridge regression and the lasso as computationally feasible alternatives to best subset selection that replace the intractable form of the budget in (6.10) with forms that are much easier to solve. Of course, the lasso is much more closely related to best subset selection, since only the lasso performs feature selection for s sufficiently small in (6.8).

The Variable Selection Property of the Lasso

Why is it that the lasso, unlike ridge regression, results in coefficient estimates that are exactly equal to zero? The formulations (6.8) and (6.9) can be used to shed light on the issue. Figure 6.7 illustrates the situation. The least squares solution is marked as β̂, while the blue diamond and
circle represent the lasso and ridge regression constraints in (6.8) and (6.9), respectively. If s is sufficiently large, then the constraint regions will contain β̂, and so the ridge regression and lasso estimates will be the same as the least squares estimates. (Such a large value of s corresponds to λ = 0 in (6.5) and (6.7).) However, in Figure 6.7 the least squares estimates lie outside of the diamond and the circle, and so the least squares estimates are not the same as the lasso and ridge regression estimates.

FIGURE 6.7. Contours of the error and constraint functions for the lasso (left) and ridge regression (right). The solid blue areas are the constraint regions, |β1| + |β2| ≤ s and β1² + β2² ≤ s, while the red ellipses are the contours of the RSS.

The ellipses that are centered around β̂ represent regions of constant RSS. In other words, all of the points on a given ellipse share a common value of the RSS. As the ellipses expand away from the least squares coefficient estimates, the RSS increases. Equations (6.8) and (6.9) indicate that the lasso and ridge regression coefficient estimates are given by the first point at which an ellipse contacts the constraint region. Since ridge regression has a circular constraint with no sharp points, this intersection will not generally occur on an axis, and so the ridge regression coefficient estimates will be exclusively non-zero. However, the lasso constraint has corners at each of the axes, and so the ellipse will often intersect the constraint region at an axis. When this occurs, one of the coefficients will equal zero. In higher dimensions, many of the coefficient estimates may equal zero simultaneously. In Figure 6.7, the intersection occurs at β1 = 0, and so the resulting model will only include β2.

In Figure 6.7, we considered the simple case of p = 2. When p = 3, then the constraint region for ridge regression becomes a sphere, and the constraint region for the lasso becomes a polyhedron. When p > 3, the
constraint for ridge regression becomes a hypersphere, and the constraint for the lasso becomes a polytope. However, the key ideas depicted in Figure 6.7 still hold. In particular, the lasso leads to feature selection when p > 2 due to the sharp corners of the polyhedron or polytope.

FIGURE 6.8. Left: Plots of squared bias (black), variance (green), and test MSE (purple) for the lasso on a simulated data set. Right: Comparison of squared bias, variance, and test MSE between lasso (solid) and ridge (dashed). Both are plotted against their R² on the training data, as a common form of indexing. The crosses in both plots indicate the lasso model for which the MSE is smallest.

Comparing the Lasso and Ridge Regression

It is clear that the lasso has a major advantage over ridge regression, in that it produces simpler and more interpretable models that involve only a subset of the predictors. However, which method leads to better prediction accuracy? Figure 6.8 displays the variance, squared bias, and test MSE of the lasso applied to the same simulated data as in Figure 6.5. Clearly the lasso leads to qualitatively similar behavior to ridge regression, in that as λ increases, the variance decreases and the bias increases. In the right-hand panel of Figure 6.8, the dotted lines represent the ridge regression fits. Here we plot both against their R² on the training data. This is another useful way to index models, and can be used to compare models with different types of regularization, as is the case here. In this example, the lasso and ridge regression result in almost identical biases. However, the variance of ridge regression is slightly lower than the variance of the lasso. Consequently, the minimum MSE of ridge regression is slightly smaller than that of the lasso.

However, the data in Figure 6.8 were generated in such a way that all 45 predictors were related to the response, that is, none of the true coefficients β1, ..., β45 equaled zero. The lasso implicitly assumes that a number of the coefficients truly equal zero. Consequently, it is not surprising that ridge regression outperforms the lasso in terms of prediction error in this setting. Figure 6.9 illustrates a similar situation, except that now the response is a
function of only 2 out of 45 predictors. Now the lasso tends to outperform ridge regression in terms of bias, variance, and MSE.

FIGURE 6.9. Left: Plots of squared bias (black), variance (green), and test MSE (purple) for the lasso. The simulated data is similar to that in Figure 6.8, except that now only two predictors are related to the response. Right: Comparison of squared bias, variance, and test MSE between lasso (solid) and ridge (dashed). Both are plotted against their R² on the training data, as a common form of indexing. The crosses in both plots indicate the lasso model for which the MSE is smallest.

These two examples illustrate that neither ridge regression nor the lasso will universally dominate the other. In general, one might expect the lasso to perform better in a setting where a relatively small number of predictors have substantial coefficients, and the remaining predictors have coefficients that are very small or that equal zero. Ridge regression will perform better when the response is a function of many predictors, all with coefficients of roughly equal size. However, the number of predictors that is related to the response is never known a priori for real data sets. A technique such as cross-validation can be used in order to determine which approach is better on a particular data set.

As with ridge regression, when the least squares estimates have excessively high variance, the lasso solution can yield a reduction in variance at the expense of a small increase in bias, and consequently can generate more accurate predictions. Unlike ridge regression, the lasso performs variable selection, and hence results in models that are easier to interpret.

There are very efficient algorithms for fitting both ridge and lasso models; in both cases the entire coefficient paths can be computed with about the same amount of work as a single least squares fit. We will explore this further in the lab at the end of this chapter.
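As noted above, cross-validation can be used to decide between ridge regression and the lasso on a particular data set. The sketch below shows one way such a comparison might be carried out in R, assuming the glmnet package; the simulated data, in which only two of 45 predictors are related to the response, are illustrative and are not the data sets behind Figures 6.8 and 6.9.

library(glmnet)

set.seed(1)
n <- 100; p <- 45
x <- matrix(rnorm(n * p), n, p)
y <- 2 * x[, 1] - 3 * x[, 2] + rnorm(n)   # sparse truth: 2 of 45 predictors matter

# Ten-fold cross-validation for ridge (alpha = 0) and for the lasso (alpha = 1).
cv.ridge <- cv.glmnet(x, y, alpha = 0)
cv.lasso <- cv.glmnet(x, y, alpha = 1)

# Compare the smallest cross-validated MSE attained by each method.
min(cv.ridge$cvm)
min(cv.lasso$cvm)

In a sparse setting like this one the lasso would typically attain the lower cross-validated error; with many moderate-sized true coefficients the comparison would often go the other way.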

A Simple Special Case for Ridge Regression and the Lasso

In order to obtain a better intuition about the behavior of ridge regression and the lasso, consider a simple special case with n = p, and X a diagonal matrix with 1's on the diagonal and 0's in all off-diagonal elements. To simplify the problem further, assume also that we are performing regression without an intercept. With these assumptions, the usual least squares problem simplifies to finding β1, ..., βp that minimize

\sum_{j=1}^p (y_j - \beta_j)^2.     (6.11)

In this case, the least squares solution is given by β̂j = yj. And in this setting, ridge regression amounts to finding β1, ..., βp such that

\sum_{j=1}^p (y_j - \beta_j)^2 + \lambda \sum_{j=1}^p \beta_j^2     (6.12)

is minimized, and the lasso amounts to finding the coefficients such that

\sum_{j=1}^p (y_j - \beta_j)^2 + \lambda \sum_{j=1}^p |\beta_j|     (6.13)

is minimized. One can show that in this setting, the ridge regression estimates take the form

\hat{\beta}_j^R = y_j / (1 + \lambda),     (6.14)

and the lasso estimates take the form

\hat{\beta}_j^L = \begin{cases} y_j - \lambda/2 & \text{if } y_j > \lambda/2; \\ y_j + \lambda/2 & \text{if } y_j < -\lambda/2; \\ 0 & \text{if } |y_j| \le \lambda/2. \end{cases}     (6.15)

Figure 6.10 displays the situation. We can see that ridge regression and the lasso perform two very different types of shrinkage. In ridge regression, each least squares coefficient estimate is shrunken by the same proportion. In contrast, the lasso shrinks each least squares coefficient towards zero by a constant amount, λ/2; the least squares coefficients that are less than λ/2 in absolute value are shrunken entirely to zero. The type of shrinkage performed by the lasso in this simple setting (6.15) is known as soft-thresholding. The fact that some lasso coefficients are shrunken entirely to zero explains why the lasso performs feature selection.

In the case of a more general data matrix X, the story is a little more complicated than what is depicted in Figure 6.10, but the main ideas still hold approximately: ridge regression more or less shrinks every dimension of the data by the same proportion, whereas the lasso more or less shrinks all coefficients toward zero by a similar amount, and sufficiently small coefficients are shrunken all the way to zero.
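The two shrinkage rules (6.14) and (6.15) are easy to write directly in base R; the functions and the example values of y and λ below are illustrative, coded straight from the formulas rather than taken from any package.

# Ridge shrinkage in the special case (6.14): every estimate is shrunk
# by the same proportion 1 / (1 + lambda).
ridge.shrink <- function(y, lambda) y / (1 + lambda)

# Lasso soft-thresholding (6.15): shrink by the constant amount lambda / 2,
# and set estimates with |y| <= lambda / 2 exactly to zero.
soft.threshold <- function(y, lambda) sign(y) * pmax(abs(y) - lambda / 2, 0)

y <- c(-2, -0.3, 0.1, 0.8, 3)    # least squares estimates in this setting
ridge.shrink(y, lambda = 1)      # all five estimates shrunk proportionally
soft.threshold(y, lambda = 1)    # the entries with |y| <= 0.5 become exactly 0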

FIGURE 6.10. The ridge regression and lasso coefficient estimates for a simple setting with n = p and X a diagonal matrix with 1's on the diagonal. Left: The ridge regression coefficient estimates are shrunken proportionally towards zero, relative to the least squares estimates. Right: The lasso coefficient estimates are soft-thresholded towards zero.

Bayesian Interpretation for Ridge Regression and the Lasso

We now show that one can view ridge regression and the lasso through a Bayesian lens. A Bayesian viewpoint for regression assumes that the coefficient vector β has some prior distribution, say p(β), where β = (β0, β1, ..., βp)ᵀ. The likelihood of the data can be written as f(Y | X, β), where X = (X1, ..., Xp). Multiplying the prior distribution by the likelihood gives us (up to a proportionality constant) the posterior distribution, which takes the form

p(\beta \mid X, Y) \propto f(Y \mid X, \beta)\, p(\beta \mid X) = f(Y \mid X, \beta)\, p(\beta),

where the proportionality above follows from Bayes' theorem, and the equality above follows from the assumption that X is fixed.

We assume the usual linear model,

Y = \beta_0 + X_1 \beta_1 + \dots + X_p \beta_p + \epsilon,

and suppose that the errors are independent and drawn from a normal distribution. Furthermore, assume that p(β) = \prod_{j=1}^p g(βj), for some density function g. It turns out that ridge regression and the lasso follow naturally from two special cases of g:

• If g is a Gaussian distribution with mean zero and standard deviation a function of λ, then it follows that the posterior mode for β, that is, the most likely value for β given the data, is given by the ridge regression solution. (In fact, the ridge regression solution is also the posterior mean.)

• If g is a double-exponential (Laplace) distribution with mean zero and scale parameter a function of λ, then it follows that the posterior mode for β is the lasso solution. (However, the lasso solution is not the posterior mean, and in fact, the posterior mean does not yield a sparse coefficient vector.)

FIGURE 6.11. Left: Ridge regression is the posterior mode for β under a Gaussian prior. Right: The lasso is the posterior mode for β under a double-exponential prior.

The Gaussian and double-exponential priors are displayed in Figure 6.11. Therefore, from a Bayesian viewpoint, ridge regression and the lasso follow directly from assuming the usual linear model with normal errors, together with a simple prior distribution for β. Notice that the lasso prior is steeply peaked at zero, while the Gaussian is flatter and fatter at zero. Hence, the lasso expects a priori that many of the coefficients are (exactly) zero, while ridge assumes the coefficients are randomly distributed about zero.

6.2.3 Selecting the Tuning Parameter

Just as the subset selection approaches considered in Section 6.1 require a method to determine which of the models under consideration is best, implementing ridge regression and the lasso requires a method for selecting a value for the tuning parameter λ in (6.5) and (6.7), or equivalently, the value of the constraint s in (6.9) and (6.8). Cross-validation provides a simple way to tackle this problem. We choose a grid of λ values, and compute the cross-validation error for each value of λ, as described in Chapter 5. We then select the tuning parameter value for which the cross-validation error is smallest. Finally, the model is re-fit using all of the available observations and the selected value of the tuning parameter.

Figure 6.12 displays the choice of λ that results from performing leave-one-out cross-validation on the ridge regression fits from the Credit data set. The dashed vertical lines indicate the selected value of λ. In this case the value is relatively small, indicating that the optimal fit only involves a
small amount of shrinkage relative to the least squares solution. In addition, the dip is not very pronounced, so there is rather a wide range of values that would give very similar error. In a case like this we might simply use the least squares solution.

FIGURE 6.12. Left: Cross-validation errors that result from applying ridge regression to the Credit data set with various values of λ. Right: The coefficient estimates as a function of λ. The vertical dashed lines indicate the value of λ selected by cross-validation.

Figure 6.13 provides an illustration of ten-fold cross-validation applied to the lasso fits on the sparse simulated data from Figure 6.9. The left-hand panel of Figure 6.13 displays the cross-validation error, while the right-hand panel displays the coefficient estimates. The vertical dashed lines indicate the point at which the cross-validation error is smallest. The two colored lines in the right-hand panel of Figure 6.13 represent the two predictors that are related to the response, while the grey lines represent the unrelated predictors; these are often referred to as signal and noise variables, respectively. Not only has the lasso correctly given much larger coefficient estimates to the two signal predictors, but also the minimum cross-validation error corresponds to a set of coefficient estimates for which only the signal variables are non-zero. Hence cross-validation together with the lasso has correctly identified the two signal variables in the model, even though this is a challenging setting, with p = 45 variables and only n = 50 observations. In contrast, the least squares solution, displayed on the far right of the right-hand panel of Figure 6.13, assigns a large coefficient estimate to only one of the two signal variables.
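A minimal sketch of this tuning procedure in R, assuming the glmnet package; the sparse simulated data (x, y) below are illustrative and are not the data behind Figures 6.12 and 6.13.

library(glmnet)

set.seed(1)
n <- 50; p <- 45
x <- matrix(rnorm(n * p), n, p)
y <- 2 * x[, 1] - 3 * x[, 2] + rnorm(n)

# Ten-fold cross-validation over an automatically chosen grid of lambda values.
cv.out <- cv.glmnet(x, y, alpha = 1)
plot(cv.out)                        # CV error as a function of log(lambda)

best.lambda <- cv.out$lambda.min    # lambda with the smallest CV error
cv.out$lambda.1se                   # largest lambda within one standard error

# Re-fit on all observations and inspect which coefficients are non-zero
# at the selected value of the tuning parameter.
lasso.fit <- glmnet(x, y, alpha = 1)
coef(lasso.fit, s = best.lambda)

The lambda.1se value returned by cv.glmnet corresponds to the one-standard-error rule described earlier in this chapter and typically yields an even sparser model.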

FIGURE 6.13. Left: Ten-fold cross-validation MSE for the lasso, applied to the sparse simulated data set from Figure 6.9. Right: The corresponding lasso coefficient estimates are displayed. The vertical dashed lines indicate the lasso fit for which the cross-validation error is smallest.

6.3 Dimension Reduction Methods

The methods that we have discussed so far in this chapter have controlled variance in two different ways, either by using a subset of the original variables, or by shrinking their coefficients toward zero. All of these methods are defined using the original predictors, X1, X2, ..., Xp. We now explore a class of approaches that transform the predictors and then fit a least squares model using the transformed variables. We will refer to these techniques as dimension reduction methods.

Let Z1, Z2, ..., ZM represent M < p linear combinations of our original p predictors. That is,

Z_m = \sum_{j=1}^p \phi_{jm} X_j     (6.16)

for some constants φ_{1m}, φ_{2m}, ..., φ_{pm}, m = 1, ..., M. We can then fit the linear regression model

y_i = \theta_0 + \sum_{m=1}^M \theta_m z_{im} + \epsilon_i, \qquad i = 1, \dots, n,     (6.17)

using least squares. Note that in (6.17), the regression coefficients are given by θ0, θ1, ..., θM. If the constants φ_{1m}, φ_{2m}, ..., φ_{pm} are chosen wisely, then such dimension reduction approaches can often outperform least squares regression. The term dimension reduction comes from the fact that this approach reduces the problem of estimating the p + 1 coefficients β0, β1, ..., βp to the simpler problem of estimating the M + 1 coefficients θ0, θ1, ..., θM; in other words, the dimension of the problem has been reduced from p + 1 to M + 1. Notice that from (6.16),

\sum_{m=1}^M \theta_m z_{im} = \sum_{m=1}^M \theta_m \sum_{j=1}^p \phi_{jm} x_{ij} = \sum_{j=1}^p \sum_{m=1}^M \theta_m \phi_{jm} x_{ij} = \sum_{j=1}^p \beta_j x_{ij},
where

\beta_j = \sum_{m=1}^M \theta_m \phi_{jm}.     (6.18)

Hence (6.17) can be thought of as a special case of the original linear regression model given by (6.1). Dimension reduction serves to constrain the estimated βj coefficients, since now they must take the form (6.18). This constraint on the form of the coefficients has the potential to bias the coefficient estimates. However, in situations where p is large relative to n, selecting a value of M ≪ p can significantly reduce the variance of the fitted coefficients. If M = p, and all the Zm are linearly independent, then (6.18) poses no constraints. In this case, no dimension reduction occurs, and so fitting (6.17) is equivalent to performing least squares on the original p predictors.

All dimension reduction methods work in two steps. First, the transformed predictors Z1, Z2, ..., ZM are obtained. Second, the model is fit using these M predictors. However, the choice of Z1, Z2, ..., ZM, or equivalently, the selection of the φ_{jm}'s, can be achieved in different ways. In this chapter, we will consider two approaches for this task: principal components and partial least squares.

FIGURE 6.14. The population size (pop) and ad spending (ad) for 100 different cities are shown as purple circles. The green solid line indicates the first principal component, and the blue dashed line indicates the second principal component.

6.3.1 Principal Components Regression

Principal components analysis (PCA) is a popular approach for deriving a low-dimensional set of features from a large set of variables. PCA is discussed in greater detail as a tool for unsupervised learning in Chapter 10. Here we describe its use as a dimension reduction technique for regression.
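Before turning to specific choices of the loadings, the generic two-step recipe in (6.16) and (6.17) can be written out directly in R. The sketch below is purely illustrative: the loading matrix phi is chosen at random here (in practice it would come from principal components or partial least squares), and the simulated data are not from the text.

set.seed(1)
n <- 100; p <- 5; M <- 2
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1] + x[, 2] + rnorm(n)

# Step 1: form the M linear combinations Z = X %*% phi, as in (6.16).
phi <- matrix(rnorm(p * M), p, M)    # arbitrary loadings, for illustration only
z <- x %*% phi

# Step 2: fit the low-dimensional model (6.17) by least squares.
fit <- lm(y ~ z)

# The implied coefficients on the original predictors, as in (6.18).
phi %*% coef(fit)[-1]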

An Overview of Principal Components Analysis

PCA is a technique for reducing the dimension of an n × p data matrix X. The first principal component direction of the data is that along which the observations vary the most. For instance, consider Figure 6.14, which shows population size (pop) in tens of thousands of people, and ad spending for a particular company (ad) in thousands of dollars, for 100 cities. The green solid line represents the first principal component direction of the data. We can see by eye that this is the direction along which there is the greatest variability in the data. That is, if we projected the 100 observations onto this line (as shown in the left-hand panel of Figure 6.15), then the resulting projected observations would have the largest possible variance; projecting the observations onto any other line would yield projected observations with lower variance. Projecting a point onto a line simply involves finding the location on the line which is closest to the point.

The first principal component is displayed graphically in Figure 6.14, but how can it be summarized mathematically? It is given by the formula

Z_1 = 0.839 \times (\mathrm{pop} - \overline{\mathrm{pop}}) + 0.544 \times (\mathrm{ad} - \overline{\mathrm{ad}}).     (6.19)

Here φ11 = 0.839 and φ21 = 0.544 are the principal component loadings, which define the direction referred to above. In (6.19), \overline{\mathrm{pop}} indicates the mean of all pop values in this data set, and \overline{\mathrm{ad}} indicates the mean of all advertising spending. The idea is that out of every possible linear combination of pop and ad such that φ11² + φ21² = 1, this particular linear combination yields the highest variance: i.e. this is the linear combination for which Var(φ11 × (pop − \overline{\mathrm{pop}}) + φ21 × (ad − \overline{\mathrm{ad}})) is maximized. It is necessary to consider only linear combinations of the form φ11² + φ21² = 1, since otherwise we could increase φ11 and φ21 arbitrarily in order to blow up the variance. In (6.19), the two loadings are both positive and have similar size, and so Z1 is almost an average of the two variables.

Since n = 100, pop and ad are vectors of length 100, and so is Z1 in (6.19). For instance,

z_{i1} = 0.839 \times (\mathrm{pop}_i - \overline{\mathrm{pop}}) + 0.544 \times (\mathrm{ad}_i - \overline{\mathrm{ad}}).     (6.20)

The values of z_{11}, ..., z_{n1} are known as the principal component scores, and can be seen in the right-hand panel of Figure 6.15.

There is also another interpretation for PCA: the first principal component vector defines the line that is as close as possible to the data. For instance, in Figure 6.14, the first principal component line minimizes the sum of the squared perpendicular distances between each point and the line. These distances are plotted as dashed line segments in the left-hand panel of Figure 6.15, in which the crosses represent the projection of each point onto the first principal component line. The first principal component has been chosen so that the projected observations are as close as possible to the original observations.
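A short sketch of how such loadings and scores might be computed in R with the base function prcomp(); the pop and ad vectors below are simulated stand-ins for the advertising data in the figures, so the loadings will not match (6.19) exactly.

set.seed(1)
pop <- rnorm(100, mean = 40, sd = 12)
ad  <- 0.5 * pop + rnorm(100, sd = 4)   # correlated with pop, as in Figure 6.14
ads <- data.frame(pop, ad)

# Principal components of the two centered variables.
pr.out <- prcomp(ads, center = TRUE, scale. = FALSE)

pr.out$rotation[, 1]   # loadings (phi_11, phi_21) of the first principal component
head(pr.out$x[, 1])    # first principal component scores z_i1, as in (6.20)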

FIGURE 6.15. A subset of the advertising data. The mean pop and ad budgets are indicated with a blue circle. Left: The first principal component direction is shown in green. It is the dimension along which the data vary the most, and it also defines the line that is closest to all n of the observations. The distances from each observation to the principal component are represented using the black dashed line segments. The blue dot represents the point (\overline{\mathrm{pop}}, \overline{\mathrm{ad}}). Right: The left-hand panel has been rotated so that the first principal component direction coincides with the x-axis.

In the right-hand panel of Figure 6.15, the left-hand panel has been rotated so that the first principal component direction coincides with the x-axis. It is possible to show that the first principal component score for the ith observation, given in (6.20), is the distance in the x-direction of the ith cross from zero. So for example, the point in the bottom-left corner of the left-hand panel of Figure 6.15 has a large negative principal component score, z_{i1} = −26.1, while the point in the top-right corner has a large positive score, z_{i1} = 18.7. These scores can be computed directly using (6.20).

We can think of the values of the principal component Z1 as single-number summaries of the joint pop and ad budgets for each location. In this example, if z_{i1} = 0.839 × (pop_i − \overline{\mathrm{pop}}) + 0.544 × (ad_i − \overline{\mathrm{ad}}) < 0, then this indicates a city with below-average population size and below-average ad spending. A positive score suggests the opposite. How well can a single number represent both pop and ad? In this case, Figure 6.14 indicates that pop and ad have approximately a linear relationship, and so we might expect that a single-number summary will work well. Figure 6.16 displays z_{i1} versus both pop and ad. The plots show a strong relationship between the first principal component and the two features. In other words, the first principal component appears to capture most of the information contained in the pop and ad predictors.

So far we have concentrated on the first principal component. In general, one can construct up to p distinct principal components. The second principal component Z2 is a linear combination of the variables that is uncorrelated with Z1, and has largest variance subject to this constraint. The second principal component direction is illustrated as a dashed blue line in Figure 6.14. It turns out that the zero correlation condition of Z1 with Z2
is equivalent to the condition that the direction must be perpendicular, or orthogonal, to the first principal component direction.

FIGURE 6.16. Plots of the first principal component scores z_{i1} versus pop and ad. The relationships are strong.

The second principal component is given by the formula

Z_2 = 0.544 \times (\mathrm{pop} - \overline{\mathrm{pop}}) - 0.839 \times (\mathrm{ad} - \overline{\mathrm{ad}}).

Since the advertising data has two predictors, the first two principal components contain all of the information that is in pop and ad. However, by construction, the first component will contain the most information. Consider, for example, the much larger variability of z_{i1} (the x-axis) versus z_{i2} (the y-axis) in the right-hand panel of Figure 6.15. The fact that the second principal component scores are much closer to zero indicates that this component captures far less information. As another illustration, Figure 6.17 displays z_{i2} versus pop and ad. There is little relationship between the second principal component and these two predictors, again suggesting that in this case, one only needs the first principal component in order to accurately represent the pop and ad budgets.

With two-dimensional data, such as in our advertising example, we can construct at most two principal components. However, if we had other predictors, such as population age, income level, education, and so forth, then additional components could be constructed. They would successively maximize variance, subject to the constraint of being uncorrelated with the preceding components.

The Principal Components Regression Approach

The principal components regression (PCR) approach involves constructing the first M principal components, Z1, ..., ZM, and then using these components as the predictors in a linear regression model that is fit using least squares. The key idea is that often a small number of principal components suffice to explain most of the variability in the data, as well as the relationship with the response. In other words, we assume that the directions in which X1, ..., Xp show the most variation are the directions that are associated with Y. While this assumption is not guaranteed
to be true, it often turns out to be a reasonable enough approximation to give good results.

FIGURE 6.17. Plots of the second principal component scores z_{i2} versus pop and ad. The relationships are weak.

If the assumption underlying PCR holds, then fitting a least squares model to Z1, ..., ZM will lead to better results than fitting a least squares model to X1, ..., Xp, since most or all of the information in the data that relates to the response is contained in Z1, ..., ZM, and by estimating only M ≪ p coefficients we can mitigate overfitting. In the advertising data, the first principal component explains most of the variance in both pop and ad, so a principal component regression that uses this single variable to predict some response of interest, such as sales, will likely perform quite well.

FIGURE 6.18. PCR was applied to two simulated data sets; squared bias, variance, and test MSE are shown. Left: Simulated data from Figure 6.8. Right: Simulated data from Figure 6.9.

Figure 6.18 displays the PCR fits on the simulated data sets from Figures 6.8 and 6.9. Recall that both data sets were generated using n = 50 observations and p = 45 predictors. However, while the response in the first data set was a function of all the predictors, the response in the second data set was generated using only two of the predictors. The curves are plotted as a function of M, the number of principal components used as predictors in the regression model. As more principal components are used in
the regression model, the bias decreases, but the variance increases. This results in a typical U-shape for the mean squared error. When M = p = 45, then PCR amounts simply to a least squares fit using all of the original predictors. The figure indicates that performing PCR with an appropriate choice of M can result in a substantial improvement over least squares, especially in the left-hand panel. However, by examining the ridge regression and lasso results in Figures 6.5, 6.8, and 6.9, we see that PCR does not perform as well as the two shrinkage methods in this example.

FIGURE 6.19. PCR, ridge regression, and the lasso were applied to a simulated data set in which the first five principal components of X contain all the information about the response Y. In each panel, the irreducible error Var(ε) is shown as a horizontal dashed line. Left: Results for PCR. Right: Results for lasso (solid) and ridge regression (dotted). The x-axis displays the shrinkage factor of the coefficient estimates, defined as the ℓ₂ norm of the shrunken coefficient estimates divided by the ℓ₂ norm of the least squares estimate.

The relatively worse performance of PCR in Figure 6.18 is a consequence of the fact that the data were generated in such a way that many principal components are required in order to adequately model the response. In contrast, PCR will tend to do well in cases when the first few principal components are sufficient to capture most of the variation in the predictors as well as the relationship with the response. The left-hand panel of Figure 6.19 illustrates the results from another simulated data set designed to be more favorable to PCR. Here the response was generated in such a way that it depends exclusively on the first five principal components. Now the bias drops to zero rapidly as M, the number of principal components used in PCR, increases. The mean squared error displays a clear minimum at M = 5. The right-hand panel of Figure 6.19 displays the results on these data using ridge regression and the lasso. All three methods offer a significant improvement over least squares. However, PCR and ridge regression slightly outperform the lasso.

We note that even though PCR provides a simple way to perform regression using M < p predictors, it is not a feature selection method.
This is because each of the M principal components used in the regression is a linear combination of all p of the original features. For instance, in (6.19), Z1 was a linear combination of both pop and ad. Therefore, while PCR often performs quite well in many practical settings, it does not result in the development of a model that relies upon a small set of the original features. In this sense, PCR is more closely related to ridge regression than to the lasso. In fact, one can show that PCR and ridge regression are very closely related; one can even think of ridge regression as a continuous version of PCR. (More details can be found in Section 3.5 of Elements of Statistical Learning by Hastie, Tibshirani, and Friedman.)

FIGURE 6.20. Left: PCR standardized coefficient estimates on the Credit data set for different values of M. Right: The ten-fold cross-validation MSE obtained using PCR, as a function of M.

In PCR, the number of principal components, M, is typically chosen by cross-validation. The results of applying PCR to the Credit data set are shown in Figure 6.20; the right-hand panel displays the cross-validation errors obtained, as a function of M. On these data, the lowest cross-validation error occurs when there are M = 10 components; this corresponds to almost no dimension reduction at all, since PCR with M = 11 is equivalent to simply performing least squares.

When performing PCR, we generally recommend standardizing each predictor, using (6.6), prior to generating the principal components. This standardization ensures that all variables are on the same scale. In the absence of standardization, the high-variance variables will tend to play a larger role in the principal components obtained, and the scale on which the variables are measured will ultimately have an effect on the final PCR model. However, if the variables are all measured in the same units (say, kilograms, or inches), then one might choose not to standardize them.
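A minimal sketch of principal components regression in R, assuming the pls package is installed; the Credit data set is not reproduced here, so the sketch uses an illustrative simulated data frame, and the exact argument names may differ slightly across package versions.

library(pls)

set.seed(2)
n <- 100; p <- 10
x <- matrix(rnorm(n * p), n, p)
y <- as.vector(x %*% rnorm(p) + rnorm(n))
dat <- data.frame(y = y, x)

# scale = TRUE standardizes each predictor as in (6.6); validation = "CV"
# requests ten-fold cross-validation over M = 1, ..., p components.
pcr.fit <- pcr(y ~ ., data = dat, scale = TRUE, validation = "CV")

summary(pcr.fit)                            # CV error for each choice of M
validationplot(pcr.fit, val.type = "MSEP")  # plot CV MSE against M

# Predictions using, say, the first five principal components.
pred <- predict(pcr.fit, newdata = dat, ncomp = 5)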

FIGURE 6.21. For the advertising data, the first PLS direction (solid line) and first PCR direction (dotted line) are shown.

6.3.2 Partial Least Squares

The PCR approach that we just described involves identifying linear combinations, or directions, that best represent the predictors X1, ..., Xp. These directions are identified in an unsupervised way, since the response Y is not used to help determine the principal component directions. That is, the response does not supervise the identification of the principal components. Consequently, PCR suffers from a drawback: there is no guarantee that the directions that best explain the predictors will also be the best directions to use for predicting the response. Unsupervised methods are discussed further in Chapter 10.

We now present partial least squares (PLS), a supervised alternative to PCR. Like PCR, PLS is a dimension reduction method, which first identifies a new set of features Z1, ..., ZM that are linear combinations of the original features, and then fits a linear model via least squares using these M new features. But unlike PCR, PLS identifies these new features in a supervised way, that is, it makes use of the response Y in order to identify new features that not only approximate the old features well, but also that are related to the response. Roughly speaking, the PLS approach attempts to find directions that help explain both the response and the predictors.

We now describe how the first PLS direction is computed. After standardizing the p predictors, PLS computes the first direction Z1 by setting each φ_{j1} in (6.16) equal to the coefficient from the simple linear regression of Y onto Xj. One can show that this coefficient is proportional to the correlation between Y and Xj. Hence, in computing Z1 = Σ_{j=1}^p φ_{j1} Xj, PLS places the highest weight on the variables that are most strongly related to the response.

Figure 6.21 displays an example of PLS on the advertising data. The solid green line indicates the first PLS direction, while the dotted line shows the first principal component direction. PLS has chosen a direction that has less change in the ad dimension per unit change in the pop dimension, relative to PCA.
This suggests that pop is more highly correlated with the response than is ad. The PLS direction does not fit the predictors as closely as does PCA, but it does a better job explaining the response.

To identify the second PLS direction we first adjust each of the variables for Z1, by regressing each variable on Z1 and taking residuals. These residuals can be interpreted as the remaining information that has not been explained by the first PLS direction. We then compute Z2 using this orthogonalized data in exactly the same fashion as Z1 was computed based on the original data. This iterative approach can be repeated M times to identify multiple PLS components Z1, ..., ZM. Finally, at the end of this procedure, we use least squares to fit a linear model to predict Y using Z1, ..., ZM in exactly the same fashion as for PCR.

As with PCR, the number M of partial least squares directions used in PLS is a tuning parameter that is typically chosen by cross-validation. We generally standardize the predictors and response before performing PLS.

PLS is popular in the field of chemometrics, where many variables arise from digitized spectrometry signals. In practice it often performs no better than ridge regression or PCR. While the supervised dimension reduction of PLS can reduce bias, it also has the potential to increase variance, so that the overall benefit of PLS relative to PCR is a wash.
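A brief sketch of fitting PLS in R, again assuming the pls package; the simulated data frame reuses the illustrative setup from the PCR sketch and is not a data set from the text.

library(pls)

set.seed(2)
n <- 100; p <- 10
x <- matrix(rnorm(n * p), n, p)
y <- as.vector(x %*% rnorm(p) + rnorm(n))
dat <- data.frame(y = y, x)

# plsr() has the same interface as pcr(): standardize the variables and
# choose the number of directions M by ten-fold cross-validation.
pls.fit <- plsr(y ~ ., data = dat, scale = TRUE, validation = "CV")

summary(pls.fit)
validationplot(pls.fit, val.type = "MSEP")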

6.4 Considerations in High Dimensions

6.4.1 High-Dimensional Data

Most traditional statistical techniques for regression and classification are intended for the low-dimensional setting in which n, the number of observations, is much greater than p, the number of features. This is due in part to the fact that throughout most of the field's history, the bulk of scientific problems requiring the use of statistics have been low-dimensional. For instance, consider the task of developing a model to predict a patient's blood pressure on the basis of his or her age, gender, and body mass index (BMI). There are three predictors, or four if an intercept is included in the model, and perhaps several thousand patients for whom blood pressure and age, gender, and BMI are available. Hence n ≫ p, and so the problem is low-dimensional. (By dimension here we are referring to the size of p.)

In the past 20 years, new technologies have changed the way that data are collected in fields as diverse as finance, marketing, and medicine. It is now commonplace to collect an almost unlimited number of feature measurements (p very large). While p can be extremely large, the number of observations n is often limited due to cost, sample availability, or other considerations. Two examples are as follows:

1. Rather than predicting blood pressure on the basis of just age, gender, and BMI, one might also collect measurements for half a million single nucleotide polymorphisms (SNPs; these are individual DNA mutations that are relatively common in the population) for inclusion in the predictive model. Then n ≈ 200 and p ≈ 500,000.

2. A marketing analyst interested in understanding people's online shopping patterns could treat as features all of the search terms entered by users of a search engine. This is sometimes known as the "bag-of-words" model. The same researcher might have access to the search histories of only a few hundred or a few thousand search engine users who have consented to share their information with the researcher. For a given user, each of the p search terms is scored present (1) or absent (0), creating a large binary feature vector. Then n ≈ 1,000 and p is much larger.

Data sets containing more features than observations are often referred to as high-dimensional. Classical approaches such as least squares linear regression are not appropriate in this setting. Many of the issues that arise in the analysis of high-dimensional data were discussed earlier in this book, since they apply also when n > p: these include the role of the bias-variance trade-off and the danger of overfitting. Though these issues are always relevant, they can become particularly important when the number of features is very large relative to the number of observations. We have defined the high-dimensional setting as the case where the number of features p is larger than the number of observations n. But the considerations that we will now discuss certainly also apply if p is slightly smaller than n, and are best always kept in mind when performing supervised learning.

6.4.2 What Goes Wrong in High Dimensions?

In order to illustrate the need for extra care and specialized techniques for regression and classification when p > n, we begin by examining what can go wrong if we apply a statistical technique not intended for the high-dimensional setting. For this purpose, we examine least squares regression. But the same concepts apply to logistic regression, linear discriminant analysis, and other classical statistical approaches.

When the number of features p is as large as, or larger than, the number of observations n, least squares as described in Chapter 3 cannot (or rather, should not) be performed. The reason is simple: regardless of whether or not there truly is a relationship between the features and the response, least squares will yield a set of coefficient estimates that result in a perfect fit to the data, such that the residuals are zero.

An example is shown in Figure 6.22 with p = 1 feature (plus an intercept) in two cases: when there are 20 observations, and when there are only two observations. When there are 20 observations, n > p and the least

An example is shown in Figure 6.22 with p = 1 feature (plus an intercept) in two cases: when there are 20 observations, and when there are only two observations.

FIGURE 6.22. Left: Least squares regression in the low-dimensional setting. Right: Least squares regression with n = 2 observations and two parameters to be estimated (an intercept and a coefficient). [Both panels plot Y against X.]

When there are 20 observations, n > p and the least squares regression line does not perfectly fit the data; instead, the regression line seeks to approximate the 20 observations as well as possible. On the other hand, when there are only two observations, then regardless of the values of those observations, the regression line will fit the data exactly. This is problematic because this perfect fit will almost certainly lead to overfitting of the data. In other words, though it is possible to perfectly fit the training data in the high-dimensional setting, the resulting linear model will perform extremely poorly on an independent test set, and therefore does not constitute a useful model. In fact, we can see that this happened in Figure 6.22: the least squares line obtained in the right-hand panel will perform very poorly on a test set comprised of the observations in the left-hand panel. The problem is simple: when p > n or p ≈ n, a simple least squares regression line is too flexible and hence overfits the data.

Figure 6.23 further illustrates the risk of carelessly applying least squares when the number of features p is large. Data were simulated with n = 20 observations, and regression was performed with between 1 and 20 features, each of which was completely unrelated to the response. As shown in the figure, the model R² increases to 1 as the number of features included in the model increases, and correspondingly the training set MSE decreases to 0 as the number of features increases, even though the features are completely unrelated to the response. On the other hand, the MSE on an independent test set becomes extremely large as the number of features included in the model increases, because including the additional predictors leads to a vast increase in the variance of the coefficient estimates. Looking at the test set MSE, it is clear that the best model contains at most a few variables. However, someone who carelessly examines only the R² or the training set MSE might erroneously conclude that the model with the greatest number of variables is best. This indicates the importance of applying extra care when analyzing data sets with a large number of variables, and of always evaluating model performance on an independent test set.
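The following short simulation is a sketch in the spirit of the experiment just described (our own code, not the authors'): as completely unrelated features are added to the model, the training MSE falls toward zero while the test MSE grows.

set.seed(1)
n <- 20; p <- 19                               # up to n - 1 noise features
x.train <- matrix(rnorm(n * p), n, p)
x.test  <- matrix(rnorm(n * p), n, p)
y.train <- rnorm(n)                            # response unrelated to the features
y.test  <- rnorm(n)

train.mse <- test.mse <- rep(NA, p)
for (d in 1:p) {
  fit <- lm(y.train ~ x.train[, 1:d, drop = FALSE])
  train.mse[d] <- mean(fit$residuals^2)
  beta <- coef(fit)
  pred <- cbind(1, x.test[, 1:d, drop = FALSE]) %*% beta
  test.mse[d] <- mean((y.test - pred)^2)
}
# train.mse decreases toward 0 as d grows; test.mse generally increases.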

FIGURE 6.23. On a simulated example with n = 20 training observations, features that are completely unrelated to the outcome are added to the model. Left: The R² increases to 1 as more features are included. Center: The training set MSE decreases to 0 as more features are included. Right: The test set MSE increases as more features are included. [Each panel is plotted against the number of variables.]

In Section 6.1.3, we saw a number of approaches for adjusting the training set RSS or R² in order to account for the number of variables used to fit a least squares model. Unfortunately, the C_p, AIC, and BIC approaches are not appropriate in the high-dimensional setting, because estimating $\hat{\sigma}^2$ is problematic. (For instance, the formula for $\hat{\sigma}^2$ from Chapter 3 yields an estimate $\hat{\sigma}^2 = 0$ in this setting.) Similarly, problems arise in the application of adjusted R² in the high-dimensional setting, since one can easily obtain a model with an adjusted R² value of 1. Clearly, alternative approaches that are better-suited to the high-dimensional setting are required.

6.4.3 Regression in High Dimensions

It turns out that many of the methods seen in this chapter for fitting less flexible least squares models, such as forward stepwise selection, ridge regression, the lasso, and principal components regression, are particularly useful for performing regression in the high-dimensional setting. Essentially, these approaches avoid overfitting by using a less flexible fitting approach than least squares.
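As a brief sketch (simulated data of our own; the glmnet package is introduced properly in the lab in Section 6.6), such less flexible fits can be computed even when p is far larger than n:

library(glmnet)
set.seed(1)
n <- 100; p <- 1000
x <- matrix(rnorm(n * p), n, p)
y <- drop(x[, 1:20] %*% rep(1, 20)) + rnorm(n)   # only the first 20 features matter

cv.lasso <- cv.glmnet(x, y, alpha = 1)           # lasso, lambda chosen by CV
cv.ridge <- cv.glmnet(x, y, alpha = 0)           # ridge, lambda chosen by CV
sum(coef(cv.lasso, s = "lambda.min") != 0)       # the lasso yields a sparse fit even though p >> n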

Figure 6.24 illustrates the performance of the lasso in a simple simulated example. There are p = 20, 50, or 2,000 features, of which 20 are truly associated with the outcome. The lasso was performed on n = 100 training observations, and the mean squared error was evaluated on an independent test set. As the number of features increases, the test set error increases. When p = 20, the lowest validation set error was achieved when λ in (6.7) was small; however, when p was larger then the lowest validation set error was achieved using a larger value of λ. In each boxplot, rather than reporting the values of λ used, the degrees of freedom of the resulting lasso solution is displayed; this is simply the number of non-zero coefficient estimates in the lasso solution, and is a measure of the flexibility of the lasso fit.

FIGURE 6.24. The lasso was performed with n = 100 observations and three values of p, the number of features. Of the p features, 20 were associated with the response. The boxplots show the test MSEs that result using three different values of the tuning parameter λ in (6.7). For ease of interpretation, rather than reporting λ, the degrees of freedom are reported; for the lasso this turns out to be simply the number of estimated non-zero coefficients. When p = 20, the lowest test MSE was obtained with the smallest amount of regularization. When p = 50, the lowest test MSE was achieved when there is a substantial amount of regularization. When p = 2,000 the lasso performed poorly regardless of the amount of regularization, due to the fact that only 20 of the 2,000 features truly are associated with the outcome. [Panels show p = 20, p = 50, and p = 2,000; the horizontal axis is the degrees of freedom.]

Figure 6.24 highlights three important points: (1) regularization or shrinkage plays a key role in high-dimensional problems, (2) appropriate tuning parameter selection is crucial for good predictive performance, and (3) the test error tends to increase as the dimensionality of the problem (i.e. the number of features or predictors) increases, unless the additional features are truly associated with the response.

The third point above is in fact a key principle in the analysis of high-dimensional data, which is known as the curse of dimensionality. One might think that as the number of features used to fit a model increases, the quality of the fitted model will increase as well. However, comparing the left-hand and right-hand panels in Figure 6.24, we see that this is not necessarily the case: in this example, the test set MSE almost doubles as p increases from 20 to 2,000. In general, adding signal features that are truly associated with the response will improve the fitted model, in the sense of leading to a reduction in test set error. However, adding noise features that are not truly associated with the response will lead to a deterioration in the fitted model, and consequently an increased test set error.

This is because noise features increase the dimensionality of the problem, exacerbating the risk of overfitting (since noise features may be assigned nonzero coefficients due to chance associations with the response on the training set) without any potential upside in terms of improved test set error. Thus, we see that new technologies that allow for the collection of measurements for thousands or millions of features are a double-edged sword: they can lead to improved predictive models if these features are in fact relevant to the problem at hand, but will lead to worse results if the features are not relevant. Even if they are relevant, the variance incurred in fitting their coefficients may outweigh the reduction in bias that they bring.

6.4.4 Interpreting Results in High Dimensions

When we perform the lasso, ridge regression, or other regression procedures in the high-dimensional setting, we must be quite cautious in the way that we report the results obtained. In Chapter 3, we learned about multicollinearity, the concept that the variables in a regression might be correlated with each other. In the high-dimensional setting, the multicollinearity problem is extreme: any variable in the model can be written as a linear combination of all of the other variables in the model. Essentially, this means that we can never know exactly which variables (if any) truly are predictive of the outcome, and we can never identify the best coefficients for use in the regression. At most, we can hope to assign large regression coefficients to variables that are correlated with the variables that truly are predictive of the outcome.

For instance, suppose that we are trying to predict blood pressure on the basis of half a million SNPs, and that forward stepwise selection indicates that 17 of those SNPs lead to a good predictive model on the training data. It would be incorrect to conclude that these 17 SNPs predict blood pressure more effectively than the other SNPs not included in the model. There are likely to be many sets of 17 SNPs that would predict blood pressure just as well as the selected model. If we were to obtain an independent data set and perform forward stepwise selection on that data set, we would likely obtain a model containing a different, and perhaps even non-overlapping, set of SNPs. This does not detract from the value of the model obtained; for instance, the model might turn out to be very effective in predicting blood pressure on an independent set of patients, and might be clinically useful for physicians. But we must be careful not to overstate the results obtained, and to make it clear that what we have identified is simply one of many possible models for predicting blood pressure, and that it must be further validated on independent data sets.

It is also important to be particularly careful in reporting errors and measures of model fit in the high-dimensional setting. We have seen that when p > n, it is easy to obtain a useless model that has zero residuals. Therefore, one should never use sum of squared errors, p-values, R² statistics, or other traditional measures of model fit on the training data as evidence of a good model fit in the high-dimensional setting. For instance, as we saw in Figure 6.23, one can easily obtain a model with R² = 1 when p > n. Reporting this fact might mislead others into thinking that a statistically valid and useful model has been obtained, whereas in fact this provides absolutely no evidence of a compelling model. It is important to instead report results on an independent test set, or cross-validation errors. For instance, the MSE or R² on an independent test set is a valid measure of model fit, but the MSE on the training set certainly is not.

6.5 Lab 1: Subset Selection Methods

6.5.1 Best Subset Selection

Here we apply the best subset selection approach to the Hitters data. We wish to predict a baseball player's Salary on the basis of various statistics associated with performance in the previous year.

First of all, we note that the Salary variable is missing for some of the players. The is.na() function can be used to identify the missing observations. It returns a vector of the same length as the input vector, with a TRUE for any elements that are missing, and a FALSE for non-missing elements. The sum() function can then be used to count all of the missing elements.

> library(ISLR)
> fix(Hitters)
> names(Hitters)
 [1] "AtBat"     "Hits"      "HmRun"     "Runs"      "RBI"
 [6] "Walks"     "Years"     "CAtBat"    "CHits"     "CHmRun"
[11] "CRuns"     "CRBI"      "CWalks"    "League"    "Division"
[16] "PutOuts"   "Assists"   "Errors"    "Salary"    "NewLeague"
> dim(Hitters)
[1] 322  20
> sum(is.na(Hitters$Salary))
[1] 59

Hence we see that Salary is missing for 59 players. The na.omit() function removes all of the rows that have missing values in any variable.

> Hitters=na.omit(Hitters)
> dim(Hitters)
[1] 263  20
> sum(is.na(Hitters))
[1] 0

The regsubsets() function (part of the leaps library) performs best subset selection by identifying the best model that contains a given number of predictors, where best is quantified using RSS. The syntax is the same as for lm(). The summary() command outputs the best set of variables for each model size.

260 6.5 Lab 1: Subset Selection Methods 245 > library(leaps) > regfit.full=regsubsets(Salary ∼ .,Hitters) > summary(regfit.full) Subset selection object Call: regsubsets.formula(Salary ., Hitters) ∼ 19 Variables (and intercept) ... 1 subsets of each size up to 8 Selection Algorithm: exhaustive AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits 1 (1)"" "" "" "" """" "" "" "" 2 (1)"" "*" "" "" """" "" "" "" 3 (1)"" "*" "" "" """" "" "" "" 4 (1)"" "*" "" "" """" "" "" "" 5 ( 1 ) "*" "*" " " " " " " " " " " " " " " 6 ( 1 ) "*" "*" " " " " " " "*" " " " " " " 7 (1)"" "*" "" "" """*" "" "*" "*" 8 ( 1 ) "*" "*" " " " " " " "*" " " " " " " CHmRun CRuns CRBI CWalks LeagueN DivisionW PutOuts 1 (1)"" "" "*" "" "" "" "" 2 (1)"" "" "*" "" "" "" "" 3 (1)"" "" "*" "" "" "" "*" 4 (1)"" "" "*" "" "" "*" "*" 5 (1)"" "" "*" "" "" "*" "*" 6 (1)"" "" "*" "" "" "*" "*" 7 (1)"*" "" "" "" "" "*" "*" 8 ( 1 ) "*" "*" " " "*" " " "*" "*" Assists Errors NewLeagueN 1 (1)"" "" "" 2 (1)"" "" "" 3 (1)"" "" "" 4 (1)"" "" "" 5 (1)"" "" "" 6 (1)"" "" "" 7 (1)"" "" "" 8 (1)"" "" "" An asterisk indicates that a given variable is included in the corresponding model. For instance, this output indicates that the best two-variable model Hits and CRBI . By default, regsubsets() only reports results contains only up to the best eight-variable model. But the nvmax option can be used in order to return as many variables as are desired. Here we fit up to a 19-variable model. > regfit.full=regsubsets(Salary ∼ .,data=Hitters , nvmax=19) > reg.summary=summary(regfit.full) 2 2 function also returns R , , RSS, adjusted R summary() The C ,andBIC. p We can examine these to try to select the best overall model. > names(reg.summary) [1] "which" "rsq" "rss" "adjr2" "cp" "bic" [7] "outmat" "obj"

261 246 6. Linear Model Selection and Regularization 2 For instance, we see that the statistic increases from 32 %, when only R one variable is included in the model, to almost 55 %, when all variables 2 R are included. As expected, the statistic increases monotonically as more variables are included. > reg.summary$rsq [1] 0.321 0.425 0.451 0.475 0.491 0.509 0.514 0.529 0.535 [10] 0.540 0.543 0.544 0.544 0.545 0.545 0.546 0.546 0.546 [19] 0.546 2 R , C , and BIC for all of the models at once will Plotting RSS, adjusted p help us decide which model to select. Note the option tells R to type="l" connect the plotted points with lines. > par(mfrow=c(2,2)) > plot(reg.summary$rss ,xlab="Number of Variables",ylab="RSS", type="l") > plot(reg.summary$adjr2 ,xlab="Number of Variables", ylab="Adjusted RSq",type="l") points() command works like the plot() command, except that it The points() puts points on a plot that has already been created, instead of creating a which.max() function can be used to identify the location of new plot. The the maximum point of a vector. We will now plot a red dot to indicate the 2 R model with the largest adjusted statistic. > which.max(reg. summary$adjr2) [1] 11 > points(11,reg. =20) summary$adjr2[11], col="red",cex=2,pch In a similar fashion we can plot the C and BIC statistics, and indicate the p models with the smallest statistic using which.min() . which.min() > plot(reg.summary$cp ,xlab="Number of Variables",ylab="Cp", type=’l’) > which.min(reg.summary$cp ) [1] 10 > points(10,reg.summary$cp [10],col="red",cex=2,pch=20) > which.min(reg.summary$bic) [1] 6 > plot(reg.summary$bic ,xlab="Number of Variables",ylab="BIC", type=’l’) > points(6,reg.summary$bic [6],col="red",cex=2,pch=20) The regsubsets() function has a built-in plot() command which can be used to display the selected variables for the best model with a given 2 , adjusted R ,or C number of predictors, ranked according to the BIC, p AIC. To find out more about this function, type ?plot.regsubsets . > plot(regfit.full,scale="r2") > plot(regfit.full,scale="adjr2") > plot(regfit.full,scale="Cp") > plot(regfit.full,scale="bic")

262 6.5 Lab 1: Subset Selection Methods 247 The top row of each plot contains a black square for each variable selected according to the optimal model associat ed with that statistic. For instance, we see that several models share a BIC close to − 150. However, the model AtBat , with the lowest BIC is the six-variable model that contains only , Walks , CRBI , DivisionW ,and PutOuts . We can use the Hits function coef() to see the coefficient estimates associated with this model. > coef(regfit.full,6) (Intercept) AtBat Hits Walks CRBI 91.512 -1.869 7.604 3.698 0.643 DivisionW PutOuts -122.952 0.264 6.5.2 Forward and Backward Stepwise Selection regsubsets() We can also use the function to perform forward stepwise or backward stepwise selection, using the argument method="forward" or method="backward" . > regfit.fwd=regsubsets (Salary ∼ .,data=Hi tters , nvmax=19, method="forward") > summary(regfit.fwd) ∼ tters , nvmax=19, > regfit.bwd=regsubsets (Salary .,data=Hi method="backward") > summary(regfit.bwd) For instance, we see that using forward stepwise selection, the best one- CRBI , and the best two-variable model ad- variable model contains only ditionally includes Hits . For this data, the best one-variable through six- variable models are each identical for best subset and forward selection. However, the best seven-variable models identified by forward stepwise se- lection, backward stepwise selection, and best subset selection are different. > coef(regfit.full,7) (Intercept) Hits Walks CAtBat CHits 79.451 1.283 3.227 -0.375 1.496 CHmRun DivisionW PutOuts 1.442 -129.987 0.237 > coef(regfit.fwd,7) (Intercept) AtBat Hits Walks CRBI 109.787 -1.959 7.450 4.913 0.854 CWalks DivisionW PutOuts -0.305 -127.122 0.253 > coef(regfit.bwd,7) (Intercept) AtBat Hits Walks CRuns 105.649 -1.976 6.757 6.056 1.129 CWalks DivisionW PutOuts -0.716 -116.169 0.303
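One way to see these differences programmatically is to compare the names of the selected variables directly. This is a small additional check of ours, not part of the original lab; the output is not shown.

vars.full <- names(coef(regfit.full, 7))
vars.fwd  <- names(coef(regfit.fwd, 7))
vars.bwd  <- names(coef(regfit.bwd, 7))
setdiff(vars.full, vars.fwd)   # chosen by best subset but not by forward stepwise
setdiff(vars.fwd, vars.bwd)    # chosen by forward but not by backward stepwise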

6.5.3 Choosing Among Models Using the Validation Set Approach and Cross-Validation

We just saw that it is possible to choose among a set of models of different sizes using C_p, BIC, and adjusted R². We will now consider how to do this using the validation set and cross-validation approaches.

In order for these approaches to yield accurate estimates of the test error, we must use only the training observations to perform all aspects of model-fitting, including variable selection. Therefore, the determination of which model of a given size is best must be made using only the training observations. This point is subtle but important. If the full data set is used to perform the best subset selection step, the validation set errors and cross-validation errors that we obtain will not be accurate estimates of the test error.

In order to use the validation set approach, we begin by splitting the observations into a training set and a test set. We do this by creating a random vector, train, of elements equal to TRUE if the corresponding observation is in the training set, and FALSE otherwise. The vector test has a TRUE if the observation is in the test set, and a FALSE otherwise. Note that the ! in the command to create test causes TRUEs to be switched to FALSEs and vice versa. We also set a random seed so that the user will obtain the same training set/test set split.

> set.seed(1)
> train=sample(c(TRUE,FALSE), nrow(Hitters), rep=TRUE)
> test=(!train)

Now, we apply regsubsets() to the training set in order to perform best subset selection.

> regfit.best=regsubsets(Salary~., data=Hitters[train,], nvmax=19)

Notice that we subset the Hitters data frame directly in the call in order to access only the training subset of the data, using the expression Hitters[train,]. We now compute the validation set error for the best model of each model size. We first make a model matrix from the test data.

> test.mat=model.matrix(Salary~., data=Hitters[test,])

The model.matrix() function is used in many regression packages for building an "X" matrix from data. Now we run a loop, and for each size i, we extract the coefficients from regfit.best for the best model of that size, multiply them into the appropriate columns of the test model matrix to form the predictions, and compute the test MSE.

> val.errors=rep(NA,19)
> for(i in 1:19){
+   coefi=coef(regfit.best, id=i)
+   pred=test.mat[,names(coefi)]%*%coefi
+   val.errors[i]=mean((Hitters$Salary[test]-pred)^2)
+ }

We find that the best model is the one that contains ten variables.

> val.errors
 [1] 220968 169157 178518 163426 168418 171271 162377 157909
 [9] 154056 148162 151156 151742 152214 157359 158541 158743
[17] 159973 159860 160106
> which.min(val.errors)
[1] 10
> coef(regfit.best,10)
(Intercept)       AtBat        Hits       Walks      CAtBat
    -80.275      -1.468       7.163       3.643      -0.186
      CHits      CHmRun      CWalks     LeagueN   DivisionW
      1.105       1.384      -0.748      84.558     -53.029
    PutOuts
      0.238

This was a little tedious, partly because there is no predict() method for regsubsets(). Since we will be using this function again, we can capture our steps above and write our own predict method.

> predict.regsubsets=function(object, newdata, id, ...){
+   form=as.formula(object$call[[2]])
+   mat=model.matrix(form, newdata)
+   coefi=coef(object, id=id)
+   xvars=names(coefi)
+   mat[,xvars]%*%coefi
+ }

Our function pretty much mimics what we did above. The only complex part is how we extracted the formula used in the call to regsubsets(). We demonstrate how we use this function below, when we do cross-validation.
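As a quick check of the new method (this snippet is ours, not the book's), we can reproduce the ten-variable test MSE computed by hand above. The call predict() dispatches to predict.regsubsets() because regfit.best has class "regsubsets".

pred10 <- predict(regfit.best, Hitters[test, ], id = 10)
mean((Hitters$Salary[test] - pred10)^2)   # should equal val.errors[10] above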

Finally, we perform best subset selection on the full data set, and select the best ten-variable model. It is important that we make use of the full data set in order to obtain more accurate coefficient estimates. Note that we perform best subset selection on the full data set and select the best ten-variable model, rather than simply using the variables that were obtained from the training set, because the best ten-variable model on the full data set may differ from the corresponding model on the training set.

> regfit.best=regsubsets(Salary~., data=Hitters, nvmax=19)
> coef(regfit.best,10)
(Intercept)       AtBat        Hits       Walks      CAtBat
    162.535      -2.169       6.918       5.773      -0.130
      CRuns        CRBI      CWalks   DivisionW     PutOuts
      1.408       0.774      -0.831    -112.380       0.297
    Assists
      0.283

In fact, we see that the best ten-variable model on the full data set has a different set of variables than the best ten-variable model on the training set.

We now try to choose among the models of different sizes using cross-validation. This approach is somewhat involved, as we must perform best subset selection within each of the k training sets. Despite this, we see that R, with its clever subsetting syntax, makes this job quite easy. First, we create a vector that allocates each observation to one of k = 10 folds, and we create a matrix in which we will store the results.

> k=10
> set.seed(1)
> folds=sample(1:k, nrow(Hitters), replace=TRUE)
> cv.errors=matrix(NA, k, 19, dimnames=list(NULL, paste(1:19)))

Now we write a for loop that performs cross-validation. In the jth fold, the elements of folds that equal j are in the test set, and the remainder are in the training set. We make our predictions for each model size (using our new predict() method), compute the test errors on the appropriate subset, and store them in the appropriate slot in the matrix cv.errors.

> for(j in 1:k){
+   best.fit=regsubsets(Salary~., data=Hitters[folds!=j,], nvmax=19)
+   for(i in 1:19){
+     pred=predict(best.fit, Hitters[folds==j,], id=i)
+     cv.errors[j,i]=mean((Hitters$Salary[folds==j]-pred)^2)
+   }
+ }

This has given us a 10×19 matrix, of which the (i, j)th element corresponds to the test MSE for the ith cross-validation fold for the best j-variable model. We use the apply() function to average over the columns of this matrix in order to obtain a vector for which the jth element is the cross-validation error for the j-variable model.

> mean.cv.errors=apply(cv.errors, 2, mean)
> mean.cv.errors
 [1] 160093 140197 153117 151159 146841 138303 144346 130208
 [9] 129460 125335 125154 128274 133461 133975 131826 131883
[17] 132751 133096 132805
> par(mfrow=c(1,1))
> plot(mean.cv.errors, type='b')

We see that cross-validation selects an 11-variable model. We now perform best subset selection on the full data set in order to obtain the 11-variable model.

> reg.best=regsubsets(Salary~., data=Hitters, nvmax=19)
> coef(reg.best,11)
(Intercept)       AtBat        Hits       Walks      CAtBat
    135.751      -2.128       6.924       5.620      -0.139
      CRuns        CRBI      CWalks     LeagueN   DivisionW
      1.455       0.785      -0.823      43.112    -111.146
    PutOuts     Assists
      0.289       0.269

6.6 Lab 2: Ridge Regression and the Lasso

We will use the glmnet package in order to perform ridge regression and the lasso. The main function in this package is glmnet(), which can be used to fit ridge regression models, lasso models, and more. This function has slightly different syntax from other model-fitting functions that we have encountered thus far in this book. In particular, we must pass in an x matrix as well as a y vector, and we do not use the y ~ x syntax. We will now perform ridge regression and the lasso in order to predict Salary on the Hitters data. Before proceeding, ensure that the missing values have been removed from the data, as described in Section 6.5.

> x=model.matrix(Salary~., Hitters)[,-1]
> y=Hitters$Salary

The model.matrix() function is particularly useful for creating x; not only does it produce a matrix corresponding to the 19 predictors but it also automatically transforms any qualitative variables into dummy variables. The latter property is important because glmnet() can only take numerical, quantitative inputs.

6.6.1 Ridge Regression

The glmnet() function has an alpha argument that determines what type of model is fit. If alpha=0 then a ridge regression model is fit, and if alpha=1 then a lasso model is fit. We first fit a ridge regression model.

> library(glmnet)
> grid=10^seq(10, -2, length=100)
> ridge.mod=glmnet(x, y, alpha=0, lambda=grid)

By default the glmnet() function performs ridge regression for an automatically selected range of λ values. However, here we have chosen to implement the function over a grid of values ranging from λ = 10^10 to λ = 10^−2, essentially covering the full range of scenarios from the null model containing only the intercept, to the least squares fit. As we will see, we can also compute model fits for a particular value of λ that is not one of the original grid values. Note that by default, the glmnet() function standardizes the variables so that they are on the same scale. To turn off this default setting, use the argument standardize=FALSE.

Associated with each value of λ is a vector of ridge regression coefficients, stored in a matrix that can be accessed by coef(). In this case, it is a 20 × 100

267 252 6. Linear Model Selection and Regularization matrix, with 20 rows (one for each predictor, plus an intercept) and 100 columns (one for each value of λ ). > dim(coef(ridge.mod)) [1] 20 100 We expect the coefficient estimates to be much smaller, in terms of norm,  2 is used, as compared to when a small value of λ when a large value of λ is λ =11 , 498, along with their  used. These are the coefficients when norm: 2 > ridge.mod$lambda [50] [1] 11498 > coef(ridge.mod)[ ,50] (Intercept) AtBat Hits HmRun Runs 407.356 0.037 0.138 0.525 0.231 RBI Walks Years CAtBat CHits 0.240 0.290 1.108 0.003 0.012 CHmRun CRuns CRBI CWalks LeagueN 0.088 0.023 0.024 0.025 0.085 DivisionW PutOuts Assists Errors NewLeagueN -6.215 -0.021 0.301 0.016 0.003 > sqrt(sum(coef(ridge.mod)[-1,50]^2)) [1] 6.36 In contrast, here are the coefficients when λ = 705, along with their  2 norm. Note the much larger  norm of the coefficients associated with this 2 smaller value of . λ > ridge.mod$lambda [60] [1] 705 ,60] > coef(ridge.mod)[ (Intercept) AtBat Hits HmRun Runs 54.325 0.112 0.656 1.180 0.938 RBI Walks Years CAtBat CHits 0.847 1.320 2.596 0.011 0.047 CHmRun CRuns CRBI CWalks LeagueN 0.338 0.094 0.098 0.072 13.684 DivisionW PutOuts Assists Errors NewLeagueN -54.659 0.119 0.016 -0.704 8.612 > sqrt(sum(coef(ridge.mod)[-1,60]^2)) [1] 57.1 We can use the predict() function for a number of purposes. For instance, we can obtain the ridge regression coefficients for a new value of , say 50: λ > predict(ridge.mod,s=50,type=" coefficients") [1:20,] (Intercept) AtBat Hits HmRun Runs 48.766 -0.358 1.969 -1.278 1.146 RBI Walks Years CAtBat CHits 0.804 2.716 -6 .218 0.005 0.106 CHmRun CRuns CRBI CWalks LeagueN 0.624 0.221 0.219 -0.150 45.926 DivisionW PutOuts Assists Errors NewLeagueN -118.201 0.250 0.122 -3.279 -9.497
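Continuing the pattern above (our own check, not in the original text), we can compute the ℓ2 norm of these coefficients as well; it should be larger still, since λ = 50 imposes even less shrinkage than λ = 705.

coef50 <- predict(ridge.mod, s = 50, type = "coefficients")[1:20, ]
sqrt(sum(coef50[-1]^2))   # l2 norm of the coefficients, excluding the intercept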

We now split the samples into a training set and a test set in order to estimate the test error of ridge regression and the lasso. There are two common ways to randomly split a data set. The first is to produce a random vector of TRUE, FALSE elements and select the observations corresponding to TRUE for the training data. The second is to randomly choose a subset of numbers between 1 and n; these can then be used as the indices for the training observations. The two approaches work equally well. We used the former method in Section 6.5.3. Here we demonstrate the latter approach.

We first set a random seed so that the results obtained will be reproducible.

> set.seed(1)
> train=sample(1:nrow(x), nrow(x)/2)
> test=(-train)
> y.test=y[test]

Next we fit a ridge regression model on the training set, and evaluate its MSE on the test set, using λ = 4. Note the use of the predict() function again. This time we get predictions for a test set, by replacing type="coefficients" with the newx argument.

> ridge.mod=glmnet(x[train,], y[train], alpha=0, lambda=grid, thresh=1e-12)
> ridge.pred=predict(ridge.mod, s=4, newx=x[test,])
> mean((ridge.pred-y.test)^2)
[1] 101037

The test MSE is 101037. Note that if we had instead simply fit a model with just an intercept, we would have predicted each test observation using the mean of the training observations. In that case, we could compute the test set MSE like this:

> mean((mean(y[train])-y.test)^2)
[1] 193253

We could also get the same result by fitting a ridge regression model with a very large value of λ. Note that 1e10 means 10^10.

> ridge.pred=predict(ridge.mod, s=1e10, newx=x[test,])
> mean((ridge.pred-y.test)^2)
[1] 193253

So fitting a ridge regression model with λ = 4 leads to a much lower test MSE than fitting a model with just an intercept. We now check whether there is any benefit to performing ridge regression with λ = 4 instead of just performing least squares regression. Recall that least squares is simply ridge regression with λ = 0.⁵

⁵ In order for glmnet() to yield the exact least squares coefficients when λ = 0, we use the argument exact=T when calling the predict() function. Otherwise, the predict() function will interpolate over the grid of λ values used in fitting the glmnet() model, yielding approximate results. When we use exact=T, there remains a slight discrepancy in the third decimal place between the output of glmnet() when λ = 0 and the output of lm(); this is due to numerical approximation on the part of glmnet().

> ridge.pred=predict(ridge.mod, s=0, newx=x[test,], exact=T)
> mean((ridge.pred-y.test)^2)
[1] 114783
> lm(y~x, subset=train)
> predict(ridge.mod, s=0, exact=T, type="coefficients")[1:20,]

In general, if we want to fit a (unpenalized) least squares model, then we should use the lm() function, since that function provides more useful outputs, such as standard errors and p-values for the coefficients.

In general, instead of arbitrarily choosing λ = 4, it would be better to use cross-validation to choose the tuning parameter λ. We can do this using the built-in cross-validation function, cv.glmnet(). By default, the function performs ten-fold cross-validation, though this can be changed using the argument nfolds. Note that we set a random seed first so our results will be reproducible, since the choice of the cross-validation folds is random.

> set.seed(1)
> cv.out=cv.glmnet(x[train,], y[train], alpha=0)
> plot(cv.out)
> bestlam=cv.out$lambda.min
> bestlam
[1] 212

Therefore, we see that the value of λ that results in the smallest cross-validation error is 212. What is the test MSE associated with this value of λ?

> ridge.pred=predict(ridge.mod, s=bestlam, newx=x[test,])
> mean((ridge.pred-y.test)^2)
[1] 96016

This represents a further improvement over the test MSE that we got using λ = 4. Finally, we refit our ridge regression model on the full data set, using the value of λ chosen by cross-validation, and examine the coefficient estimates.

> out=glmnet(x, y, alpha=0)
> predict(out, type="coefficients", s=bestlam)[1:20,]
(Intercept)       AtBat        Hits       HmRun        Runs
     9.8849      0.0314      1.0088      0.1393      1.1132
        RBI       Walks       Years      CAtBat       CHits
     0.8732      1.8041      0.1307      0.0111      0.0649
     CHmRun       CRuns        CRBI      CWalks     LeagueN
     0.4516      0.1290      0.1374      0.0291     27.1823
  DivisionW     PutOuts     Assists      Errors  NewLeagueN
   -91.6341      0.1915      0.0425     -1.8124      7.2121

As expected, none of the coefficients are zero; ridge regression does not perform variable selection!

6.6.2 The Lasso

We saw that ridge regression with a wise choice of λ can outperform least squares as well as the null model on the Hitters data set. We now ask whether the lasso can yield either a more accurate or a more interpretable model than ridge regression. In order to fit a lasso model, we once again use the glmnet() function; however, this time we use the argument alpha=1. Other than that change, we proceed just as we did in fitting a ridge model.

> lasso.mod=glmnet(x[train,], y[train], alpha=1, lambda=grid)
> plot(lasso.mod)

We can see from the coefficient plot that depending on the choice of tuning parameter, some of the coefficients will be exactly equal to zero. We now perform cross-validation and compute the associated test error.

> set.seed(1)
> cv.out=cv.glmnet(x[train,], y[train], alpha=1)
> plot(cv.out)
> bestlam=cv.out$lambda.min
> lasso.pred=predict(lasso.mod, s=bestlam, newx=x[test,])
> mean((lasso.pred-y.test)^2)
[1] 100743

This is substantially lower than the test set MSE of the null model and of least squares, and very similar to the test MSE of ridge regression with λ chosen by cross-validation.

However, the lasso has a substantial advantage over ridge regression in that the resulting coefficient estimates are sparse. Here we see that 12 of the 19 coefficient estimates are exactly zero. So the lasso model with λ chosen by cross-validation contains only seven variables.

> out=glmnet(x, y, alpha=1, lambda=grid)
> lasso.coef=predict(out, type="coefficients", s=bestlam)[1:20,]
> lasso.coef
(Intercept)       AtBat        Hits       HmRun        Runs
     18.539       0.000       1.874       0.000       0.000
        RBI       Walks       Years      CAtBat       CHits
      0.000       2.218       0.000       0.000       0.000
     CHmRun       CRuns        CRBI      CWalks     LeagueN
      0.000       0.207       0.413       0.000       3.267
  DivisionW     PutOuts     Assists      Errors  NewLeagueN
   -103.485       0.220       0.000       0.000       0.000
> lasso.coef[lasso.coef!=0]
(Intercept)        Hits       Walks       CRuns        CRBI
     18.539       1.874       2.218       0.207       0.413
    LeagueN   DivisionW     PutOuts
      3.267    -103.485       0.220
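To count the zero coefficients directly (a small addition of ours), we can do the following; the counts should agree with the 12 zero estimates and seven selected variables noted above.

sum(lasso.coef == 0)       # number of coefficients estimated to be exactly zero
sum(lasso.coef != 0) - 1   # number of selected variables, excluding the intercept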

6.7 Lab 3: PCR and PLS Regression

6.7.1 Principal Components Regression

Principal components regression (PCR) can be performed using the pcr() function, which is part of the pls library. We now apply PCR to the Hitters data, in order to predict Salary. Again, ensure that the missing values have been removed from the data, as described in Section 6.5.

> library(pls)
> set.seed(2)
> pcr.fit=pcr(Salary~., data=Hitters, scale=TRUE, validation="CV")

The syntax for the pcr() function is similar to that for lm(), with a few additional options. Setting scale=TRUE has the effect of standardizing each predictor, using (6.6), prior to generating the principal components, so that the scale on which each variable is measured will not have an effect. Setting validation="CV" causes pcr() to compute the ten-fold cross-validation error for each possible value of M, the number of principal components used. The resulting fit can be examined using summary().

> summary(pcr.fit)
Data:   X dimension: 263 19
        Y dimension: 263 1
Fit method: svdpc
Number of components considered: 19

VALIDATION: RMSEP
Cross-validated using 10 random segments.
       (Intercept)  1 comps  2 comps  3 comps  4 comps
CV             452    348.9    352.2    353.5    352.8
adjCV          452    348.7    351.8    352.9    352.1
...

TRAINING: % variance explained
        1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
X         38.31    60.16    70.84    79.03    84.29    88.63
Salary    40.63    41.58    42.17    43.22    44.90    46.48
...

The CV score is provided for each possible number of components, ranging from M = 0 onwards. (We have printed the CV output only up to M = 4.) Note that pcr() reports the root mean squared error; in order to obtain the usual MSE, we must square this quantity. For instance, a root mean squared error of 352.8 corresponds to an MSE of 352.8² = 124,468.
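As a quick arithmetic check (ours, not the book's), squaring a reported CV root mean squared error recovers the corresponding MSE:

352.8^2    # approximately 124,468, the CV MSE corresponding to an RMSEP of 352.8
348.9^2    # the CV MSE for the one-component model reported above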

One can also plot the cross-validation scores using the validationplot() function. Using val.type="MSEP" will cause the cross-validation MSE to be plotted.

> validationplot(pcr.fit, val.type="MSEP")

We see that the smallest cross-validation error occurs when M = 16 components are used. This is barely fewer than M = 19, which amounts to simply performing least squares, because when all of the components are used in PCR no dimension reduction occurs. However, from the plot we also see that the cross-validation error is roughly the same when only one component is included in the model. This suggests that a model that uses just a small number of components might suffice.

The summary() function also provides the percentage of variance explained in the predictors and in the response using different numbers of components. This concept is discussed in greater detail in Chapter 10. Briefly, we can think of this as the amount of information about the predictors or the response that is captured using M principal components. For example, setting M = 1 only captures 38.31% of all the variance, or information, in the predictors. In contrast, using M = 6 increases the value to 88.63%. If we were to use all M = p = 19 components, this would increase to 100%.

We now perform PCR on the training data and evaluate its test set performance.

> set.seed(1)
> pcr.fit=pcr(Salary~., data=Hitters, subset=train, scale=TRUE, validation="CV")
> validationplot(pcr.fit, val.type="MSEP")

Now we find that the lowest cross-validation error occurs when M = 7 components are used. We compute the test MSE as follows.

> pcr.pred=predict(pcr.fit, x[test,], ncomp=7)
> mean((pcr.pred-y.test)^2)
[1] 96556

This test set MSE is competitive with the results obtained using ridge regression and the lasso. However, as a result of the way PCR is implemented, the final model is more difficult to interpret because it does not perform any kind of variable selection or even directly produce coefficient estimates.

Finally, we fit PCR on the full data set, using M = 7, the number of components identified by cross-validation.

> pcr.fit=pcr(y~x, scale=TRUE, ncomp=7)
> summary(pcr.fit)
Data:   X dimension: 263 19
        Y dimension: 263 1
Fit method: svdpc
Number of components considered: 7

TRAINING: % variance explained
   1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps
X    38.31    60.16    70.84    79.03    84.29    88.63    92.26
y    40.63    41.58    42.17    43.22    44.90    46.48    46.69

6.7.2 Partial Least Squares

We implement partial least squares (PLS) using the plsr() function, also in the pls library. The syntax is just like that of the pcr() function.

> set.seed(1)
> pls.fit=plsr(Salary~., data=Hitters, subset=train, scale=TRUE, validation="CV")
> summary(pls.fit)
Data:   X dimension: 131 19
        Y dimension: 131 1
Fit method: kernelpls
Number of components considered: 19

VALIDATION: RMSEP
Cross-validated using 10 random segments.
       (Intercept)  1 comps  2 comps  3 comps  4 comps
CV           464.6    394.2    391.5    393.1    395.0
adjCV        464.6    393.4    390.2    391.1    392.9
...

TRAINING: % variance explained
        1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
X         38.12    53.46    66.05    74.49    79.33    84.56
Salary    33.58    38.96    41.57    42.43    44.04    45.59
...
> validationplot(pls.fit, val.type="MSEP")

The lowest cross-validation error occurs when only M = 2 partial least squares directions are used. We now evaluate the corresponding test set MSE.

> pls.pred=predict(pls.fit, x[test,], ncomp=2)
> mean((pls.pred-y.test)^2)
[1] 101417

The test MSE is comparable to, but slightly higher than, the test MSE obtained using ridge regression, the lasso, and PCR.

Finally, we perform PLS using the full data set, using M = 2, the number of components identified by cross-validation.

> pls.fit=plsr(Salary~., data=Hitters, scale=TRUE, ncomp=2)
> summary(pls.fit)
Data:   X dimension: 263 19
        Y dimension: 263 1
Fit method: kernelpls
Number of components considered: 2

TRAINING: % variance explained
        1 comps  2 comps
X         38.08    51.03
Salary    43.05    46.40

Notice that the percentage of variance in Salary that the two-component PLS fit explains, 46.40%, is almost as much as that explained using the final seven-component PCR fit, 46.69%.

This is because PCR only attempts to maximize the amount of variance explained in the predictors, while PLS searches for directions that explain variance in both the predictors and the response.

6.8 Exercises

Conceptual

1. We perform best subset, forward stepwise, and backward stepwise selection on a single data set. For each approach, we obtain p + 1 models, containing 0, 1, 2, ..., p predictors. Explain your answers:

(a) Which of the three models with k predictors has the smallest training RSS?

(b) Which of the three models with k predictors has the smallest test RSS?

(c) True or False:
i. The predictors in the k-variable model identified by forward stepwise are a subset of the predictors in the (k+1)-variable model identified by forward stepwise selection.
ii. The predictors in the k-variable model identified by backward stepwise are a subset of the predictors in the (k+1)-variable model identified by backward stepwise selection.
iii. The predictors in the k-variable model identified by backward stepwise are a subset of the predictors in the (k+1)-variable model identified by forward stepwise selection.
iv. The predictors in the k-variable model identified by forward stepwise are a subset of the predictors in the (k+1)-variable model identified by backward stepwise selection.
v. The predictors in the k-variable model identified by best subset are a subset of the predictors in the (k+1)-variable model identified by best subset selection.

2. For parts (a) through (c), indicate which of i. through iv. is correct. Justify your answer.

(a) The lasso, relative to least squares, is:
i. More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.

iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
iv. Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.

(b) Repeat (a) for ridge regression relative to least squares.

(c) Repeat (a) for non-linear methods relative to least squares.

3. Suppose we estimate the regression coefficients in a linear regression model by minimizing

$$\sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2 \quad \text{subject to} \quad \sum_{j=1}^{p}|\beta_j| \le s$$

for a particular value of s. For parts (a) through (e), indicate which of i. through v. is correct. Justify your answer.

(a) As we increase s from 0, the training RSS will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.

(b) Repeat (a) for test RSS.

(c) Repeat (a) for variance.

(d) Repeat (a) for (squared) bias.

(e) Repeat (a) for the irreducible error.

4. Suppose we estimate the regression coefficients in a linear regression model by minimizing

$$\sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2 + \lambda\sum_{j=1}^{p}\beta_j^2$$

for a particular value of λ. For parts (a) through (e), indicate which of i. through v. is correct. Justify your answer.

(a) As we increase λ from 0, the training RSS will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.

(b) Repeat (a) for test RSS.

(c) Repeat (a) for variance.

(d) Repeat (a) for (squared) bias.

(e) Repeat (a) for the irreducible error.

5. It is well-known that ridge regression tends to give similar coefficient values to correlated variables, whereas the lasso may give quite different coefficient values to correlated variables. We will now explore this property in a very simple setting.

Suppose that $n = 2$, $p = 2$, $x_{11} = x_{12}$, $x_{21} = x_{22}$. Furthermore, suppose that $y_1 + y_2 = 0$ and $x_{11} + x_{21} = 0$ and $x_{12} + x_{22} = 0$, so that the estimate for the intercept in a least squares, ridge regression, or lasso model is zero: $\hat{\beta}_0 = 0$.

(a) Write out the ridge regression optimization problem in this setting.

(b) Argue that in this setting, the ridge coefficient estimates satisfy $\hat{\beta}_1 = \hat{\beta}_2$.

(c) Write out the lasso optimization problem in this setting.

(d) Argue that in this setting, the lasso coefficients $\hat{\beta}_1$ and $\hat{\beta}_2$ are not unique; in other words, there are many possible solutions to the optimization problem in (c). Describe these solutions.

6. We will now explore (6.12) and (6.13) further.

(a) Consider (6.12) with $p = 1$. For some choice of $y_1$ and $\lambda > 0$, plot (6.12) as a function of $\beta_1$. Your plot should confirm that (6.12) is solved by (6.14).

(b) Consider (6.13) with $p = 1$. For some choice of $y_1$ and $\lambda > 0$, plot (6.13) as a function of $\beta_1$. Your plot should confirm that (6.13) is solved by (6.15).

277 262 6. Linear Model Selection and Regularization 7. We will now derive the Bayesian connection to the lasso and ridge regression discussed in Section 6.2.2. ∑ p β are inde- + (a) Suppose that = ,..., y x  β where +  i 1 ij i j n 0 j =1 2 ) distribution. pendent and identically distributed from a (0 ,σ N Write out the likelihood for the data. ,...,β are independent (b) Assume the following prior for : β β 1 p and identically distributed according to a double-exponential b distribution with mean 0 and common scale parameter : i.e. 1 | in this exp( −| β /b ). Write out the posterior for β ( p )= β b 2 setting. mode for (c) Argue that the lasso estimate is the under this pos- β terior distribution. are independent ,...,β β : β (d) Now assume the following prior for p 1 and identically distributed according to a normal distribution with mean zero and variance c . Write out the posterior for β in this setting. (e) Argue that the ridge regression estimate is both the mode and the mean under this posterior distribution. for β Applied 8. In this exercise, we will generate simulated data, and will then use this data to perform best subset selection. rnorm() function to generate a predictor X of length (a) Use the = 100, as well as a noise vector  of length n = 100. n Y of length n = 100 according to (b) Generate a response vector the model 3 2 X , + β β X + β + X + β = Y 2 0 1 3 β where , β are constants of your choice. , β β ,and 0 3 1 2 (c) Use the regsubsets() function to perform best subset selection in order to choose the best model containing the predictors 2 10 X,X ,...,X . What is the best model obtained according to 2 R ? Show some plots to provide evidence ,BIC,andadjusted C p for your answer, and report the coefficients of the best model ob- function to data.frame() tained. Note you will need to use the create a single data set containing both X and Y .

278 6.8 Exercises 263 (d) Repeat (c), using forward stepwise selection and also using back- wards stepwise selection. How does your answer compare to the results in (c)? 2 X,X (e) Now fit a lasso model to the simulated data, again using , 10 ...,X lidation to select the optimal as predictors. Use cross-va . Create plots of the cross-validation error as a function λ value of . Report the resulting coefficient estimates, and discuss the of λ results obtained. Y according to the model (f) Now generate a response vector 7 , + β + X Y β = 7 0 and perform best subset selection and the lasso. Discuss the results obtained. 9. In this exercise, we will predict t he number of applications received using the other variables in the College data set. (a) Split the data set into a training set and a test set. (b) Fit a linear model using least squares on the training set, and report the test error obtained. (c) Fit a ridge regression model on the training set, with λ chosen by cross-validation. Report the test error obtained. (d) Fit a lasso model on the training set, with chosen by cross- λ validation. Report the test error obtained, along with the num- ber of non-zero coefficient estimates. M chosen by cross- (e) Fit a PCR model on the training set, with validation. Report the test error obtained, along with the value M selected by cross-validation. of M chosen by cross- (f) Fit a PLS model on the training set, with validation. Report the test error obtained, along with the value of M selected by cross-validation. (g) Comment on the results obtained. How accurately can we pre- dict the number of college applic ations received? Is there much difference among the test errors resulting from these five ap- proaches? 10. We have seen that as the number of features used in a model increases, the training error will necessarily decrease, but the test error may not. We will now explore this in a simulated data set. (a) Generate a data set with p =20features, n =1 , 000 observa- tions, and an associated quantitative response vector generated according to the model Y = Xβ + , where β has some elements that are exactly equal to zero.

279 264 6. Linear Model Selection and Regularization (b) Split your data set into a training set containing 100 observations and a test set containing 900 observations. (c) Perform best subset selection on the training set, and plot the training set MSE associated with the best model of each size. (d) Plot the test set MSE associated with the best model of each size. (e) For which model size does the test set MSE take on its minimum value? Comment on your results. If it takes on its minimum value for a model containing only an intercept or a model containing all of the features, then play around with the way that you are generating the data in (a) until you come up with a scenario in which the test set MSE is minimized for an intermediate model size. (f) How does the model at which the test set MSE is minimized compare to the true model used to generate the data? Comment on the coefficient values. √ ∑ p r 2 ˆ (g) Create a plot displaying β for a range of values − ) β ( j j j =1 r ˆ j is the th coefficient estimate for the best model of ,where β r j containing r coefficients. Comment on what you observe. How does this compare to the test MSE plot from (d)? data Boston 11. We will now try to predict per capita crime rate in the set. (a) Try out some of the regression methods explored in this chapter, such as best subset selection, the lasso, ridge regression, and PCR. Present and discuss results for the approaches that you consider. (b) Propose a model (or set of models) that seem to perform well on this data set, and justify your answer. Make sure that you are evaluating model performance us ing validation set error, cross- validation, or some other reasonable alternative, as opposed to using training error. (c) Does your chosen model involve all of the features in the data set? Why or why not?

7 Moving Beyond Linearity

So far in this book, we have mostly focused on linear models. Linear models are relatively simple to describe and implement, and have advantages over other approaches in terms of interpretation and inference. However, standard linear regression can have significant limitations in terms of predictive power. This is because the linearity assumption is almost always an approximation, and sometimes a poor one. In Chapter 6 we see that we can improve upon least squares using ridge regression, the lasso, principal components regression, and other techniques. In that setting, the improvement is obtained by reducing the complexity of the linear model, and hence the variance of the estimates. But we are still using a linear model, which can only be improved so far! In this chapter we relax the linearity assumption while still attempting to maintain as much interpretability as possible. We do this by examining very simple extensions of linear models like polynomial regression and step functions, as well as more sophisticated approaches such as splines, local regression, and generalized additive models.

• Polynomial regression extends the linear model by adding extra predictors, obtained by raising each of the original predictors to a power. For example, a cubic regression uses three variables, X, X², and X³, as predictors. This approach provides a simple way to provide a non-linear fit to data.

• Step functions cut the range of a variable into K distinct regions in order to produce a qualitative variable. This has the effect of fitting a piecewise constant function.

• Regression splines are more flexible than polynomials and step functions, and in fact are an extension of the two. They involve dividing the range of X into K distinct regions. Within each region, a polynomial function is fit to the data. However, these polynomials are constrained so that they join smoothly at the region boundaries, or knots. Provided that the interval is divided into enough regions, this can produce an extremely flexible fit.

• Smoothing splines are similar to regression splines, but arise in a slightly different situation. Smoothing splines result from minimizing a residual sum of squares criterion subject to a smoothness penalty.

• Local regression is similar to splines, but differs in an important way. The regions are allowed to overlap, and indeed they do so in a very smooth way.

• Generalized additive models allow us to extend the methods above to deal with multiple predictors.

In Sections 7.1–7.6, we present a number of approaches for modeling the relationship between a response Y and a single predictor X in a flexible way. In Section 7.7, we show that these approaches can be seamlessly integrated in order to model a response Y as a function of several predictors X_1, ..., X_p.

7.1 Polynomial Regression

Historically, the standard way to extend linear regression to settings in which the relationship between the predictors and the response is non-linear has been to replace the standard linear model

$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i$$

with a polynomial function

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3 + \dots + \beta_d x_i^d + \epsilon_i, \qquad (7.1)$$

where $\epsilon_i$ is the error term. This approach is known as polynomial regression, and in fact we saw an example of this method in Section 3.3.2. For a large enough degree d, a polynomial regression allows us to produce an extremely non-linear curve. Notice that the coefficients in (7.1) can be easily estimated using least squares linear regression because this is just a standard linear model with predictors $x_i, x_i^2, x_i^3, \dots, x_i^d$. Generally speaking, it is unusual to use d greater than 3 or 4 because for large values of d, the polynomial curve can become overly flexible and can take on some very strange shapes. This is especially true near the boundary of the X variable.
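As a short illustration, the following sketch fits a degree-4 polynomial of the kind displayed in Figure 7.1 below, using the Wage data from the ISLR package (the grid of ages and the object names are our own choices).

library(ISLR)
fit <- lm(wage ~ poly(age, 4), data = Wage)   # degree-4 polynomial in age
coef(summary(fit))                            # the fitted coefficients

# Predictions (with standard errors) over a grid of ages
age.grid <- seq(from = min(Wage$age), to = max(Wage$age))
preds <- predict(fit, newdata = list(age = age.grid), se = TRUE)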

FIGURE 7.1. The Wage data. Left: The solid blue curve is a degree-4 polynomial of wage (in thousands of dollars) as a function of age, fit by least squares. The dotted curves indicate an estimated 95 % confidence interval. Right: We model the binary event wage>250 using logistic regression, again with a degree-4 polynomial. The fitted posterior probability of wage exceeding $250,000 is shown in blue, along with an estimated 95 % confidence interval.

The left-hand panel in Figure 7.1 is a plot of wage against age for the Wage data set, which contains income and demographic information for males who reside in the central Atlantic region of the United States. We see the results of fitting a degree-4 polynomial using least squares (solid blue curve). Even though this is a linear regression model like any other, the individual coefficients are not of particular interest. Instead, we look at the entire fitted function across a grid of 62 values for age from 18 to 80 in order to understand the relationship between age and wage.

In Figure 7.1, a pair of dotted curves accompanies the fit; these are (2×) standard error curves. Let's see how these arise. Suppose we have computed the fit at a particular value of age, x_0:

f̂(x_0) = β̂_0 + β̂_1 x_0 + β̂_2 x_0^2 + β̂_3 x_0^3 + β̂_4 x_0^4.    (7.2)

What is the variance of the fit, i.e. Var f̂(x_0)? Least squares returns variance estimates for each of the fitted coefficients β̂_j, as well as the covariances between pairs of coefficient estimates. We can use these to compute the estimated variance of f̂(x_0).¹ The estimated pointwise standard error of f̂(x_0) is the square root of this variance.

¹ If ℓ_0 = (1, x_0, x_0^2, x_0^3, x_0^4)^T and Ĉ is the 5 × 5 covariance matrix of the β̂_j, then Var[f̂(x_0)] = ℓ_0^T Ĉ ℓ_0.

This computation is repeated at each reference point x_0, and we plot the fitted curve, as well as twice the standard error on either side of the fitted curve. We plot twice the standard error because, for normally distributed error terms, this quantity corresponds to an approximate 95 % confidence interval.

It seems like the wages in Figure 7.1 are from two distinct populations: there appears to be a high earners group earning more than $250,000 per annum, as well as a low earners group. We can treat wage as a binary variable by splitting it into these two groups. Logistic regression can then be used to predict this binary response, using polynomial functions of age as predictors. In other words, we fit the model

Pr(y_i > 250 | x_i) = exp(β_0 + β_1 x_i + β_2 x_i^2 + · · · + β_d x_i^d) / (1 + exp(β_0 + β_1 x_i + β_2 x_i^2 + · · · + β_d x_i^d)).    (7.3)

The result is shown in the right-hand panel of Figure 7.1. The gray marks on the top and bottom of the panel indicate the ages of the high earners and the low earners. The solid blue curve indicates the fitted probabilities of being a high earner, as a function of age. The estimated 95 % confidence interval is shown as well. We see that here the confidence intervals are fairly wide, especially on the right-hand side. Although the sample size for this data set is substantial (n = 3,000), there are only 79 high earners, which results in a high variance in the estimated coefficients and consequently wide confidence intervals.

7.2 Step Functions

Using polynomial functions of the features as predictors in a linear model imposes a global structure on the non-linear function of X. We can instead use step functions in order to avoid imposing such a global structure. Here we break the range of X into bins, and fit a different constant in each bin. This amounts to converting a continuous variable into an ordered categorical variable.

In greater detail, we create cutpoints c_1, c_2, . . . , c_K in the range of X, and then construct K + 1 new variables

C_0(X) = I(X < c_1),
C_1(X) = I(c_1 ≤ X < c_2),
C_2(X) = I(c_2 ≤ X < c_3),
· · ·
C_{K−1}(X) = I(c_{K−1} ≤ X < c_K),
C_K(X) = I(c_K ≤ X),    (7.4)

where I(·) is an indicator function that returns a 1 if the condition is true, and equals 0 otherwise.

FIGURE 7.2. The Wage data. Left: The solid curve displays the fitted value from a least squares regression of wage (in thousands of dollars) using step functions of age. The dotted curves indicate an estimated 95 % confidence interval. Right: We model the binary event wage>250 using logistic regression, again using step functions of age. The fitted posterior probability of wage exceeding $250,000 is shown, along with an estimated 95 % confidence interval.

These are sometimes called dummy variables. Notice that for any value of X, C_0(X) + C_1(X) + · · · + C_K(X) = 1, since X must be in exactly one of the K + 1 intervals. We then use least squares to fit a linear model using C_1(X), C_2(X), . . . , C_K(X) as predictors:

y_i = β_0 + β_1 C_1(x_i) + β_2 C_2(x_i) + · · · + β_K C_K(x_i) + ε_i.    (7.5)

For a given value of X, at most one of C_1, C_2, . . . , C_K can be non-zero. Note that when X < c_1, all of the predictors in (7.5) are zero, so β_0 can be interpreted as the mean value of Y for X < c_1.

We can also fit the logistic regression model

Pr(y_i > 250 | x_i) = exp(β_0 + β_1 C_1(x_i) + · · · + β_K C_K(x_i)) / (1 + exp(β_0 + β_1 C_1(x_i) + · · · + β_K C_K(x_i)))    (7.6)

in order to predict the probability that an individual is a high earner on the basis of age. The right-hand panel of Figure 7.2 displays the fitted posterior probabilities obtained using this approach.

Unfortunately, unless there are natural breakpoints in the predictors, piecewise-constant functions can miss the action. For example, in the left-hand panel of Figure 7.2, the first bin clearly misses the increasing trend of wage with age. Nevertheless, step function approaches are very popular in biostatistics and epidemiology, among other disciplines. For example, 5-year age groups are often used to define the bins.

7.3 Basis Functions

Polynomial and piecewise-constant regression models are in fact special cases of a basis function approach. The idea is to have at hand a family of functions or transformations that can be applied to a variable X: b_1(X), b_2(X), . . . , b_K(X). Instead of fitting a linear model in X, we fit the model

y_i = β_0 + β_1 b_1(x_i) + β_2 b_2(x_i) + β_3 b_3(x_i) + · · · + β_K b_K(x_i) + ε_i.    (7.7)

Note that the basis functions b_1(·), b_2(·), . . . , b_K(·) are fixed and known. (In other words, we choose the functions ahead of time.) For polynomial regression, the basis functions are b_j(x_i) = x_i^j, and for piecewise constant functions they are b_j(x_i) = I(c_j ≤ x_i < c_{j+1}). We can think of (7.7) as a standard linear model with predictors b_1(x_i), b_2(x_i), . . . , b_K(x_i). Hence, we can use least squares to estimate the unknown regression coefficients in (7.7).
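A minimal sketch of this point in R, using the Wage data from the ISLR package: both the polynomial and the piecewise-constant regressions are ordinary least squares fits on a set of constructed predictors, so lm() handles them once the basis has been built.

library(ISLR)
poly.fit <- lm(wage ~ poly(age, 3), data = Wage)   # basis functions b_j(x) = x^j
step.fit <- lm(wage ~ cut(age, 4), data = Wage)    # dummy variables for the bins, as in (7.4)-(7.5)
coef(summary(step.fit))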

7.4 Regression Splines

Now we discuss a flexible class of basis functions that extends upon the polynomial regression and piecewise constant regression approaches that we have just seen.

7.4.1 Piecewise Polynomials

Instead of fitting a high-degree polynomial over the entire range of X, piecewise polynomial regression involves fitting separate low-degree polynomials over different regions of X. For example, a piecewise cubic polynomial works by fitting a cubic regression model of the form

y_i = β_0 + β_1 x_i + β_2 x_i^2 + β_3 x_i^3 + ε_i,    (7.8)

where the coefficients β_0, β_1, β_2, and β_3 differ in different parts of the range of X. The points where the coefficients change are called knots.

For example, a piecewise cubic with no knots is just a standard cubic polynomial, as in (7.1) with d = 3. A piecewise cubic polynomial with a single knot at a point c takes the form

y_i = β_{01} + β_{11} x_i + β_{21} x_i^2 + β_{31} x_i^3 + ε_i   if x_i < c;
y_i = β_{02} + β_{12} x_i + β_{22} x_i^2 + β_{32} x_i^3 + ε_i   if x_i ≥ c.

FIGURE 7.3. Various piecewise polynomials are fit to a subset of the Wage data, with a knot at age=50. Top Left: The cubic polynomials are unconstrained. Top Right: The cubic polynomials are constrained to be continuous at age=50. Bottom Left: The cubic polynomials are constrained to be continuous, and to have continuous first and second derivatives. Bottom Right: A linear spline is shown, which is constrained to be continuous.

7.4.2 Constraints and Splines

The top left panel of Figure 7.3 shows an unconstrained piecewise cubic fit to a subset of the Wage data, with a single knot at age=50; the fitted curve is discontinuous at the knot. To remedy this, we can fit a piecewise polynomial under the constraint that the fitted curve must be continuous. In other words, there cannot be a jump when age=50. The top right plot in Figure 7.3 shows the resulting fit. This looks better than the top left plot, but the V-shaped join looks unnatural.

In the lower left plot, we have added two additional constraints: now both the first and second derivatives of the piecewise polynomials are continuous at age=50. In other words, we are requiring that the piecewise polynomial be not only continuous when age=50, but also very smooth. Each constraint that we impose on the piecewise cubic polynomials effectively frees up one degree of freedom, by reducing the complexity of the resulting piecewise polynomial fit. So in the top left plot, we are using eight degrees of freedom, but in the bottom left plot we imposed three constraints (continuity, continuity of the first derivative, and continuity of the second derivative) and so are left with five degrees of freedom.

The curve in the bottom left plot is called a cubic spline.³ In general, a cubic spline with K knots uses a total of 4 + K degrees of freedom.

In Figure 7.3, the lower right plot is a linear spline, which is continuous at age=50. The general definition of a degree-d spline is that it is a piecewise degree-d polynomial, with continuity in derivatives up to degree d − 1 at each knot. Therefore, a linear spline is obtained by fitting a line in each region of the predictor space defined by the knots, requiring continuity at each knot.

In Figure 7.3, there is a single knot at age=50. Of course, we could add more knots, and impose continuity at each.

7.4.3 The Spline Basis Representation

The regression splines that we just saw in the previous section may have seemed somewhat complex: how can we fit a piecewise degree-d polynomial under the constraint that it (and possibly its first d − 1 derivatives) be continuous? It turns out that we can use the basis model (7.7) to represent a regression spline. A cubic spline with K knots can be modeled as

y_i = β_0 + β_1 b_1(x_i) + β_2 b_2(x_i) + · · · + β_{K+3} b_{K+3}(x_i) + ε_i,    (7.9)

for an appropriate choice of basis functions b_1, b_2, . . . , b_{K+3}. The model (7.9) can then be fit using least squares.

Just as there were several ways to represent polynomials, there are also many equivalent ways to represent cubic splines using different choices of basis functions in (7.9). The most direct way to represent a cubic spline using (7.9) is to start off with a basis for a cubic polynomial, namely x, x^2, x^3, and then add one truncated power basis function per knot. A truncated power basis function is defined as

h(x, ξ) = (x − ξ)^3_+ = (x − ξ)^3 if x > ξ, and 0 otherwise,    (7.10)

where ξ is the knot. One can show that adding a term of the form β_4 h(x, ξ) to the model (7.8) for a cubic polynomial will lead to a discontinuity in only the third derivative at ξ; the function will remain continuous, with continuous first and second derivatives, at each of the knots.

In other words, in order to fit a cubic spline to a data set with K knots, we perform least squares regression with an intercept and 3 + K predictors, of the form X, X^2, X^3, h(X, ξ_1), h(X, ξ_2), . . . , h(X, ξ_K), where ξ_1, . . . , ξ_K are the knots. This amounts to estimating a total of K + 4 regression coefficients; for this reason, fitting a cubic spline with K knots uses K + 4 degrees of freedom.

³ Cubic splines are popular because most human eyes cannot detect the discontinuity at the knots.
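In R, a convenient spline basis is provided by the bs() function in the splines package; the minimal sketch below assumes the Wage data and uses illustrative knot locations. The B-spline basis produced by bs() represents the same family of cubic splines as the truncated power basis above, so the fitted values are unchanged.

library(ISLR)
library(splines)
# cubic spline with K = 3 knots: an intercept plus 3 + K = 6 basis functions,
# i.e. K + 4 = 7 regression coefficients in total
fit <- lm(wage ~ bs(age, knots = c(25, 40, 60)), data = Wage)
summary(fit)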

FIGURE 7.4. A cubic spline and a natural cubic spline, with three knots, fit to a subset of the Wage data.

Unfortunately, splines can have high variance at the outer range of the predictors, that is, when X takes on either a very small or very large value. Figure 7.4 shows a fit to the Wage data with three knots. We see that the confidence bands in the boundary region appear fairly wild. A natural spline is a regression spline with additional boundary constraints: the function is required to be linear at the boundary (in the region where X is smaller than the smallest knot, or larger than the largest knot). This additional constraint means that natural splines generally produce more stable estimates at the boundaries. In Figure 7.4, a natural cubic spline is also displayed as a red line. Note that the corresponding confidence intervals are narrower.

7.4.4 Choosing the Number and Locations of the Knots

When we fit a spline, where should we place the knots? The regression spline is most flexible in regions that contain a lot of knots, because in those regions the polynomial coefficients can change rapidly. Hence, one option is to place more knots in places where we feel the function might vary most rapidly, and to place fewer knots where it seems more stable. While this option can work well, in practice it is common to place knots in a uniform fashion. One way to do this is to specify the desired degrees of freedom, and then have the software automatically place the corresponding number of knots at uniform quantiles of the data.

Figure 7.5 shows an example on the Wage data. As in Figure 7.4, we have fit a natural cubic spline with three knots, except this time the knot locations were chosen automatically as the 25th, 50th, and 75th percentiles of age.

FIGURE 7.5. A natural cubic spline function with four degrees of freedom is fit to the Wage data. Left: A spline is fit to wage (in thousands of dollars) as a function of age. Right: Logistic regression is used to model the binary event wage>250 as a function of age. The fitted posterior probability of wage exceeding $250,000 is shown.

This was specified by requesting four degrees of freedom. The argument by which four degrees of freedom leads to three interior knots is somewhat technical.⁴

How many knots should we use, or equivalently how many degrees of freedom should our spline contain? One option is to try out different numbers of knots and see which produces the best looking curve. A somewhat more objective approach is to use cross-validation, as discussed in Chapters 5 and 6. With this method, we remove a portion of the data (say 10 %), fit a spline with a certain number of knots to the remaining data, and then use the spline to make predictions for the held-out portion. We repeat this process multiple times until each observation has been left out once, and then compute the overall cross-validated RSS. This procedure can be repeated for different numbers of knots K. Then the value of K giving the smallest RSS is chosen.

⁴ There are actually five knots, including the two boundary knots. A cubic spline with five knots would have nine degrees of freedom. But natural cubic splines have two additional natural constraints at each boundary to enforce linearity, resulting in 9 − 4 = 5 degrees of freedom. Since this includes a constant, which is absorbed in the intercept, we count it as four degrees of freedom.
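In R, this kind of automatic knot placement can be obtained with the ns() function in the splines package; a minimal sketch, assuming the Wage data.

library(ISLR)
library(splines)
fit <- lm(wage ~ ns(age, df = 4), data = Wage)   # natural cubic spline with four degrees of freedom
attr(ns(Wage$age, df = 4), "knots")              # interior knots at the 25th, 50th and 75th percentiles of age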

FIGURE 7.6. Ten-fold cross-validated mean squared errors for selecting the degrees of freedom when fitting splines to the Wage data. The response is wage and the predictor age. Left: A natural cubic spline. Right: A cubic spline.

Figure 7.6 shows ten-fold cross-validated mean squared errors for splines with various degrees of freedom fit to the Wage data. The left-hand panel corresponds to a natural spline and the right-hand panel to a cubic spline. The two methods produce almost identical results, with clear evidence that a one-degree fit (a linear regression) is not adequate. Both curves flatten out quickly, and it seems that three degrees of freedom for the natural spline and four degrees of freedom for the cubic spline are quite adequate.

In Section 7.7 we fit additive spline models simultaneously on several variables at a time. This could potentially require the selection of degrees of freedom for each variable. In cases like this we typically adopt a more pragmatic approach and set the degrees of freedom to a fixed number, say four, for all terms.

7.4.5 Comparison to Polynomial Regression

Regression splines often give superior results to polynomial regression. This is because unlike polynomials, which must use a high degree (exponent in the highest monomial term, e.g. X^15) to produce flexible fits, splines introduce flexibility by increasing the number of knots but keeping the degree fixed. Generally, this approach produces more stable estimates. Splines also allow us to place more knots, and hence flexibility, over regions where the function f seems to be changing rapidly, and fewer knots where f appears more stable. Figure 7.7 compares a natural cubic spline with 15 degrees of freedom to a degree-15 polynomial on the Wage data set. The extra flexibility in the polynomial produces undesirable results at the boundaries, while the natural cubic spline still provides a reasonable fit to the data.
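A minimal sketch of this comparison in R, assuming the Wage data; plotting the two sets of predictions over a grid of ages should reproduce the qualitative behavior in Figure 7.7, with the polynomial behaving erratically near the boundaries.

library(ISLR)
library(splines)
fit.spline <- lm(wage ~ ns(age, df = 15), data = Wage)   # natural cubic spline, 15 df
fit.poly   <- lm(wage ~ poly(age, 15), data = Wage)      # degree-15 polynomial
age.grid <- seq(min(Wage$age), max(Wage$age))
pred.spline <- predict(fit.spline, newdata = list(age = age.grid))
pred.poly   <- predict(fit.poly,   newdata = list(age = age.grid))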

FIGURE 7.7. On the Wage data set, a natural cubic spline with 15 degrees of freedom is compared to a degree-15 polynomial. Polynomials can show wild behavior, especially near the tails.

7.5 Smoothing Splines

7.5.1 An Overview of Smoothing Splines

In the last section we discussed regression splines, which we create by specifying a set of knots, producing a sequence of basis functions, and then using least squares to estimate the spline coefficients. We now introduce a somewhat different approach that also produces a spline.

In fitting a smooth curve to a set of data, what we really want to do is find some function, say g(x), that fits the observed data well: that is, we want RSS = Σ_{i=1}^n (y_i − g(x_i))^2 to be small. However, there is a problem with this approach. If we don't put any constraints on g(x_i), then we can always make RSS zero simply by choosing g such that it interpolates all of the y_i. Such a function would woefully overfit the data; it would be far too flexible. What we really want is a function g that makes RSS small, but that is also smooth.

How might we ensure that g is smooth? There are a number of ways to do this. A natural approach is to find the function g that minimizes

Σ_{i=1}^n (y_i − g(x_i))^2 + λ ∫ g′′(t)^2 dt,    (7.11)

where λ is a nonnegative tuning parameter. The function g that minimizes (7.11) is known as a smoothing spline.

What does (7.11) mean? Equation 7.11 takes the "Loss+Penalty" formulation that we encounter in the context of ridge regression and the lasso in Chapter 6. The term Σ_{i=1}^n (y_i − g(x_i))^2 is a loss function that encourages g to fit the data well, and the term λ ∫ g′′(t)^2 dt is a penalty term that penalizes the variability in g.

The notation g′′(t) indicates the second derivative of the function g. The first derivative g′(t) measures the slope of a function at t, and the second derivative corresponds to the amount by which the slope is changing. Hence, broadly speaking, the second derivative of a function is a measure of its roughness: it is large in absolute value if g(t) is very wiggly near t, and it is close to zero otherwise. (The second derivative of a straight line is zero; note that a line is perfectly smooth.) The ∫ notation is an integral, which we can think of as a summation over the range of t. In other words, ∫ g′′(t)^2 dt is simply a measure of the total change in the function g′(t), over its entire range. If g is very smooth, then g′(t) will be close to constant and ∫ g′′(t)^2 dt will take on a small value. Conversely, if g is jumpy and variable then g′(t) will vary significantly and ∫ g′′(t)^2 dt will take on a large value. Therefore, in (7.11), λ ∫ g′′(t)^2 dt encourages g to be smooth. The larger the value of λ, the smoother g will be.

When λ = 0, then the penalty term in (7.11) has no effect, and so the function g will be very jumpy and will exactly interpolate the training observations. When λ → ∞, g will be perfectly smooth; it will just be a straight line that passes as closely as possible to the training points. In fact, in this case, g will be the linear least squares line, since the loss function in (7.11) amounts to minimizing the residual sum of squares. For an intermediate value of λ, g will approximate the training observations but will be somewhat smooth. We see that λ controls the bias-variance trade-off of the smoothing spline.

The function g(x) that minimizes (7.11) can be shown to have some special properties: it is a piecewise cubic polynomial with knots at the unique values of x_1, . . . , x_n, and continuous first and second derivatives at each knot. Furthermore, it is linear in the region outside of the extreme knots. In other words, the function g(x) that minimizes (7.11) is a natural cubic spline with knots at x_1, . . . , x_n! However, it is not the same natural cubic spline that one would get if one applied the basis function approach described in Section 7.4.3 with knots at x_1, . . . , x_n; rather, it is a shrunken version of such a natural cubic spline, where the value of the tuning parameter λ in (7.11) controls the level of shrinkage.

7.5.2 Choosing the Smoothing Parameter λ

We have seen that a smoothing spline is simply a natural cubic spline with knots at every unique value of x_i. It might seem that a smoothing spline will have far too many degrees of freedom, since a knot at each data point allows a great deal of flexibility. But the tuning parameter λ controls the roughness of the smoothing spline, and hence the effective degrees of freedom. It is possible to show that as λ increases from 0 to ∞, the effective degrees of freedom, which we write df_λ, decrease from n to 2.

In the context of smoothing splines, why do we discuss effective degrees of freedom instead of degrees of freedom?

Usually degrees of freedom refer to the number of free parameters, such as the number of coefficients fit in a polynomial or cubic spline. Although a smoothing spline has n parameters and hence n nominal degrees of freedom, these n parameters are heavily constrained or shrunk down. Hence df_λ is a measure of the flexibility of the smoothing spline: the higher it is, the more flexible (and the lower-bias but higher-variance) the smoothing spline. The definition of effective degrees of freedom is somewhat technical. We can write

ĝ_λ = S_λ y,    (7.12)

where ĝ_λ is the solution to (7.11) for a particular choice of λ; that is, it is an n-vector containing the fitted values of the smoothing spline at the training points x_1, . . . , x_n. Equation 7.12 indicates that the vector of fitted values when applying a smoothing spline to the data can be written as an n × n matrix S_λ (for which there is a formula) times the response vector y. Then the effective degrees of freedom is defined to be

df_λ = Σ_{i=1}^n {S_λ}_{ii},    (7.13)

the sum of the diagonal elements of the matrix S_λ.

In fitting a smoothing spline, we do not need to select the number or location of the knots; there will be a knot at each training observation, x_1, . . . , x_n. Instead, we have another problem: we need to choose the value of λ. It should come as no surprise that one possible solution to this problem is cross-validation. In other words, we can find the value of λ that makes the cross-validated RSS as small as possible. It turns out that the leave-one-out cross-validation error (LOOCV) can be computed very efficiently for smoothing splines, with essentially the same cost as computing a single fit, using the following formula:

RSS_cv(λ) = Σ_{i=1}^n (y_i − ĝ_λ^(−i)(x_i))^2 = Σ_{i=1}^n [ (y_i − ĝ_λ(x_i)) / (1 − {S_λ}_{ii}) ]^2.

The notation ĝ_λ^(−i)(x_i) indicates the fitted value for this smoothing spline evaluated at x_i, where the fit uses all of the training observations except for the ith observation (x_i, y_i). In contrast, ĝ_λ(x_i) indicates the smoothing spline function fit to all of the training observations and evaluated at x_i. This remarkable formula says that we can compute each of these leave-one-out fits using only ĝ_λ, the original fit to all of the data!⁵ We have a very similar formula (5.2) on page 180 in Chapter 5 for least squares linear regression. Using (5.2), we can very quickly perform LOOCV for the regression splines discussed earlier in this chapter, as well as for least squares regression using arbitrary basis functions.

⁵ The exact formulas for computing ĝ_λ(x_i) and S_λ are very technical; however, efficient algorithms are available for computing these quantities.
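In R, smoothing splines of this kind can be fit with the smooth.spline() function; a minimal sketch, assuming the Wage data, where setting cv = TRUE requests ordinary leave-one-out cross-validation as in the formula above.

library(ISLR)
fit16 <- smooth.spline(Wage$age, Wage$wage, df = 16)    # pre-specified effective degrees of freedom
fitcv <- smooth.spline(Wage$age, Wage$wage, cv = TRUE)  # lambda chosen by LOOCV
fitcv$df                                                # resulting effective degrees of freedom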

FIGURE 7.8. Smoothing spline fits to the Wage data. The red curve results from specifying 16 effective degrees of freedom. For the blue curve, λ was found automatically by leave-one-out cross-validation, which resulted in 6.8 effective degrees of freedom.

Figure 7.8 shows the results from fitting a smoothing spline to the Wage data. The red curve indicates the fit obtained from pre-specifying that we would like a smoothing spline with 16 effective degrees of freedom. The blue curve is the smoothing spline obtained when λ is chosen using LOOCV; in this case, the value of λ chosen results in 6.8 effective degrees of freedom (computed using (7.13)). For this data, there is little discernible difference between the two smoothing splines, beyond the fact that the one with 16 degrees of freedom seems slightly wigglier. Since there is little difference between the two fits, the smoothing spline fit with 6.8 degrees of freedom is preferable, since in general simpler models are better unless the data provides evidence in support of a more complex model.

7.6 Local Regression

Local regression is a different approach for fitting flexible non-linear functions, which involves computing the fit at a target point x_0 using only the nearby training observations. Figure 7.9 illustrates the idea on some simulated data, with one target point near 0.4, and another near the boundary at 0.05. In this figure the blue line represents the function f(x) from which the data were generated, and the light orange line corresponds to the local regression estimate f̂(x). Local regression is described in Algorithm 7.1.

Note that in Step 3 of Algorithm 7.1, the weights K_{i0} will differ for each value of x_0. In other words, in order to obtain the local regression fit at a new point, we need to fit a new weighted least squares regression model by minimizing (7.14) for a new set of weights.

FIGURE 7.9. Local regression illustrated on some simulated data, where the blue curve represents f(x) from which the data were generated, and the light orange curve corresponds to the local regression estimate f̂(x). The orange colored points are local to the target point x_0, represented by the orange vertical line. The yellow bell-shape superimposed on the plot indicates weights assigned to each point, decreasing to zero with distance from the target point. The fit f̂(x_0) at x_0 is obtained by fitting a weighted linear regression (orange line segment), and using the fitted value at x_0 (orange solid dot) as the estimate f̂(x_0).

Local regression is sometimes referred to as a memory-based procedure, because like nearest-neighbors, we need all the training data each time we wish to compute a prediction. We will avoid getting into the technical details of local regression here; there are books written on the topic.

In order to perform local regression, there are a number of choices to be made, such as how to define the weighting function K, and whether to fit a linear, constant, or quadratic regression in Step 3 above. (Equation 7.14 corresponds to a linear regression.) While all of these choices make some difference, the most important choice is the span s, defined in Step 1 above. The span plays a role like that of the tuning parameter λ in smoothing splines: it controls the flexibility of the non-linear fit. The smaller the value of s, the more local and wiggly will be our fit; alternatively, a very large value of s will lead to a global fit to the data using all of the training observations. We can again use cross-validation to choose s, or we can specify it directly. Figure 7.10 displays local linear regression fits on the Wage data, using two values of s: 0.7 and 0.2. As expected, the fit obtained using s = 0.7 is smoother than that obtained using s = 0.2.

The idea of local regression can be generalized in many different ways. In a setting with multiple features X_1, X_2, . . . , X_p, one very useful generalization involves fitting a multiple linear regression model that is global in some variables, but local in another, such as time. Such varying coefficient models are a useful way of adapting a model to the most recently gathered data.

Algorithm 7.1 Local Regression At X = x_0

1. Gather the fraction s = k/n of training points whose x_i are closest to x_0.

2. Assign a weight K_{i0} = K(x_i, x_0) to each point in this neighborhood, so that the point furthest from x_0 has weight zero, and the closest has the highest weight. All but these k nearest neighbors get weight zero.

3. Fit a weighted least squares regression of the y_i on the x_i using the aforementioned weights, by finding β̂_0 and β̂_1 that minimize

Σ_{i=1}^n K_{i0} (y_i − β_0 − β_1 x_i)^2.    (7.14)

4. The fitted value at x_0 is given by f̂(x_0) = β̂_0 + β̂_1 x_0.

Local regression also generalizes very naturally when we want to fit models that are local in a pair of variables X_1 and X_2, rather than one. We can simply use two-dimensional neighborhoods, and fit bivariate linear regression models using the observations that are near each target point in two-dimensional space. Theoretically the same approach can be implemented in higher dimensions, using linear regressions fit to p-dimensional neighborhoods. However, local regression can perform poorly if p is much larger than about 3 or 4 because there will generally be very few training observations close to x_0. Nearest-neighbors regression, discussed in Chapter 3, suffers from a similar problem in high dimensions.

7.7 Generalized Additive Models

In Sections 7.1–7.6, we present a number of approaches for flexibly predicting a response Y on the basis of a single predictor X. These approaches can be seen as extensions of simple linear regression. Here we explore the problem of flexibly predicting Y on the basis of several predictors, X_1, . . . , X_p. This amounts to an extension of multiple linear regression.

Generalized additive models (GAMs) provide a general framework for extending a standard linear model by allowing non-linear functions of each of the variables, while maintaining additivity. Just like linear models, GAMs can be applied with both quantitative and qualitative responses. We first examine GAMs for a quantitative response in Section 7.7.1, and then for a qualitative response in Section 7.7.2.
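The local linear fits of Section 7.6, as shown in Figure 7.10 below, can be produced with R's loess() function, where the span argument plays the role of s in Algorithm 7.1; a minimal sketch, assuming the Wage data.

library(ISLR)
fit.2 <- loess(wage ~ age, span = 0.2, degree = 1, data = Wage)   # local linear fit, small span
fit.7 <- loess(wage ~ age, span = 0.7, degree = 1, data = Wage)   # smoother fit with a larger span
age.grid <- seq(min(Wage$age), max(Wage$age))
predict(fit.7, data.frame(age = age.grid))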

FIGURE 7.10. Local linear fits to the Wage data. The span specifies the fraction of the data used to compute the fit at each target point.

7.7.1 GAMs for Regression Problems

A natural way to extend the multiple linear regression model

y_i = β_0 + β_1 x_{i1} + β_2 x_{i2} + · · · + β_p x_{ip} + ε_i

in order to allow for non-linear relationships between each feature and the response is to replace each linear component β_j x_{ij} with a (smooth) non-linear function f_j(x_{ij}). We would then write the model as

y_i = β_0 + Σ_{j=1}^p f_j(x_{ij}) + ε_i
    = β_0 + f_1(x_{i1}) + f_2(x_{i2}) + · · · + f_p(x_{ip}) + ε_i.    (7.15)

This is an example of a GAM. It is called an additive model because we calculate a separate f_j for each X_j, and then add together all of their contributions.

In Sections 7.1–7.6, we discuss many methods for fitting functions to a single variable. The beauty of GAMs is that we can use these methods as building blocks for fitting an additive model. In fact, for most of the methods that we have seen so far in this chapter, this can be done fairly trivially. Take, for example, natural splines, and consider the task of fitting the model

wage = β_0 + f_1(year) + f_2(age) + f_3(education) + ε.    (7.16)

Here year and age are quantitative variables, while education is a qualitative variable indicating the amount of high school or college education that an individual has completed. We fit the first two functions using natural splines. We fit the third function using a separate constant for each level, via the usual dummy variable approach of Section 3.3.1.

Figure 7.11 shows the results of fitting the model (7.16) using least squares. This is easy to do, since as discussed in Section 7.4, natural splines can be constructed using an appropriately chosen set of basis functions. Hence the entire model is just a big regression onto spline basis variables and dummy variables, all packed into one big regression matrix.

Figure 7.11 can be easily interpreted. The left-hand panel indicates that holding age and education fixed, wage tends to increase slightly with year; this may be due to inflation. The center panel indicates that holding education and year fixed, wage tends to be highest for intermediate values of age, and lowest for the very young and very old. The right-hand panel indicates that holding year and age fixed, wage tends to increase with education: the more educated a person is, the higher their salary, on average. All of these findings are intuitive.

Figure 7.12 shows a similar triple of plots, but this time f_1 and f_2 are smoothing splines with four and five degrees of freedom, respectively. Fitting a GAM with a smoothing spline is not quite as simple as fitting a GAM with a natural spline, since in the case of smoothing splines, least squares cannot be used. However, standard software such as the gam() function in R can be used to fit GAMs using smoothing splines, via an approach known as backfitting. This method fits a model involving multiple predictors by repeatedly updating the fit for each predictor in turn, holding the others fixed.
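A minimal sketch of fitting (7.16) in R: with natural splines the model is an ordinary least squares fit, while a smoothing-spline version can be fit with the gam() function mentioned above. Here s() denotes the gam package's smoothing-spline term, and the degrees of freedom are illustrative choices.

library(ISLR)
library(splines)
library(gam)
# natural-spline version of (7.16), fit by least squares
gam.ns <- lm(wage ~ ns(year, 4) + ns(age, 5) + education, data = Wage)
# smoothing-spline version, fit by backfitting
gam.ss <- gam(wage ~ s(year, 4) + s(age, 5) + education, data = Wage)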


Pros and Cons of GAMs

▲ The smoothness of the function f_j for the variable X_j can be summarized via degrees of freedom.

◆ The main limitation of GAMs is that the model is restricted to be additive. With many variables, important interactions can be missed. However, as with linear regression, we can manually add interaction terms to the GAM model by including additional predictors of the form X_j × X_k. In addition we can add low-dimensional interaction functions of the form f_jk(X_j, X_k) into the model; such terms can be fit using two-dimensional smoothers such as local regression, or two-dimensional splines (not covered here).

For fully general models, we have to look for even more flexible approaches such as random forests and boosting, described in Chapter 8. GAMs provide a useful compromise between linear and fully nonparametric models.

7.7.2 GAMs for Classification Problems

GAMs can also be used in situations where Y is qualitative. For simplicity, here we will assume Y takes on values zero or one, and let p(X) = Pr(Y = 1 | X) be the conditional probability (given the predictors) that the response equals one. Recall the logistic regression model (4.6):

log( p(X) / (1 − p(X)) ) = β_0 + β_1 X_1 + β_2 X_2 + · · · + β_p X_p.    (7.17)

This logit is the log of the odds of P(Y = 1 | X) versus P(Y = 0 | X), which (7.17) represents as a linear function of the predictors. A natural way to extend (7.17) to allow for non-linear relationships is to use the model

log( p(X) / (1 − p(X)) ) = β_0 + f_1(X_1) + f_2(X_2) + · · · + f_p(X_p).    (7.18)

Equation 7.18 is a logistic regression GAM. It has all the same pros and cons as discussed in the previous section for quantitative responses.

We fit a GAM to the Wage data in order to predict the probability that an individual's income exceeds $250,000 per year. The GAM that we fit takes the form

log( p(X) / (1 − p(X)) ) = β_0 + β_1 × year + f_2(age) + f_3(education),    (7.19)

where

p(X) = Pr(wage > 250 | year, age, education).
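A minimal sketch of fitting (7.19) with the gam package; I(wage > 250) constructs the binary response on the fly, and the specific terms (a linear term in year, a smoothing spline with five degrees of freedom in age, and a constant per level of education) mirror the fit shown in Figure 7.13.

library(ISLR)
library(gam)
gam.lr <- gam(I(wage > 250) ~ year + s(age, df = 5) + education,
              family = binomial, data = Wage)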

FIGURE 7.13. For the Wage data, the logistic regression GAM (7.19), with p(X) = Pr(wage > 250), is fit. Each plot displays the fitted function and pointwise standard errors. The first function is linear in year, the second a smoothing spline with five degrees of freedom in age, and the third a step function for education. There are very wide standard errors for the first level of education.

7.8 Lab: Non-linear Modeling

In this lab, we re-analyze the Wage data considered in the examples throughout this chapter. We begin by loading the ISLR library, which contains the data.

> library(ISLR)
> attach(Wage)

7.8.1 Polynomial Regression and Step Functions

We first fit a degree-4 polynomial regression model using the following command:

> fit=lm(wage~poly(age,4),data=Wage)
> coef(summary(fit))
                Estimate Std. Error t value Pr(>|t|)
(Intercept)      111.704      0.729  153.28   <2e-16
poly(age, 4)1    447.068     39.915   11.20   <2e-16
poly(age, 4)2   -478.316     39.915  -11.98   <2e-16
poly(age, 4)3    125.522     39.915    3.14   0.0017
poly(age, 4)4    -77.911     39.915   -1.95   0.0510

This syntax fits a linear model, using the lm() function, in order to predict wage using a fourth-degree polynomial in age: poly(age,4). The poly() command allows us to avoid having to write out a long formula with powers of age. The function returns a matrix whose columns are a basis of orthogonal polynomials, which essentially means that each column is a linear combination of the variables age, age^2, age^3 and age^4.

However, we can also use poly() to obtain age, age^2, age^3 and age^4 directly, if we prefer. We can do this by using the raw=TRUE argument to the poly() function. Later we see that this does not affect the model in a meaningful way: though the choice of basis clearly affects the coefficient estimates, it does not affect the fitted values obtained.

> fit2=lm(wage~poly(age,4,raw=T),data=Wage)
> coef(summary(fit2))
                         Estimate Std. Error t value Pr(>|t|)
(Intercept)             -1.84e+02   6.00e+01   -3.07 0.002180
poly(age, 4, raw = T)1   2.12e+01   5.89e+00    3.61 0.000312
poly(age, 4, raw = T)2  -5.64e-01   2.06e-01   -2.74 0.006261

poly(age, 4, raw = T)3   6.81e-03   3.07e-03    2.22 0.026398
poly(age, 4, raw = T)4  -3.20e-05   1.64e-05   -1.95 0.051039

There are several other equivalent ways of fitting this model, which showcase the flexibility of the formula language in R. For example

> fit2a=lm(wage~age+I(age^2)+I(age^3)+I(age^4),data=Wage)
> coef(fit2a)
(Intercept)         age    I(age^2)    I(age^3)    I(age^4)
  -1.84e+02    2.12e+01   -5.64e-01    6.81e-03   -3.20e-05

This simply creates the polynomial basis functions on the fly, taking care to protect terms like age^2 via the wrapper function I() (the ^ symbol has a special meaning in formulas).

> fit2b=lm(wage~cbind(age,age^2,age^3,age^4),data=Wage)

This does the same more compactly, using the cbind() function for building a matrix from a collection of vectors; any function call such as cbind() inside a formula also serves as a wrapper.

We now create a grid of values for age at which we want predictions, and then call the generic predict() function, specifying that we want standard errors as well.

> agelims=range(age)
> age.grid=seq(from=agelims[1],to=agelims[2])
> preds=predict(fit,newdata=list(age=age.grid),se=TRUE)
> se.bands=cbind(preds$fit+2*preds$se.fit,preds$fit-2*preds$se.fit)

Finally, we plot the data and add the fit from the degree-4 polynomial.

> par(mfrow=c(1,2),mar=c(4.5,4.5,1,1),oma=c(0,0,4,0))
> plot(age,wage,xlim=agelims,cex=.5,col="darkgrey")
> title("Degree-4 Polynomial",outer=T)
> lines(age.grid,preds$fit,lwd=2,col="blue")
> matlines(age.grid,se.bands,lwd=1,col="blue",lty=3)

Here the mar and oma arguments to par() allow us to control the margins of the plot, and the title() function creates a figure title that spans both subplots.

We mentioned earlier that whether or not an orthogonal set of basis functions is produced in the poly() function will not affect the model obtained in a meaningful way. What do we mean by this? The fitted values obtained in either case are identical:

> preds2=predict(fit2,newdata=list(age=age.grid),se=TRUE)
> max(abs(preds$fit-preds2$fit))
[1] 7.39e-13

In performing a polynomial regression we must decide on the degree of the polynomial to use. One way to do this is by using hypothesis tests. We now fit models ranging from linear to a degree-5 polynomial and seek to determine the simplest model which is sufficient to explain the relationship between wage and age.

We use the anova() function, which performs an analysis of variance (ANOVA, using an F-test) in order to test the null hypothesis that a model M_1 is sufficient to explain the data against the alternative hypothesis that a more complex model M_2 is required. In order to use the anova() function, M_1 and M_2 must be nested models: the predictors in M_1 must be a subset of the predictors in M_2. In this case, we fit five different models and sequentially compare the simpler model to the more complex model.

> fit.1=lm(wage~age,data=Wage)
> fit.2=lm(wage~poly(age,2),data=Wage)
> fit.3=lm(wage~poly(age,3),data=Wage)
> fit.4=lm(wage~poly(age,4),data=Wage)
> fit.5=lm(wage~poly(age,5),data=Wage)
> anova(fit.1,fit.2,fit.3,fit.4,fit.5)
Analysis of Variance Table

Model 1: wage ~ age
Model 2: wage ~ poly(age, 2)
Model 3: wage ~ poly(age, 3)
Model 4: wage ~ poly(age, 4)
Model 5: wage ~ poly(age, 5)
  Res.Df     RSS Df Sum of Sq      F Pr(>F)
1   2998 5022216
2   2997 4793430  1    228786 143.59 <2e-16 ***
3   2996 4777674  1     15756   9.89 0.0017 **
4   2995 4771604  1      6070   3.81 0.0510 .
5   2994 4770322  1      1283   0.80 0.3697
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The p-value comparing the linear Model 1 to the quadratic Model 2 is essentially zero (< 10^-15), indicating that a linear fit is not sufficient. Similarly the p-value comparing the quadratic Model 2 to the cubic Model 3 is very low (0.0017), so the quadratic fit is also insufficient. The p-value comparing the cubic and degree-4 polynomials, Model 3 and Model 4, is approximately 5 %, while the degree-5 polynomial Model 5 seems unnecessary because its p-value is 0.37. Hence, either a cubic or a quartic polynomial appear to provide a reasonable fit to the data, but lower- or higher-order models are not justified.

In this case, instead of using the anova() function, we could have obtained these p-values more succinctly by exploiting the fact that poly() creates orthogonal polynomials.

> coef(summary(fit.5))
                Estimate Std. Error  t value   Pr(>|t|)
(Intercept)       111.70     0.7288 153.2780  0.000e+00
poly(age, 5)1     447.07    39.9161  11.2002  1.491e-28
poly(age, 5)2    -478.32    39.9161 -11.9830  2.368e-32
poly(age, 5)3     125.52    39.9161   3.1446  1.679e-03

Next we consider the task of predicting whether an individual earns more than $250,000 per year. We proceed much as before, except that first we create the appropriate response vector, and then apply the glm() function using family="binomial" in order to fit a polynomial logistic regression model.
> fit=glm(I(wage>250)∼poly(age,4),data=Wage,family=binomial)
Note that we again use the wrapper I() to create this binary response variable on the fly. The expression wage>250 evaluates to a logical variable containing TRUEs and FALSEs, which glm() coerces to binary by setting the TRUEs to 1 and the FALSEs to 0.
Once again, we make predictions using the predict() function.
> preds=predict(fit,newdata=list(age=age.grid),se=T)
However, calculating the confidence intervals is slightly more involved than in the linear regression case. The default prediction type for a glm() model is type="link", which is what we use here. This means we get predictions for the logit: that is, we have fit a model of the form
$$\log\!\left(\frac{\Pr(Y=1\mid X)}{1-\Pr(Y=1\mid X)}\right) = X\beta,$$
and the predictions given are of the form $X\hat{\beta}$. The standard errors given are also of this form. In order to obtain confidence intervals for $\Pr(Y=1\mid X)$, we use the transformation
$$\Pr(Y=1\mid X) = \frac{\exp(X\beta)}{1+\exp(X\beta)}.$$

> pfit=exp(preds$fit)/(1+exp(preds$fit))
> se.bands.logit=cbind(preds$fit+2*preds$se.fit,preds$fit-2*preds$se.fit)
> se.bands=exp(se.bands.logit)/(1+exp(se.bands.logit))
Note that we could have directly computed the probabilities by selecting the type="response" option in the predict() function.
> preds=predict(fit,newdata=list(age=age.grid),type="response",se=T)
However, the corresponding confidence intervals would not have been sensible because we would end up with negative probabilities!
Finally, the right-hand plot from Figure 7.1 was made as follows:
> plot(age,I(wage>250),xlim=agelims,type="n",ylim=c(0,.2))
> points(jitter(age),I((wage>250)/5),cex=.5,pch="|",col="darkgrey")
> lines(age.grid,pfit,lwd=2,col="blue")
> matlines(age.grid,se.bands,lwd=1,col="blue",lty=3)
We have drawn the age values corresponding to the observations with wage values above 250 as gray marks on the top of the plot, and those with wage values below 250 are shown as gray marks on the bottom of the plot. We used the jitter() function to jitter the age values a bit so that observations with the same age value do not cover each other up. This is often called a rug plot.
In order to fit a step function, as discussed in Section 7.2, we use the cut() function.
> table(cut(age,4))
(17.9,33.5]   (33.5,49]   (49,64.5] (64.5,80.1]
        750        1399         779          72
> fit=lm(wage∼cut(age,4),data=Wage)
> coef(summary(fit))
                       Estimate Std. Error t value Pr(>|t|)
(Intercept)               94.16       1.48   63.79 0.00e+00
cut(age, 4)(33.5,49]      24.05       1.83   13.15 1.98e-38
cut(age, 4)(49,64.5]      23.66       2.07   11.44 1.04e-29
cut(age, 4)(64.5,80.1]     7.64       4.99    1.53 1.26e-01
Here cut() automatically picked the cutpoints at 33.5, 49, and 64.5 years of age. We could also have specified our own cutpoints directly using the breaks option. The cut() function returns an ordered categorical variable; the lm() function then creates a set of dummy variables for use in the regression. The age<33.5 category is left out, so the intercept coefficient of $94,160 can be interpreted as the average salary for those under 33.5 years of age, and the other coefficients can be interpreted as the average additional salary for those in the other age groups. We can produce predictions and plots just as we did in the case of the polynomial fit.
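As a minimal sketch (not part of the original lab), those predictions and a plot for the step-function fit could be produced along the following lines, reusing the age.grid and agelims objects created earlier; the object names are ours.
> preds=predict(fit,newdata=list(age=age.grid),se=TRUE)   # age.grid covers the same range as age, so cut() recreates the same four bins
> plot(age,wage,xlim=agelims,cex=.5,col="darkgrey")
> title("Step Function")
> lines(age.grid,preds$fit,lwd=2,col="blue")
> matlines(age.grid,cbind(preds$fit+2*preds$se.fit,preds$fit-2*preds$se.fit),lwd=1,col="blue",lty=3)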

308 7.8 Lab: Non-linear Modeling 293 7.8.2 Splines In order to fit regression splines in splines library. In Section R ,weusethe 7.4, we saw that regression splines can be fit by constructing an appropriate bs() function generates the entire matrix of matrix of basis functions. The bs() basis functions for splines with the specified set of knots. By default, cubic wage age using a regression spline is simple: to splines are produced. Fitting > library(splines) ∼ bs(age,knots=c(25,40,60)),data=Wage) > fit=lm(wage > pred=predict(fit,newdata=list(age=age.grid),se=T) > plot(age,wage,col="gray") > lines(age.grid,pred$fit,lwd=2) > lines(age.grid,pred$fit+2*pred$se ,lty="dashed") > lines(age.grid,pred$fit-2*pred$se ,lty="dashed") Here we have prespecified knots at ages 25, 40, and 60. This produces a spline with six basis functions. (Recall that a cubic spline with three knots has seven degrees of freedom; these degrees of freedom are used up by an df option to intercept, plus six basis functions.) We could also use the produce a spline with knots at uniform quantiles of the data. > dim(bs(age,knots=c(25,40,60))) [1] 3000 6 > dim(bs(age,df=6)) [1] 3000 6 > attr(bs(age,df=6),"knots") 25% 50% 75% 33.8 42.0 51.0 In this case R chooses knots at ages 33 . 8 , 42 . 0, and 51 . 0, which correspond age bs() also has . The function to the 25th, 50th, and 75th percentiles of a degree argument, so we can fit splines of any degree, rather than the default degree of 3 (which yields a cubic spline). ns() function. Here In order to instead fit a natural spline, we use the ns() we fit a natural spline with four degrees of freedom. > fit2=lm(wage ∼ ns(age,df=4),data=Wage) > pred2=predict(fit2,newdata=list(age=age.grid),se=T) > lines(age.grid, pred2$fit,col="red",lwd=2) As with the bs() function, we could instead specify the knots directly using the option. knots smooth.spline() function. In order to fit a smoothing spline, we use the smooth. Figure 7.8 was produced with the following code: spline() > plot(age,wage,xlim=agelims ,cex=.5,col="darkgrey") > title("Smoothing Spline") > fit=smooth.spline(age,wage,df=16) > fit2=smooth.spline(age,wage,cv=TRUE) > fit2$df [1] 6.8 > lines(fit,col="red",lwd=2)

309 294 7. Moving Beyond Linearity > lines(fit2,col="blue",lwd=2) > legend("topright",legend=c("16 DF" ,"6.8 DF"), col=c("red","blue"),lty=1,lwd=2,cex=.8) smooth.spline() df=16 .The Notice that in the first call to , we specified function then determines which value of λ leads to 16 degrees of freedom. In , we select the smoothness level by cross- smooth.spline() the second call to λ that yields 6.8 degrees of freedom. validation; this results in a value of loess() function. In order to perform local regression, we use the loess() > plot(age,wage,xlim=agelims ,cex=.5,col="darkgrey") > title("Local Regression ") ∼ > fit=loess(wage age,span=.2,data=Wage) ∼ > fit2=loess(wage age,span=.5,data=Wage) > lines(age.grid,predict(fit,data.frame(age=age.grid)), col="red",lwd=2) > lines(age.grid,predict(fit2,data.frame(age=age.grid)), col="blue",lwd=2) > legend("topright",legend=c("Span=0.2","Span=0.5"), col=c("red","blue"),lty=1,lwd=2,cex=.8) Here we have performed local linear regression using spans of 0 2and0 . 5: . that is, each neighborhood consists of 20 % or 50 % of the observations. The locfit library can also be used larger the span, the smoother the fit. The for fitting local regression models in R . 7.8.3 GAMs We now fit a GAM to predict using natural spline functions of year wage age education as a qualitative predictor, as in (7.16). Since , treating and this is just a big linear regression model using an appropriate choice of lm() function. basis functions, we can simply do this using the ∼ ns(year,4)+ns(age,5)+education,data=Wage) > gam1=lm(wage We now fit the model (7.16) using smoothing splines rather than natural splines. In order to fit more general sorts of GAMs, using smoothing splines or other components that cannot be expressed in terms of basis functions gam and then fit using least squares regression, we will need to use the library in R . s() gam library, is used to indicate that function, which is part of the The s() we would like to use a smoothing spline. We specify that the function of year should have 4 degrees of freedom, and that the function of age will is qualitative, we leave it as is, education have 5 degrees of freedom. Since and it is converted into four dummy variables. We use the gam() function in gam() order to fit a GAM using these components. All of the terms in (7.16) are fit simultaneously, taking each other into account to explain the response. > library(gam) > gam.m3=gam(wage ∼ s(year,4)+s(age,5)+education,data=Wage)

310 7.8 Lab: Non-linear Modeling 295 function: In order to produce Figure 7.12, we simply call the plot() > par(mfrow=c(1,3)) > plot(gam.m3, se=TRUE,col=" blue") gam The generic plot() , function recognizes that gam2 is an object of class and invokes the appropriate plot.gam() method. Conveniently, even though plot.gam() gam but rather of class lm is not of class still use plot.gam() gam1 ,wecan on it. Figure 7.11 was produced using the following expression: > plot.gam(gam1, se=TRUE, col="red") Notice here we had to use generic plot() plot.gam() rather than the function. year looks rather linear. We can perform a In these plots, the function of series of ANOVA tests in order to determine which of these three models is year ( M ), a GAM that uses a linear function best: a GAM that excludes 1 of ( M ). year ( M year ), or a GAM that uses a spline function of 3 2 > gam.m1=gam(wage ∼ s(age,5)+education,data=Wage) ∼ year+s(age,5)+education,data=Wage) > gam.m2=gam(wage > anova(gam.m1,gam.m2,gam.m3,test="F") Analysis of Deviance Table Model 1: wage ∼ s(age, 5) + education Model 2: wage ∼ year + s(age, 5) + education ∼ Model 3: wage s(year, 4) + s(age, 5) + education Resid. Df Resid. Dev Df Deviance F Pr(>F) 1 2990 3711730 2 2989 3693841 1 17889 14.5 0.00014 *** 3 2986 3689770 3 4071 1.1 0.34857 --- ’***’ 0.001 ’**’ 0.01 ’*’ 0.05 ’.’ 0.1 ’ ’ 1 Signif. codes: 0 We find that there is compelling evidence that a GAM with a linear func- tion of year is better than a GAM that does not include year at all (p-value = 0.00014). However, there is no evidence that a non-linear func- is needed (p-value = 0.349). In other words, based on the results year tion of M of this ANOVA, is preferred. 2 summary() function produces a summary of the gam fit. The > summary(gam.m3) Call: gam(formula = wage s(year, 4) + s(age, 5) + education, ∼ data = Wage) Deviance Residuals: Min 1Q Median 3Q Max -119.43 -19.70 -3.33 14.17 213.48 (Dispersion Parameter for gaussian family taken to be 1236) Null Deviance: 5222086 on 2999 degrees of freedom Residual Deviance: 3689770 on 2986 degrees of freedom

311 296 7. Moving Beyond Linearity AIC: 29888 Number of Local Scoring Iterations: 2 Effects DF for Terms and F-values for Nonparametric Df Npar Df Npar F Pr(F) (Intercept) 1 s(year, 4) 1 3 1.1 0.35 s(age, 5) 1 4 32.4 <2e-16 *** education 4 --- 0.001 ’**’ 0.01 ’*’ 0.05 ’.’ 0.1 ’ ’ 1 Signif. codes: 0 ’***’ and age correspond to a null hypothesis of a linear year The p-values for relationship versus the alternative of a non-linear relationship. The large year reinforces our conclusion from the ANOVA test that a lin- p-value for ear function is adequate for this term. However, there is very clear evidence age . that a non-linear term is required for We can make predictions from gam objects, just like from lm objects, predict() method for the class gam .Herewemakepredictionson using the the training set. > preds=predict(gam.m2,newdata=Wage) We can also use local regression fits as building blocks in a GAM, using the lo() function. lo() > gam.lo=gam(wage s(year,df=4)+lo(age,span=0.7)+education, ∼ data=Wage) > plot.gam(gam.lo, se=TRUE, col=" green") Here we have used local regression for the age term, with a span of 0 . 7. lo() function to create interactions before calling the We can also use the gam() function. For example, > gam.lo.i=gam(wage ∼ lo(year,age, span=0.5)+education, data=Wage) fits a two-term model, in which the first term is an interaction between year age , fit by a local regression surface. We can plot the resulting and akima package. two-dimensional surface if we first install the > library(akima) > plot(gam.lo.i) In order to fit a logistic regression GAM, we once again use the I() func- tion in constructing the binary response variable, and set family=binomial . > gam.lr=gam(I(wage>250) ∼ year+s(age,df=5)+education, family=binomial,data=Wage) > par(mfrow=c(1,3)) > plot(gam.lr,se=T,col="green")

It is easy to see that there are no high earners in the first category of education:
> table(education,I(wage>250))
education            FALSE TRUE
  1. < HS Grad         268    0
  2. HS Grad           966    5
  3. Some College      643    7
  4. College Grad      663   22
  5. Advanced Degree   381   45
Hence, we fit a logistic regression GAM using all but this category. This provides more sensible results.
> gam.lr.s=gam(I(wage>250)∼year+s(age,df=5)+education,family=binomial,data=Wage,subset=(education!="1. < HS Grad"))
> plot(gam.lr.s,se=T,col="green")

7.9 Exercises

Conceptual

1. It was mentioned in the chapter that a cubic regression spline with one knot at $\xi$ can be obtained using a basis of the form $x$, $x^2$, $x^3$, $(x-\xi)^3_+$, where $(x-\xi)^3_+ = (x-\xi)^3$ if $x > \xi$ and equals 0 otherwise. We will now show that a function of the form
$$f(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \beta_4 (x-\xi)^3_+$$
is indeed a cubic regression spline, regardless of the values of $\beta_0, \beta_1, \beta_2, \beta_3, \beta_4$.
(a) Find a cubic polynomial
$$f_1(x) = a_1 + b_1 x + c_1 x^2 + d_1 x^3$$
such that $f(x) = f_1(x)$ for all $x \le \xi$. Express $a_1, b_1, c_1, d_1$ in terms of $\beta_0, \beta_1, \beta_2, \beta_3, \beta_4$.
(b) Find a cubic polynomial
$$f_2(x) = a_2 + b_2 x + c_2 x^2 + d_2 x^3$$
such that $f(x) = f_2(x)$ for all $x > \xi$. Express $a_2, b_2, c_2, d_2$ in terms of $\beta_0, \beta_1, \beta_2, \beta_3, \beta_4$. We have now established that $f(x)$ is a piecewise polynomial.
(c) Show that $f_1(\xi) = f_2(\xi)$. That is, $f(x)$ is continuous at $\xi$.
(d) Show that $f_1'(\xi) = f_2'(\xi)$. That is, $f'(x)$ is continuous at $\xi$.

(e) Show that $f_1''(\xi) = f_2''(\xi)$. That is, $f''(x)$ is continuous at $\xi$.
Therefore, $f(x)$ is indeed a cubic spline.
Hint: Parts (d) and (e) of this problem require knowledge of single-variable calculus. As a reminder, given a cubic polynomial
$$f_1(x) = a_1 + b_1 x + c_1 x^2 + d_1 x^3,$$
the first derivative takes the form
$$f_1'(x) = b_1 + 2 c_1 x + 3 d_1 x^2$$
and the second derivative takes the form
$$f_1''(x) = 2 c_1 + 6 d_1 x.$$

2. Suppose that a curve $\hat{g}$ is computed to smoothly fit a set of $n$ points using the following formula:
$$\hat{g} = \arg\min_g \left( \sum_{i=1}^{n} (y_i - g(x_i))^2 + \lambda \int \left[ g^{(m)}(x) \right]^2 dx \right),$$
where $g^{(m)}$ represents the $m$th derivative of $g$ (and $g^{(0)} = g$). Provide example sketches of $\hat{g}$ in each of the following scenarios.
(a) $\lambda = \infty$, $m = 0$.
(b) $\lambda = \infty$, $m = 1$.
(c) $\lambda = \infty$, $m = 2$.
(d) $\lambda = \infty$, $m = 3$.
(e) $\lambda = 0$, $m = 3$.

3. Suppose we fit a curve with basis functions $b_1(X) = X$, $b_2(X) = (X-1)^2 I(X \ge 1)$. (Note that $I(X \ge 1)$ equals 1 for $X \ge 1$ and 0 otherwise.) We fit the linear regression model
$$Y = \beta_0 + \beta_1 b_1(X) + \beta_2 b_2(X) + \epsilon,$$
and obtain coefficient estimates $\hat{\beta}_0 = 1$, $\hat{\beta}_1 = 1$, $\hat{\beta}_2 = -2$. Sketch the estimated curve between $X = -2$ and $X = 2$. Note the intercepts, slopes, and other relevant information.

4. Suppose we fit a curve with basis functions $b_1(X) = I(0 \le X \le 2) - (X-1) I(1 \le X \le 2)$, $b_2(X) = (X-3) I(3 \le X \le 4) + I(4 < X \le 5)$.

5. Consider two curves, $\hat{g}_1$ and $\hat{g}_2$, defined by
$$\hat{g}_1 = \arg\min_g \left( \sum_{i=1}^{n} (y_i - g(x_i))^2 + \lambda \int \left[ g^{(3)}(x) \right]^2 dx \right),$$
$$\hat{g}_2 = \arg\min_g \left( \sum_{i=1}^{n} (y_i - g(x_i))^2 + \lambda \int \left[ g^{(4)}(x) \right]^2 dx \right),$$
where $g^{(m)}$ represents the $m$th derivative of $g$.
(a) As $\lambda \to \infty$, will $\hat{g}_1$ or $\hat{g}_2$ have the smaller training RSS?
(b) As $\lambda \to \infty$, will $\hat{g}_1$ or $\hat{g}_2$ have the smaller test RSS?
(c) For $\lambda = 0$, will $\hat{g}_1$ or $\hat{g}_2$ have the smaller training and test RSS?

Applied

6. In this exercise, you will further analyze the Wage data set considered throughout this chapter.
(a) Perform polynomial regression to predict wage using age. Use cross-validation to select the optimal degree d for the polynomial. What degree was chosen, and how does this compare to the results of hypothesis testing using ANOVA? Make a plot of the resulting polynomial fit to the data.
(b) Fit a step function to predict wage using age, and perform cross-validation to choose the optimal number of cuts. Make a plot of the fit obtained.
7. The Wage data set contains a number of other features not explored in this chapter, such as marital status (maritl), job class (jobclass), and others. Explore the relationships between some of these other predictors and wage, and use non-linear fitting techniques in order to fit flexible models to the data. Create plots of the results obtained, and write a summary of your findings.
8. Fit some of the non-linear models investigated in this chapter to the Auto data set. Is there evidence for non-linear relationships in this data set? Create some informative plots to justify your answer.
9. This question uses the variables dis (the weighted mean of distances to five Boston employment centers) and nox (nitrogen oxides concentration in parts per 10 million) from the Boston data. We will treat dis as the predictor and nox as the response.
(a) Use the poly() function to fit a cubic polynomial regression to predict nox using dis. Report the regression output, and plot the resulting data and polynomial fits.

315 300 7. Moving Beyond Linearity (b) Plot the polynomial fits for a range of different polynomial degrees (say, from 1 to 10), and report the associated residual sum of squares. (c) Perform cross-validation or a nother approach to select the opti- mal degree for the polynomial, and explain your results. bs() nox function to fit a regression spline to predict (d) Use the using dis . Report the output for the fit using four degrees of freedom. How did you choose the knots? Plot the resulting fit. (e) Now fit a regression spline for a range of degrees of freedom, and plot the resulting fits and report the resulting RSS. Describe the results obtained. (f) Perform cross-validation or a nother approach in order to select the best degrees of freedom for a regression spline on this data. Describe your results. College data set. 10. This question relates to the (a) Split the data into a training set and a test set. Using out-of-state tuition as the response and the other variables as the predictors, perform forward stepwise selection on the training set in order to identify a satisfactory model that uses just a subset of the predictors. (b) Fit a GAM on the training data, using out-of-state tuition as the response and the features selected in the previous step as the predictors. Plot the results, and explain your findings. (c) Evaluate the model obtained on the test set, and explain the results obtained. s there evidence of a non-linear (d) For which variables, if any, i relationship with the response? 11. In Section 7.7, it was mentioned that GAMs are generally fit using a backfitting approach. The idea behind backfitting is actually quite simple. We will now explore backfitting in the context of multiple linear regression. Suppose that we would like to perform multiple linear regression, but we do not have software to do so. Instead, we only have software to perform simple linear regression. Therefore, we take the following iterative approach: we repeatedly hold all but one coefficient esti- mate fixed at its current value, and update only that coefficient estimate using a simple linear regre ssion. The process is continued un- til convergence —that is, until the coefficien t estimates stop changing. We now try this out on a toy example.

(a) Generate a response $Y$ and two predictors $X_1$ and $X_2$, with $n = 100$.
(b) Initialize $\hat{\beta}_1$ to take on a value of your choice. It does not matter what value you choose.
(c) Keeping $\hat{\beta}_1$ fixed, fit the model
$$Y - \hat{\beta}_1 X_1 = \beta_0 + \beta_2 X_2 + \epsilon.$$
You can do this as follows:
> a=y-beta1*x1
> beta2=lm(a∼x2)$coef[2]
(d) Keeping $\hat{\beta}_2$ fixed, fit the model
$$Y - \hat{\beta}_2 X_2 = \beta_0 + \beta_1 X_1 + \epsilon.$$
You can do this as follows:
> a=y-beta2*x2
> beta1=lm(a∼x1)$coef[2]
(e) Write a for loop to repeat (c) and (d) 1,000 times. Report the estimates of $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\beta}_2$ at each iteration of the for loop. Create a plot in which each of these values is displayed, with $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\beta}_2$ each shown in a different color.
(f) Compare your answer in (e) to the results of simply performing multiple linear regression to predict $Y$ using $X_1$ and $X_2$. Use the abline() function to overlay those multiple linear regression coefficient estimates on the plot obtained in (e).
(g) On this data set, how many backfitting iterations were required in order to obtain a "good" approximation to the multiple regression coefficient estimates?
12. This problem is a continuation of the previous exercise. In a toy example with $p = 100$, show that one can approximate the multiple linear regression coefficient estimates by repeatedly performing simple linear regression in a backfitting procedure. How many backfitting iterations are required in order to obtain a "good" approximation to the multiple regression coefficient estimates? Create a plot to justify your answer.


318 8 Tree-Based Methods tree-based In this chapter, we describe methods for regression and classification. These involve stratifying or segmenting the predictor space into a number of simple regions. In order to make a prediction for a given observation, we typically use the mean or the mode of the training observa- tions in the region to which it belongs. Since the set of splitting rules used e summarized in a tree, these types of to segment the predictor space can b approaches are known as decision tree methods. decision tree Tree-based methods are simple and useful for interpretation. However, they typically are not competitive with the best supervised learning ap- proaches, such as those seen in Chapters 6 and 7, in terms of prediction accuracy. Hence in this chapter we also introduce bagging , random forests , and . Each of these approaches involves producing multiple trees boosting which are then combined to yield a si ngle consensus prediction. We will see that combining a large number of trees can often result in dramatic improvements in prediction accuracy , at the expense of some loss in inter- pretation. 8.1 The Basics of Decision Trees Decision trees can be applied to both reg ression and classification problems. We first consider regression problems, and then move on to classification. G. James et al., An Introduction to Statistical Learning: with Applications in R , 303 8, Springer Texts in Statistics 103, DOI 10.1007/978-1-4614-7138-7 © Springer Science+Business Media New York 2013

[Figure 8.1: a regression tree with a first split on Years < 4.5, a second split on Hits < 117.5, and leaves with mean log salaries 5.11, 6.00, and 6.74.]
FIGURE 8.1. For the Hitters data, a regression tree for predicting the log salary of a baseball player, based on the number of years that he has played in the major leagues and the number of hits that he made in the previous year. At a given internal node, the label indicates the left-hand branch emanating from that split; for instance, the split at the top of the tree sends observations with Years<4.5 to the left branch and those with Years>=4.5 to the right. The tree has two internal nodes and three terminal nodes, or leaves. The number in each leaf is the mean of the response for the observations that fall there.

8.1.1 Regression Trees

In order to motivate regression trees, we begin with a simple example.

Predicting Baseball Players' Salaries Using Regression Trees

We use the Hitters data set to predict a baseball player's Salary based on Years (the number of years that he has played in the major leagues) and Hits (the number of hits that he made in the previous year). We first remove observations that are missing Salary values, and log-transform Salary so that its distribution has more of a typical bell-shape. (Recall that Salary is measured in thousands of dollars.)
Figure 8.1 shows a regression tree fit to this data. It consists of a series of splitting rules, starting at the top of the tree. The top split assigns observations having Years<4.5 to the left branch.¹
¹ Both Years and Hits are integers in these data; the tree() function in R labels the splits at the midpoint between two adjacent values.

[Figure 8.2: the Years–Hits plane divided into regions R1, R2, and R3 at Years = 4.5 and Hits = 117.5.]
FIGURE 8.2. The three-region partition for the Hitters data set from the regression tree illustrated in Figure 8.1.

The predicted salary for these players is given by the mean response value for the players in the data set with Years<4.5. For such players, the mean log salary is 5.107, and so we make a prediction of $e^{5.107}$ thousands of dollars, i.e. $165,174, for these players. Players with Years>=4.5 are assigned to the right branch, and then that group is further subdivided by Hits. Overall, the tree stratifies or segments the players into three regions of predictor space: players who have played for four or fewer years, players who have played for five or more years and who made fewer than 118 hits last year, and players who have played for five or more years and who made at least 118 hits last year. These three regions can be written as $R_1 = \{X \mid \texttt{Years<4.5}\}$, $R_2 = \{X \mid \texttt{Years>=4.5}, \texttt{Hits<117.5}\}$, and $R_3 = \{X \mid \texttt{Years>=4.5}, \texttt{Hits>=117.5}\}$. Figure 8.2 illustrates the regions as a function of Years and Hits. The predicted salaries for these three groups are $1,000 \times e^{5.107}$ = $165,174, $1,000 \times e^{5.999}$ = $402,834, and $1,000 \times e^{6.740}$ = $845,346 respectively.
In keeping with the tree analogy, the regions R1, R2, and R3 are known as terminal nodes or leaves of the tree. As is the case for Figure 8.1, decision trees are typically drawn upside down, in the sense that the leaves are at the bottom of the tree. The points along the tree where the predictor space is split are referred to as internal nodes. In Figure 8.1, the two internal nodes are indicated by the text Years<4.5 and Hits<117.5. We refer to the segments of the trees that connect the nodes as branches.
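The tree() function mentioned in the footnote above comes from the tree package. The following is only a rough sketch of how a tree like the one in Figure 8.1 could be grown and pruned to three leaves; it is not the code used to produce the figure, and the object names are our own.
> library(ISLR)
> library(tree)
> Hitters=na.omit(Hitters)                        # drop players with missing Salary values
> tree.hitters=tree(log(Salary)∼Years+Hits,data=Hitters)
> pruned=prune.tree(tree.hitters,best=3)          # keep three terminal nodes, as in Figure 8.1
> plot(pruned)
> text(pruned,pretty=0)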

We might interpret the regression tree displayed in Figure 8.1 as follows: Years is the most important factor in determining Salary, and players with less experience earn lower salaries than more experienced players. Given that a player is less experienced, the number of hits that he made in the previous year seems to play little role in his salary. But among players who have been in the major leagues for five or more years, the number of hits made in the previous year does affect salary, and players who made more hits last year tend to have higher salaries. The regression tree shown in Figure 8.1 is likely an over-simplification of the true relationship between Hits, Years, and Salary. However, it has advantages over other types of regression models (such as those seen in Chapters 3 and 6): it is easier to interpret, and has a nice graphical representation.

Prediction via Stratification of the Feature Space

We now discuss the process of building a regression tree. Roughly speaking, there are two steps.
1. We divide the predictor space (that is, the set of possible values for $X_1, X_2, \ldots, X_p$) into $J$ distinct and non-overlapping regions, $R_1, R_2, \ldots, R_J$.
2. For every observation that falls into the region $R_j$, we make the same prediction, which is simply the mean of the response values for the training observations in $R_j$.
For instance, suppose that in Step 1 we obtain two regions, R1 and R2, and that the response mean of the training observations in the first region is 10, while the response mean of the training observations in the second region is 20. Then for a given observation $X = x$, if $x \in R_1$ we will predict a value of 10, and if $x \in R_2$ we will predict a value of 20.
We now elaborate on Step 1 above. How do we construct the regions $R_1, \ldots, R_J$? In theory, the regions could have any shape. However, we choose to divide the predictor space into high-dimensional rectangles, or boxes, for simplicity and for ease of interpretation of the resulting predictive model. The goal is to find boxes $R_1, \ldots, R_J$ that minimize the RSS, given by
$$\sum_{j=1}^{J} \sum_{i \in R_j} (y_i - \hat{y}_{R_j})^2, \qquad (8.1)$$
where $\hat{y}_{R_j}$ is the mean response for the training observations within the $j$th box. Unfortunately, it is computationally infeasible to consider every possible partition of the feature space into $J$ boxes. For this reason, we take a top-down, greedy approach that is known as recursive binary splitting. The approach is top-down because it begins at the top of the tree (at which point all observations belong to a single region) and then successively splits the predictor space; each split is indicated via two new branches further down on the tree. It is greedy because at each step of the tree-building process, the best split is made at that particular step, rather than looking ahead and picking a split that will lead to a better tree in some future step.
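To make the greedy step concrete, here is a small illustrative sketch, ours rather than the book's, of how the best cutpoint for a single predictor could be found by scanning candidate values of s and computing the RSS in (8.1) for the two resulting regions.
> best.split=function(x,y){
+   cuts=sort(unique(x))[-1]              # candidate cutpoints; drop the smallest so both regions are non-empty
+   rss=sapply(cuts,function(s){
+     left=y[x<s]; right=y[x>=s]
+     sum((left-mean(left))^2)+sum((right-mean(right))^2)
+   })
+   cuts[which.min(rss)]                  # the cutpoint s giving the greatest reduction in RSS
+ }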

In order to perform recursive binary splitting, we first select the predictor $X_j$ and the cutpoint $s$ such that splitting the predictor space into the regions $\{X \mid X_j < s\}$ and $\{X \mid X_j \ge s\}$ leads to the greatest possible reduction in RSS.

[Figure 8.3: four panels showing a partition not achievable by recursive binary splitting, a recursive binary partition with cutpoints t1–t4 and regions R1–R5, the corresponding tree, and the prediction surface.]
FIGURE 8.3. Top Left: A partition of two-dimensional feature space that could not result from recursive binary splitting. Top Right: The output of recursive binary splitting on a two-dimensional example. Bottom Left: A tree corresponding to the partition in the top right panel. Bottom Right: A perspective plot of the prediction surface corresponding to that tree.

Therefore, a better strategy is to grow a very large tree $T_0$, and then prune it back in order to obtain a subtree. How do we determine the best way to prune the tree? Intuitively, our goal is to select a subtree that leads to the lowest test error rate. Given a subtree, we can estimate its test error using cross-validation or the validation set approach. However, estimating the cross-validation error for every possible subtree would be too cumbersome, since there is an extremely large number of possible subtrees. Instead, we need a way to select a small set of subtrees for consideration.
Cost complexity pruning (also known as weakest link pruning) gives us a way to do just this. Rather than considering every possible subtree, we consider a sequence of trees indexed by a nonnegative tuning parameter α.

Algorithm 8.1 Building a Regression Tree
1. Use recursive binary splitting to grow a large tree on the training data, stopping only when each terminal node has fewer than some minimum number of observations.
2. Apply cost complexity pruning to the large tree in order to obtain a sequence of best subtrees, as a function of α.
3. Use K-fold cross-validation to choose α. That is, divide the training observations into K folds. For each k = 1, ..., K:
(a) Repeat Steps 1 and 2 on all but the kth fold of the training data.
(b) Evaluate the mean squared prediction error on the data in the left-out kth fold, as a function of α.
Average the results for each value of α, and pick α to minimize the average error.
4. Return the subtree from Step 2 that corresponds to the chosen value of α.

For each value of α there corresponds a subtree $T \subset T_0$ such that
$$\sum_{m=1}^{|T|} \sum_{i:\, x_i \in R_m} (y_i - \hat{y}_{R_m})^2 + \alpha |T| \qquad (8.4)$$
is as small as possible. Here $|T|$ indicates the number of terminal nodes of the tree $T$, $R_m$ is the rectangle (i.e. the subset of predictor space) corresponding to the $m$th terminal node, and $\hat{y}_{R_m}$ is the predicted response associated with $R_m$, that is, the mean of the training observations in $R_m$. The tuning parameter α controls a trade-off between the subtree's complexity and its fit to the training data. When α = 0, then the subtree $T$ will simply equal $T_0$, because then (8.4) just measures the training error. However, as α increases, there is a price to pay for having a tree with many terminal nodes, and so the quantity (8.4) will tend to be minimized for a smaller subtree. Equation 8.4 is reminiscent of the lasso (6.7) from Chapter 6, in which a similar formulation was used in order to control the complexity of a linear model.
It turns out that as we increase α from zero in (8.4), branches get pruned from the tree in a nested and predictable fashion, so obtaining the whole sequence of subtrees as a function of α is easy. We can select a value of α using a validation set or using cross-validation. We then return to the full data set and obtain the subtree corresponding to α. This process is summarized in Algorithm 8.1.
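In the tree package, the pruning and cross-validation steps of Algorithm 8.1 are carried out by prune.tree() and cv.tree(); the k component of the cv.tree() output corresponds to α. The code below is only a hedged sketch with object names of our choosing, not the code used for the figures in this section.
> library(ISLR)
> library(tree)
> set.seed(3)
> Hitters=na.omit(Hitters)
> big.tree=tree(log(Salary)∼Years+Hits+RBI+Walks+Runs+PutOuts,data=Hitters)  # Step 1: grow a large tree
> cv.out=cv.tree(big.tree)                           # Steps 2-3: K-fold CV over the pruning sequence
> best.size=cv.out$size[which.min(cv.out$dev)]       # subtree size with the smallest CV error
> prune.tree(big.tree,best=best.size)                # Step 4: return the chosen subtree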

[Figure 8.4: the unpruned regression tree, with splits on Years, Hits, RBI, Putouts, Walks, and Runs, and twelve terminal nodes whose mean log salaries range from 4.622 to 7.289.]
FIGURE 8.4. Regression tree analysis for the Hitters data. The unpruned tree that results from top-down greedy splitting on the training data is shown.

Figures 8.4 and 8.5 display the results of fitting and pruning a regression tree on the Hitters data, using nine of the features. First, we randomly divided the data set in half, yielding 132 observations in the training set and 131 observations in the test set. We then built a large regression tree on the training data and varied α in (8.4) in order to create subtrees with different numbers of terminal nodes. Finally, we performed six-fold cross-validation in order to estimate the cross-validated MSE of the trees as a function of α. (We chose to perform six-fold cross-validation because 132 is an exact multiple of six.) The unpruned regression tree is shown in Figure 8.4. The green curve in Figure 8.5 shows the CV error as a function of the number of leaves,² while the orange curve indicates the test error. Also shown are standard error bars around the estimated errors. For reference, the training error curve is shown in black. The CV error is a reasonable approximation of the test error: the CV error takes on its
² Although CV error is computed as a function of α, it is convenient to display the result as a function of |T|, the number of leaves; this is based on the relationship between α and |T| in the original tree grown to all the training data.

[Figure 8.5: training, cross-validation, and test mean squared error plotted against tree size.]
FIGURE 8.5. Regression tree analysis for the Hitters data. The training, cross-validation, and test MSE are shown as a function of the number of terminal nodes in the pruned tree. Standard error bands are displayed. The minimum cross-validation error occurs at a tree size of three.

minimum for a three-node tree, while the test error also dips down at the three-node tree (though it takes on its lowest value at the ten-node tree). The pruned tree containing three terminal nodes is shown in Figure 8.1.

8.1.2 Classification Trees

A classification tree is very similar to a regression tree, except that it is used to predict a qualitative response rather than a quantitative one. Recall that for a regression tree, the predicted response for an observation is given by the mean response of the training observations that belong to the same terminal node. In contrast, for a classification tree, we predict that each observation belongs to the most commonly occurring class of training observations in the region to which it belongs. In interpreting the results of a classification tree, we are often interested not only in the class prediction corresponding to a particular terminal node region, but also in the class proportions among the training observations that fall into that region.
The task of growing a classification tree is quite similar to the task of growing a regression tree. Just as in the regression setting, we use recursive binary splitting to grow a classification tree. However, in the classification setting, RSS cannot be used as a criterion for making the binary splits. A natural alternative to RSS is the classification error rate. Since we plan to assign an observation in a given region to the most commonly occurring class of training observations in that region, the classification error rate is simply the fraction of the training observations in that region that do not belong to the most common class:

$$E = 1 - \max_k (\hat{p}_{mk}). \qquad (8.5)$$
Here $\hat{p}_{mk}$ represents the proportion of training observations in the $m$th region that are from the $k$th class. However, it turns out that classification error is not sufficiently sensitive for tree-growing, and in practice two other measures are preferable.
The Gini index is defined by
$$G = \sum_{k=1}^{K} \hat{p}_{mk}(1 - \hat{p}_{mk}), \qquad (8.6)$$
a measure of total variance across the $K$ classes. It is not hard to see that the Gini index takes on a small value if all of the $\hat{p}_{mk}$'s are close to zero or one. For this reason the Gini index is referred to as a measure of node purity: a small value indicates that a node contains predominantly observations from a single class.
An alternative to the Gini index is cross-entropy, given by
$$D = -\sum_{k=1}^{K} \hat{p}_{mk} \log \hat{p}_{mk}. \qquad (8.7)$$
Since $0 \le \hat{p}_{mk} \le 1$, it follows that $0 \le -\hat{p}_{mk} \log \hat{p}_{mk}$. One can show that the cross-entropy will take on a value near zero if the $\hat{p}_{mk}$'s are all near zero or near one. Therefore, like the Gini index, the cross-entropy will take on a small value if the $m$th node is pure. In fact, it turns out that the Gini index and the cross-entropy are quite similar numerically.
When building a classification tree, either the Gini index or the cross-entropy are typically used to evaluate the quality of a particular split, since these two approaches are more sensitive to node purity than is the classification error rate. Any of these three approaches might be used when pruning the tree, but the classification error rate is preferable if prediction accuracy of the final pruned tree is the goal.
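The three measures (8.5)–(8.7) are simple functions of the class proportions in a node. The short sketch below is ours, not from the text, and computes all three for a hypothetical node.
> node.impurity=function(p){              # p: vector of class proportions in one node
+   p=p[p>0]                              # drop zero proportions, since 0*log(0) is taken to be 0
+   c(class.error=1-max(p),               # (8.5)
+     gini=sum(p*(1-p)),                  # (8.6)
+     cross.entropy=-sum(p*log(p)))       # (8.7)
+ }
> node.impurity(c(.05,.90,.05))           # impurity of a node with class proportions 0.05, 0.90, 0.05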

Figure 8.6 shows an example on the Heart data set. These data contain a binary outcome HD for 303 patients who presented with chest pain. An outcome value of Yes indicates the presence of heart disease based on an angiographic test, while No means no heart disease. There are 13 predictors including Age, Sex, Chol (a cholesterol measurement), and other heart and lung function measurements. Cross-validation results in a tree with six terminal nodes.

[Figure 8.6: top, the unpruned classification tree for the Heart data (with splits on Thal, Ca, ChestPain, Oldpeak, Slope, Age, RestECG, MaxHR, RestBP, Chol, and Sex); bottom left, training, cross-validation, and test error against tree size; bottom right, the pruned six-node tree.]
FIGURE 8.6. Heart data. Top: The unpruned tree. Bottom Left: Cross-validation error, training, and test error, for different sizes of the pruned tree. Bottom Right: The pruned tree corresponding to the minimal cross-validation error.

In our discussion thus far, we have assumed that the predictor variables take on continuous values. However, decision trees can be constructed even in the presence of qualitative predictor variables. For instance, in the Heart data, some of the predictors, such as Sex, Thal (Thallium stress test), and ChestPain, are qualitative. Therefore, a split on one of these variables amounts to assigning some of the qualitative values to one branch and assigning the remaining to the other branch. In Figure 8.6, some of the internal nodes correspond to splitting qualitative variables. For instance, the top internal node corresponds to splitting Thal. The text Thal:a indicates that the left-hand branch coming out of that node consists of observations with the first value of the Thal variable (normal), and the right-hand node consists of the remaining observations (fixed or reversible defects). The text ChestPain:bc two splits down the tree on the left indicates that the left-hand branch coming out of that node consists of observations with the second and third values of the ChestPain variable, where the possible values are typical angina, atypical angina, non-anginal pain, and asymptomatic.

Figure 8.6 has a surprising characteristic: some of the splits yield two terminal nodes that have the same predicted value. For instance, consider the split RestECG<1 near the bottom right of the unpruned tree. Regardless of the value of RestECG, a response value of Yes is predicted for those observations. Why, then, is the split performed at all? The split is performed because it leads to increased node purity. That is, all 9 of the observations corresponding to the right-hand leaf have a response value of Yes, whereas 7/11 of those corresponding to the left-hand leaf have a response value of Yes. Why is node purity important? Suppose that we have a test observation that belongs to the region given by that right-hand leaf. Then we can be pretty certain that its response value is Yes. In contrast, if a test observation belongs to the region given by the left-hand leaf, then its response value is probably Yes, but we are much less certain. Even though the split RestECG<1 does not reduce the classification error, it improves the Gini index and the cross-entropy, which are more sensitive to node purity.

8.1.3 Trees Versus Linear Models

Regression and classification trees have a very different flavor from the more classical approaches for regression and classification presented in Chapters 3 and 4. In particular, linear regression assumes a model of the form
$$f(X) = \beta_0 + \sum_{j=1}^{p} X_j \beta_j, \qquad (8.8)$$
whereas regression trees assume a model of the form
$$f(X) = \sum_{m=1}^{M} c_m \cdot 1_{(X \in R_m)}, \qquad (8.9)$$
where $R_1, \ldots, R_M$ represent a partition of feature space, as in Figure 8.3.
Which model is better? It depends on the problem at hand. If the relationship between the features and the response is well approximated by a linear model as in (8.8), then an approach such as linear regression will likely work well, and will outperform a method such as a regression tree that does not exploit this linear structure. If instead there is a highly non-linear and complex relationship between the features and the response as indicated by model (8.9), then decision trees may outperform classical approaches. An illustrative example is displayed in Figure 8.7. The relative performances of tree-based and classical approaches can be assessed by estimating the test error, using either cross-validation or the validation set approach (Chapter 5).
Of course, other considerations beyond simply test error may come into play in selecting a statistical learning method; for instance, in certain settings, prediction using a tree may be preferred for the sake of interpretability and visualization.

[Figure 8.7: four panels comparing a linear decision boundary with an axis-parallel tree-based boundary on two two-dimensional classification examples.]
FIGURE 8.7. Top Row: A two-dimensional classification example in which the true decision boundary is linear, and is indicated by the shaded regions. A classical approach that assumes a linear boundary (left) will outperform a decision tree that performs splits parallel to the axes (right). Bottom Row: Here the true decision boundary is non-linear. Here a linear model is unable to capture the true decision boundary (left), whereas a decision tree is successful (right).

8.1.4 Advantages and Disadvantages of Trees

Decision trees for regression and classification have a number of advantages over the more classical approaches seen in Chapters 3 and 4:
▲ Trees are very easy to explain to people. In fact, they are even easier to explain than linear regression!
▲ Some people believe that decision trees more closely mirror human decision-making than do the regression and classification approaches seen in previous chapters.
▲ Trees can be displayed graphically, and are easily interpreted even by a non-expert (especially if they are small).
▲ Trees can easily handle qualitative predictors without the need to create dummy variables.

▼ Unfortunately, trees generally do not have the same level of predictive accuracy as some of the other regression and classification approaches seen in this book.
However, by aggregating many decision trees, using methods like bagging, random forests, and boosting, the predictive performance of trees can be substantially improved. We introduce these concepts in the next section.

8.2 Bagging, Random Forests, Boosting

Bagging, random forests, and boosting use trees as building blocks to construct more powerful prediction models.

8.2.1 Bagging

The bootstrap, introduced in Chapter 5, is an extremely powerful idea. It is used in many situations in which it is hard or even impossible to directly compute the standard deviation of a quantity of interest. We see here that the bootstrap can be used in a completely different context, in order to improve statistical learning methods such as decision trees.
The decision trees discussed in Section 8.1 suffer from high variance. This means that if we split the training data into two parts at random, and fit a decision tree to both halves, the results that we get could be quite different. In contrast, a procedure with low variance will yield similar results if applied repeatedly to distinct data sets; linear regression tends to have low variance, if the ratio of n to p is moderately large. Bootstrap aggregation, or bagging, is a general-purpose procedure for reducing the variance of a statistical learning method; we introduce it here because it is particularly useful and frequently used in the context of decision trees.
Recall that given a set of $n$ independent observations $Z_1, \ldots, Z_n$, each with variance $\sigma^2$, the variance of the mean $\bar{Z}$ of the observations is given by $\sigma^2/n$. In other words, averaging a set of observations reduces variance. Hence a natural way to reduce the variance and hence increase the prediction accuracy of a statistical learning method is to take many training sets from the population, build a separate prediction model using each training set, and average the resulting predictions. In other words, we could calculate $\hat{f}^1(x), \hat{f}^2(x), \ldots, \hat{f}^B(x)$ using B separate training sets, and average them in order to obtain a single low-variance statistical learning model, given by
$$\hat{f}_{avg}(x) = \frac{1}{B} \sum_{b=1}^{B} \hat{f}^b(x).$$
Of course, this is not practical because we generally do not have access to multiple training sets. Instead, we can bootstrap, by taking repeated samples from the (single) training data set.

In this approach we generate B different bootstrapped training data sets. We then train our method on the $b$th bootstrapped training set in order to get $\hat{f}^{*b}(x)$, and finally average all the predictions, to obtain
$$\hat{f}_{bag}(x) = \frac{1}{B} \sum_{b=1}^{B} \hat{f}^{*b}(x).$$
This is called bagging.
While bagging can improve predictions for many regression methods, it is particularly useful for decision trees. To apply bagging to regression trees, we simply construct B regression trees using B bootstrapped training sets, and average the resulting predictions. These trees are grown deep, and are not pruned. Hence each individual tree has high variance, but low bias. Averaging these B trees reduces the variance. Bagging has been demonstrated to give impressive improvements in accuracy by combining together hundreds or even thousands of trees into a single procedure.
Thus far, we have described the bagging procedure in the regression context, to predict a quantitative outcome Y. How can bagging be extended to a classification problem where Y is qualitative? In that situation, there are a few possible approaches, but the simplest is as follows. For a given test observation, we can record the class predicted by each of the B trees, and take a majority vote: the overall prediction is the most commonly occurring class among the B predictions.
Figure 8.8 shows the results from bagging trees on the Heart data. The test error rate is shown as a function of B, the number of trees constructed using bootstrapped training data sets. We see that the bagging test error rate is slightly lower in this case than the test error rate obtained from a single tree. The number of trees B is not a critical parameter with bagging; using a very large value of B will not lead to overfitting. In practice we use a value of B sufficiently large that the error has settled down. Using B = 100 is sufficient to achieve good performance in this example.
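Bagging can be carried out with the randomForest package by setting mtry equal to the number of predictors (the reason becomes clear in Section 8.2.2). The sketch below is ours and rests on assumptions: the Heart data are not part of the ISLR package, so we assume they have been read into a data frame with the response HD coded as a factor and split into hypothetical data frames Heart.train and Heart.test.
> library(randomForest)
> set.seed(1)
> bag.heart=randomForest(HD∼.,data=Heart.train,mtry=13,ntree=100)  # mtry = all 13 predictors, i.e. bagging
> bag.pred=predict(bag.heart,newdata=Heart.test)                   # majority-vote class predictions
> mean(bag.pred!=Heart.test$HD)                                    # test error rate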

[Figure 8.8: test and OOB error for bagging and random forests on the Heart data, plotted against the number of trees.]
FIGURE 8.8. Bagging and random forest results for the Heart data. The test error (black and orange) is shown as a function of B, the number of bootstrapped training sets used. Random forests were applied with m = √p. The dashed line indicates the test error resulting from a single classification tree. The green and blue traces show the OOB error, which in this case is considerably lower.

Out-of-Bag Error Estimation

It turns out that there is a very straightforward way to estimate the test error of a bagged model, without the need to perform cross-validation or the validation set approach. Recall that the key to bagging is that trees are repeatedly fit to bootstrapped subsets of the observations. One can show that on average, each bagged tree makes use of around two-thirds of the observations.³ The remaining one-third of the observations not used to fit a given bagged tree are referred to as the out-of-bag (OOB) observations. We can predict the response for the $i$th observation using each of the trees in which that observation was OOB. This will yield around B/3 predictions for the $i$th observation. In order to obtain a single prediction for the $i$th observation, we can average these predicted responses (if regression is the goal) or can take a majority vote (if classification is the goal). This leads to a single OOB prediction for the $i$th observation. An OOB prediction can be obtained in this way for each of the n observations, from which the overall OOB MSE (for a regression problem) or classification error (for a classification problem) can be computed. The resulting OOB error is a valid estimate of the test error for the bagged model, since the response for each observation is predicted using only the trees that were not fit using that observation. Figure 8.8 displays the OOB error on the Heart data. It can be shown that with B sufficiently large, OOB error is virtually equivalent to leave-one-out cross-validation error. The OOB approach for estimating the test error is particularly convenient when performing bagging on large data sets for which cross-validation would be computationally onerous.
³ This relates to Exercise 2 of Chapter 5.

Variable Importance Measures

As we have discussed, bagging typically results in improved accuracy over prediction using a single tree. Unfortunately, however, it can be difficult to interpret the resulting model.

[Figure 8.9: variable importance for the Heart data; Thal, Ca, and ChestPain rank highest, Fbs lowest.]
FIGURE 8.9. A variable importance plot for the Heart data. Variable importance is computed using the mean decrease in Gini index, and expressed relative to the maximum.

Recall that one of the advantages of decision trees is the attractive and easily interpreted diagram that results, such as the one displayed in Figure 8.1. However, when we bag a large number of trees, it is no longer possible to represent the resulting statistical learning procedure using a single tree, and it is no longer clear which variables are most important to the procedure. Thus, bagging improves prediction accuracy at the expense of interpretability.
Although the collection of bagged trees is much more difficult to interpret than a single tree, one can obtain an overall summary of the importance of each predictor using the RSS (for bagging regression trees) or the Gini index (for bagging classification trees). In the case of bagging regression trees, we can record the total amount that the RSS (8.1) is decreased due to splits over a given predictor, averaged over all B trees. A large value indicates an important predictor. Similarly, in the context of bagging classification trees, we can add up the total amount that the Gini index (8.6) is decreased by splits over a given predictor, averaged over all B trees.
A graphical representation of the variable importances in the Heart data is shown in Figure 8.9. We see the mean decrease in Gini index for each variable, relative to the largest. The variables with the largest mean decrease in Gini index are Thal, Ca, and ChestPain.
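With the randomForest package, these importance measures can be extracted from a bagged or random-forest fit, such as the hypothetical bag.heart object sketched earlier, using importance() and varImpPlot(); again, this is our own sketch rather than the code used for Figure 8.9.
> library(randomForest)
> bag.heart=randomForest(HD∼.,data=Heart.train,mtry=13,importance=TRUE)  # refit, storing importance measures
> importance(bag.heart)      # mean decrease in accuracy and in Gini index, per predictor
> varImpPlot(bag.heart)      # a plot in the spirit of Figure 8.9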

335 320 8. Tree-Based Methods 8.2.2 Random Forests Random forests provide an improvement over bagged trees by way of a random decorrelates the trees. As in bagging, we build a number small tweak that forest of decision trees on bootstrapped training samples. But when building these decision trees, each time a split in a tree is considered, a random sample of is chosen as split candidates from the full set of p predictors. m predictors predictors. A fresh sample of m The split is allowed to use only one of those √ p —that predictors is taken at each split, and typically we choose m m ≈ is, the number of predictors consider ed at each split is approximately equal to the square root of the total number of predictors (4 out of the 13 for the Heart data). In other words, in building a random forest, at each split in the tree, the algorithm is not even allowed to consider a majority of the available predictors. This may sound crazy, but it has a clever rationale. Suppose that there is one very strong predictor in the data set, along with a num- ber of other moderately strong predictors. Then in the collection of bagged trees, most or all of the trees will use this strong predictor in the top split. Consequently, all of the bagged trees will look quite similar to each other. Hence the predictions from the bagged trees will be highly correlated. Un- fortunately, averaging many highly correlated quantities does not lead to as large of a reduction in variance as averaging many uncorrelated quanti- ties. In particular, this means that bagging will not lead to a substantial reduction in variance over a single tree in this setting. Random forests over come this problem by forci ng each split to consider − m p /p of the only a subset of the predictors. Therefore, on average ( ) splits will not even consider the strong predictor, and so other predictors decorrelating will have more of a chance. We can think of this process as the trees, thereby making the average of the resulting trees less variable and hence more reliable. The main difference between bagging and random forests is the choice of predictor subset size m . For instance, if a random forest is built using Heart data, random p , then this amounts simply to bagging. On the m = √ p leads to a reduction in both test error and OOB error forests using = m over bagging (Figure 8.8). Using a small value of m in building a random forest will typically be helpful when we have a large number of correlated predictors. We applied random forests to a high-dimensional biological data set consisting of ex- pression measurements of 4,718 genes measured on tissue samples from 349 patients. There are around 20,000 genes in humans, and individual genes have different levels of activity, or expression, in particular cells, tissues, and biological conditions. In this data set, each of the patient samples has a qualitative label with 15 different levels: either normal or 1 of 14 different types of cancer. Our goal was to use random forests to predict cancer type based on the 500 genes that have the largest variance in the training set.
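A random forest differs from the bagging sketch given earlier only in the choice of mtry: for the 13 Heart predictors, √13 ≈ 3.6, so roughly 3 or 4 predictors are tried at each split (randomForest's default for classification is the floor of √p). The following is again a hedged sketch using the hypothetical Heart.train and Heart.test data frames assumed above.
> library(randomForest)
> set.seed(1)
> rf.heart=randomForest(HD∼.,data=Heart.train,mtry=4)   # m is roughly sqrt(p) predictors per split
> rf.pred=predict(rf.heart,newdata=Heart.test)
> mean(rf.pred!=Heart.test$HD)                          # compare with the bagging test error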

[Figure 8.10 plots Test Classification Error (roughly 0.2 to 0.5) against Number of Trees (0 to 500) for three values of m: m = p, m = p/2, and m = √p.]

FIGURE 8.10. Results from random forests for the 15-class gene expression data set with p = 500 predictors. The test error is displayed as a function of the number of trees. Each colored line corresponds to a different value of m, the number of predictors available for splitting at each interior tree node. Random forests lead to a slight improvement over bagging (m = p).

of the other trees. Boosting works in a similar way, except that the trees are grown sequentially: each tree is grown using information from previously grown trees. Boosting does not involve bootstrap sampling; instead each tree is fit on a modified version of the original data set.

Algorithm 8.2 Boosting for Regression Trees

1. Set $\hat{f}(x) = 0$ and $r_i = y_i$ for all $i$ in the training set.

2. For $b = 1, 2, \ldots, B$, repeat:

   (a) Fit a tree $\hat{f}^b$ with $d$ splits ($d + 1$ terminal nodes) to the training data $(X, r)$.

   (b) Update $\hat{f}$ by adding in a shrunken version of the new tree:
   $$\hat{f}(x) \leftarrow \hat{f}(x) + \lambda \hat{f}^b(x). \tag{8.10}$$

   (c) Update the residuals,
   $$r_i \leftarrow r_i - \lambda \hat{f}^b(x_i). \tag{8.11}$$

3. Output the boosted model,
   $$\hat{f}(x) = \sum_{b=1}^{B} \lambda \hat{f}^b(x). \tag{8.12}$$

Consider first the regression setting. Like bagging, boosting involves combining a large number of decision trees, $\hat{f}^1, \ldots, \hat{f}^B$. Boosting is described in Algorithm 8.2.

What is the idea behind this procedure? Unlike fitting a single large decision tree to the data, which amounts to fitting the data hard and potentially overfitting, the boosting approach instead learns slowly. Given the current model, we fit a decision tree to the residuals from the model. That is, we fit a tree using the current residuals, rather than the outcome $Y$, as the response. We then add this new decision tree into the fitted function in order to update the residuals. Each of these trees can be rather small, with just a few terminal nodes, determined by the parameter $d$ in the algorithm. By fitting small trees to the residuals, we slowly improve $\hat{f}$ in areas where it does not perform well. The shrinkage parameter $\lambda$ slows the process down even further, allowing more and different shaped trees to attack the residuals. In general, statistical learning approaches that learn slowly tend to perform well. Note that in boosting, unlike in bagging, the construction of each tree depends strongly on the trees that have already been grown.
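To make Algorithm 8.2 concrete, here is a minimal sketch that carries out the boosting loop directly, using small rpart regression trees on the Boston data from the lab. The particular choices B = 1000, λ = 0.01, and one-split trees (d = 1, via maxdepth = 1) are illustrative only; the gbm package used in Section 8.3.4 implements this kind of boosting far more efficiently.

library(rpart)
library(MASS)   # for the Boston data

set.seed(1)
train <- sample(1:nrow(Boston), nrow(Boston) / 2)
x <- Boston[train, ]
B <- 1000; lambda <- 0.01

# Step 1: start with f_hat = 0, so the residuals equal the response
r <- x$medv
trees <- vector("list", B)

# Step 2: repeatedly fit a small tree to the current residuals
for (b in 1:B) {
  fit <- rpart(r ~ . - medv, data = x,
               control = rpart.control(maxdepth = 1, cp = 0))
  trees[[b]] <- fit
  r <- r - lambda * predict(fit, x)      # (8.11): shrink and update the residuals
}

# Step 3: the boosted prediction is the sum of the shrunken trees, as in (8.12)
boost.pred <- function(newdata) {
  preds <- sapply(trees, predict, newdata = newdata)
  rowSums(lambda * preds)
}
mean((boost.pred(Boston[-train, ]) - Boston$medv[-train])^2)  # test MSE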

[Figure 8.11 plots Test Classification Error (roughly 0.05 to 0.25) against Number of Trees (0 to 5000) for three methods: Boosting with depth = 1, Boosting with depth = 2, and a random forest with m = √p.]

FIGURE 8.11. Results from performing boosting and random forests on the 15-class gene expression data set in order to predict cancer versus normal. The test error is displayed as a function of the number of trees. For the two boosted models, λ = 0.01. Depth-1 trees slightly outperform depth-2 trees, and both outperform the random forest, although the standard errors are around 0.02, making none of these differences significant. The test error rate for a single tree is 24%.

We have just described the process of boosting regression trees. Boosting classification trees proceeds in a similar but slightly more complex way, and the details are omitted here. Boosting has three tuning parameters:

1. The number of trees B. Unlike bagging and random forests, boosting can overfit if B is too large, although this overfitting tends to occur slowly if at all. We use cross-validation to select B.

2. The shrinkage parameter λ, a small positive number. This controls the rate at which boosting learns. Typical values are 0.01 or 0.001, and the right choice can depend on the problem. Very small λ can require using a very large value of B in order to achieve good performance.

3. The number d of splits in each tree, which controls the complexity of the boosted ensemble. Often d = 1 works well, in which case each tree is a stump, consisting of a single split. In this case, the boosted ensemble is fitting an additive model, since each term involves only a single variable. More generally d is the interaction depth, and controls the interaction order of the boosted model, since d splits can involve at most d variables.

In Figure 8.11, we applied boosting to the 15-class cancer gene expression data set, in order to develop a classifier that can distinguish the normal class from the 14 cancer classes. We display the test error as a function of the total number of trees and the interaction depth d. We see that simple

339 324 8. Tree-Based Methods stumps with an interaction depth of one perform well if enough of them are included. This model outperforms the depth-two model, and both out- perform a random forest. This highlights one difference between boosting and random forests: in boosting, because the growth of a particular tree takes into account the other trees that have already been grown, smaller trees are typically sufficient. Using smaller trees can aid in interpretability as well; for instance, using stumps leads to an additive model. 8.3 Lab: Decision Trees 8.3.1 Fitting Classification Trees library is used to construct classification and regression trees. The tree > library(tree) Carseats data set. In these We first use classification trees to analyze the data, is a continuous variable, and so we begin by recoding it as a Sales binary variable. We use the ifelse() a variable, called function to create ifelse() High Yes if the Sales variable exceeds 8, and , which takes on a value of takes on a value of No otherwise. > library(ISLR) > attach(Carseats) > High=ifelse(Sales <=8,"No","Yes") Finally, we use the data.frame() function to merge High with the rest of the Carseats data. > Carseats=data.frame(Carseats,High) We now use the function to fit a classification tree in order to predict tree() tree() using all variables but Sales High tree() function is quite . The syntax of the similar to that of the lm() function. > tree.carseats=tree(High ∼ .-Sales, Carseats) The function lists the variables that are used as internal nodes summary() in the tree, the number of terminal nodes, and the (training) error rate. > summary(tree.carseats) Classification tree: tree(formula = High ∼ . - Sales, data = Carseats) Variables actually used in tree construction: [1] "ShelveLoc" "Price" "Income" "CompPrice" [5] "Population" "Advertising" "Age" "US" Number of terminal nodes: 27 Residual mean deviance: 0.4575 = 170.7 / 373 Misclassification error rate: 0.09 = 36 / 400

340 8.3 Lab: Decision Trees 325 We see that the training error rate is 9 %. For classification trees, the de- viance reported in the output of summary() is given by ∑ ∑ , p n log ˆ 2 − mk mk m k th terminal node that is the number of observations in the m where n mk belong to the k th class. A small deviance i ndicates a tree that provides a good fit to the (training) data. The reported is residual mean deviance simply the deviance divided by −| T n | − 27 = 373. , which in this case is 400 0 One of the most attractive properties of trees is that they can be plot() function to display the tree struc- graphically displayed. We use the ture, and the function to display the node labels. The argument text() instructs R pretty=0 to include the category names for any qualitative pre- dictors, rather than simply disp laying a letter for each category. > plot(tree.carseats) > text(tree.carseats,pretty=0) The most important indicator of Sales appears to be shelving location, since the first branch differentiates Good Bad and Medium locations from locations. If we just type the name of the tree object, prints output corresponding R R displays the split criterion (e.g. Price<92.5 ), the to each branch of the tree. number of observations in that branch, the deviance, the overall prediction for the branch ( Yes No ), and the fraction of observations in that branch or Yes No . Branches that lead to terminal nodes are and that take on values of indicated using asterisks. > tree.carseats node), split, n, deviance, yval, (yprob) * denotes terminal node 1) root 400 541.5 No ( 0.590 0.410 ) 2) ShelveLoc: Bad,Medium 315 390.6 No ( 0.689 0.311 ) 4) Price < 92.5 46 56.53 Yes ( 0.304 0.696 ) 8) Income < 57 10 12.22 No ( 0.700 0.300 ) In order to properly evaluate the performance of a classification tree on these data, we must estimate the test error rather than simply computing the training error. We split the observations into a training set and a test set, build the tree using the training set, and evaluate its performance on predict() function can be used for this purpose. In the the test data. The case of a classification tree, the argument type="class" instructs R to return the actual class prediction. This approach leads to correct predictions for around 71 . 5 % of the locations in the test data set. > set.seed(2) > train=sample(1:nrow(Carseats), 200) > Carseats.test=Carseats[-t rain ,] > High.test=High[-train]

341 326 8. Tree-Based Methods ∼ .-Sales, > tree.carseats=tree(High Carseats,subset=train) type="class") test, > tree.pred=predict(tree.carseats,Carseats. pred, High.test) > table(tree. High.test tree.pred No Yes No 86 27 Yes 30 57 > (86+57)/200 [1] 0.715 Next, we consider whether pruning the tree might lead to improved cv.tree() performs cross-validation in order to results. The function cv.tree() determine the optimal level of tree complexity; cost complexity pruning is used in order to select a sequence of trees for consideration. We use FUN=prune.misclass in order to indicate that we want the the argument classification error rate to guide the cross-validation and pruning process, cv.tree() function, which is deviance. The rather than the default for the cv.tree() function reports the number of terminal nodes of each tree con- sidered ( size ) as well as the corresponding error rate and the value of the k α in (8.4)). , which corresponds to cost-complexity parameter used ( > set.seed(3) > cv.carseats=cv.tree(tree.carseats,FUN=prune.misclass) > names(cv.carseats) [1] "size" "dev" "k" "method" > cv.carseats $size [1]1917141397321 $dev [1] 55 55 53 52 50 56 69 65 80 $k [1] -Inf 0.0000000 0.6666667 1.0000000 1.7500000 2.0000000 4.2500000 [8] 5.0000000 23.0000000 $method [1] "misclass" attr(,"class") [1] "prune" "tree.sequence" Note that, despite the name, dev corresponds to the cross-validation error rate in this instance. The tree with 9 terminal nodes results in the lowest cross-validation error rate, with 50 cross-validation errors. We plot the error size and k . rate as a function of both > par(mfrow=c(1,2)) > plot(cv.carseats$size ,cv.carseats$dev ,type="b") > plot(cv.carseats$k ,cv.carseats$dev ,type="b")

342 8.3 Lab: Decision Trees 327 prune.misclass() function in order to prune the tree to We now apply the prune. obtain the nine-node tree. misclass() > prune.carseats=prune.misclass(tree.carseats,best=9) > plot(prune.carseats) > text(prune.carseats,pretty=0) How well does this pruned tree perform on the test data set? Once again, we apply the function. predict() > tree.pred=predict(prune.carseats,Carseats. test, type="class") > table(tree. High.test) pred, High.test tree.pred No Yes No 94 24 Yes 22 60 > (94+60)/200 [1] 0.77 correctly classified, so not only has Now 77 % of the test observations are the pruning process produced a more interpretable tree, but it has also improved the classification accuracy. best , we obtain a larger pruned tree with lower If we increase the value of classification accuracy: > prune.carseats=prune.misclass(tree.carseats,best=15) > plot(prune.carseats) > text(prune.carseats,pretty=0) test, > tree.pred=predict(prune.carseats,Carseats. type="class") > table(tree. pred, High.test) High.test tree.pred No Yes No 86 22 Yes 30 62 > (86+62)/200 [1] 0.74 8.3.2 Fitting Regression Trees Here we fit a regression tree to the Boston data set. First, we create a training set, and fit the tree to the training data. > library(MASS) > set.seed(1) > train = sample(1:nrow(Boston), nrow(Boston)/2) > tree.boston=tree(medv ∼ .,Boston , subset=train) > summary(tree.boston) Regression tree: tree(formula = medv ∼ ., data = Boston , subset = train) Variables actually used in tree construction: [1] "lstat" "rm" "dis" Number of terminal nodes: 8

343 328 8. Tree-Based Methods Residual mean deviance: 12.65 = 3099 / 245 Distribution of residuals: Min. 1st Qu. Median Mean 3rd Qu. Max. -2.0420 -0.0536 -14.1000 0.0000 1.9600 12.6000 Notice that the output of summary() indicates that only three of the vari- ables have been used in constructing th e tree. In the context of a regression tree, the deviance is simply the sum o f squared errors for the tree. We now plot the tree. > plot(tree.boston) > text(tree. boston , pretty=0) lstat measures the percentage of individuals with lower The variable e indicates that lower values of socioeconomic status. The tre cor- lstat tree predicts a median house price respond to more expensive houses. The of $46 , 400 for larger homes in suburbs in which residents have high socioe- rm>=7.437 and ). lstat<9.715 conomic status ( cv.tree() function to see whether pruning the tree will Now we use the improve performance. > cv.boston=cv.tree(tree.boston) > plot(cv.boston$size ,cv.boston$dev ,type=’b’) In this case, the most complex tree i s selected by cross-validation. How- ever, if we wish to prune the tree, we could do so as follows, using the function: prune.tree() prune.tree() > prune.boston=prune.tree(tree.boston ,best=5) > plot(prune.boston) > text(prune. boston , pretty=0) In keeping with the cross-validation results, we use the unpruned tree to make predictions on the test set. train ,]) > yhat=predict(tree.boston ,newdata=Boston[- > boston.test=Boston[-train,"medv"] > plot(yhat,boston.test) > abline(0,1) > mean((yhat-boston.test)^2) [1] 25.05 In other words, the test set MSE associated with the regression tree is 25 . 05. The square root of the MSE is therefore around 5 . 005, indicating that this model leads to test predictions that are within around $5 005 of , the true median home value for the suburb. 8.3.3 Bagging and Random Forests Here we apply bagging and random forests to the Boston data, using the randomForest package in R . The exact results obtained in this section may depend on the version of R and the version of the randomForest package

344 8.3 Lab: Decision Trees 329 installed on your computer. Recall that bagging is simply a special case of a random forest with p . Therefore, the m = function can randomForest() random be used to perform both random forests and bagging. We perform bagging Forest() as follows: > library( randomForest) > set.seed(1) randomForest( ∼ .,data=Bo ston , subset= train, > bag.boston= medv mtry=13,importance=TRUE) > bag.boston Call: formula = medv randomForest( ., data = Boston , mtry = 13, ∼ importance = TRUE, subset = train) Type of random forest: regression Number of trees: 500 No. of variables tried at each split: 13 Mean of squared residuals: 10.77 % Var explained: 86.96 mtry=13 indicates that all 13 predic tors should be considered The argument for each split of the tree—in other words, that bagging should be done. How well does this bagged model perform on the test set? > yhat.bag = predict(bag. boston , newdata=Boston[- train ,]) > plot(yhat.bag, boston.test) > abline(0,1) > mean((yhat.bag-boston.test)^2) [1] 13.16 The test set MSE associated with the bagged regression tree is 13 . 16, almost half that obtained using an optimally-pruned single tree. We could change the number of trees grown by using the ntree argument: randomForest() randomForest( medv ∼ .,data=Bo ston , > bag.boston= train, subset= mtry=13,ntree=25) > yhat.bag = predict(bag. boston , newdata=Boston[- train ,]) > mean((yhat.bag-boston.test)^2) [1] 13.31 Growing a random forest proceeds in e xactly the same way, except that we use a smaller value of the mtry argument. By default, randomForest() p/ 3 variables when building a rando m forest of regression trees, and uses √ p variables when building a random forest of classification trees. Here we use mtry = 6 . > set.seed(1) train, > rf.boston= medv ∼ .,data=Boston , subset= randomForest( mtry=6,importance =TRUE) > yhat.rf = predict(rf.boston ,newdata=Boston[- train ,]) > mean((yhat.rf-boston.test)^2) [1] 11.31

345 330 8. Tree-Based Methods . 31; this indicates that random forests yielded an The test set MSE is 11 improvement over bagging in this case. Using the importance() function, we can view the importance of each importance() variable. > importance(rf.boston) %IncMSE IncNodePurity crim 12.384 1051.54 zn 2.103 50.31 indus 8.390 1017.64 chas 2.294 56.32 nox 12.791 1107.31 rm 30.754 5917.26 age 10.334 552.27 dis 14.641 1223.93 rad 3.583 84.30 tax 8.139 435.71 ptratio 11.274 817.33 black 8.097 367.00 lstat 30.962 7713.63 Two measures of variable importance are reported. The former is based upon the mean decrease of accuracy in pr edictions on the out of bag samples when a given variable is excluded from the model. The latter is a measure of the total decrease in node impurity that results from splits over that variable, averaged over all trees (this was plotted in Figure 8.9). In the case of regression trees, the node impurity is measured by the training RSS, and for classification trees by th e deviance. Plots of these importance varImpPlot() function. measures can be produced using the varImpPlot() > varImpPlot(rf.boston) The results indicate that across all o f the trees consider ed in the random forest, the wealth level of the community ( lstat rm ) ) and the house size ( are by far the two most important variables. 8.3.4 Boosting gbm package, and within it the gbm() function, to fit boosted Here we use the gbm() regression trees to the Boston data set. We run gbm() with the option distribution="gaussian" since this is a regression problem; if it were a bi- nary classification problem, we would use .The distribution="bernoulli" n.trees=5000 indicates that we want 5000 trees, and the option argument interaction.depth=4 limits the depth of each tree. > library(gbm) > set.seed(1) > boost.boston=gbm(medv ∼ .,data=Boston[train,], distribution= "gaussian",n.trees=5000,interaction .depth=4) The summary() function produces a relative influence plot and also outputs the relative influence statistics.

> summary(boost.boston)
            var rel.inf
1         lstat   45.96
2            rm   31.22
3           dis    6.81
4          crim    4.07
5           nox    2.56
6       ptratio    2.27
7         black    1.80
8           age    1.64
9           tax    1.36
10        indus    1.27
11         chas    0.80
12          rad    0.20
13           zn    0.015

We see that lstat and rm are by far the most important variables. We can also produce partial dependence plots for these two variables. These plots illustrate the marginal effect of the selected variables on the response after integrating out the other variables. In this case, as we might expect, median house prices are increasing with rm and decreasing with lstat.

> par(mfrow=c(1,2))
> plot(boost.boston, i="rm")
> plot(boost.boston, i="lstat")

We now use the boosted model to predict medv on the test set:

> yhat.boost=predict(boost.boston, newdata=Boston[-train,], n.trees=5000)
> mean((yhat.boost-boston.test)^2)
[1] 11.8

The test MSE obtained is 11.8; similar to the test MSE for random forests and superior to that for bagging. If we want to, we can perform boosting with a different value of the shrinkage parameter λ in (8.10). The default value is 0.001, but this is easily modified. Here we take λ = 0.2.

> boost.boston=gbm(medv~., data=Boston[train,], distribution="gaussian",
    n.trees=5000, interaction.depth=4, shrinkage=0.2, verbose=F)
> yhat.boost=predict(boost.boston, newdata=Boston[-train,], n.trees=5000)
> mean((yhat.boost-boston.test)^2)
[1] 11.5

In this case, using λ = 0.2 leads to a slightly lower test MSE than λ = 0.001.
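The discussion in Section 8.2.3 recommends choosing the number of trees B by cross-validation. As a hedged sketch (the number of folds and the shrinkage value are illustrative choices, not recommendations from the text), gbm() can carry out this cross-validation itself, and gbm.perf() then reports the estimated optimal number of trees:

library(gbm)
library(MASS)

set.seed(1)
train <- sample(1:nrow(Boston), nrow(Boston) / 2)

# Fit boosted trees with built-in 5-fold cross-validation
boost.cv <- gbm(medv ~ ., data = Boston[train, ], distribution = "gaussian",
                n.trees = 5000, interaction.depth = 4, shrinkage = 0.01,
                cv.folds = 5, verbose = FALSE)

# Estimated optimal number of trees B (also plots the training and CV error curves)
best.B <- gbm.perf(boost.cv, method = "cv")
best.B

# Use only the first best.B trees when predicting on the test set
yhat <- predict(boost.cv, newdata = Boston[-train, ], n.trees = best.B)
mean((yhat - Boston$medv[-train])^2)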

8.4 Exercises

Conceptual

1. Draw an example (of your own invention) of a partition of two-dimensional feature space that could result from recursive binary splitting. Your example should contain at least six regions. Draw a decision tree corresponding to this partition. Be sure to label all aspects of your figures, including the regions $R_1, R_2, \ldots$, the cutpoints $t_1, t_2, \ldots$, and so forth.
   Hint: Your result should look something like Figures 8.1 and 8.2.

2. It is mentioned in Section 8.2.3 that boosting using depth-one trees (or stumps) leads to an additive model: that is, a model of the form
   $$f(X) = \sum_{j=1}^{p} f_j(X_j).$$
   Explain why this is the case. You can begin with (8.12) in Algorithm 8.2.

3. Consider the Gini index, classification error, and cross-entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of $\hat{p}_{m1}$. The x-axis should display $\hat{p}_{m1}$, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy.
   Hint: In a setting with two classes, $\hat{p}_{m1} = 1 - \hat{p}_{m2}$. You could make this plot by hand, but it will be much easier to make in R.

4. This question relates to the plots in Figure 8.12.
   (a) Sketch the tree corresponding to the partition of the predictor space illustrated in the left-hand panel of Figure 8.12. The numbers inside the boxes indicate the mean of Y within each region.
   (b) Create a diagram similar to the left-hand panel of Figure 8.12, using the tree illustrated in the right-hand panel of the same figure. You should divide up the predictor space into the correct regions, and indicate the mean for each region.

5. Suppose we produce ten bootstrapped samples from a data set containing red and green classes. We then apply a classification tree to each bootstrapped sample and, for a specific value of X, produce 10 estimates of P(Class is Red | X):
   0.1, 0.15, 0.2, 0.2, 0.55, 0.6, 0.6, 0.65, 0.7, and 0.75.

[Figure 8.12: the left panel shows a partition of the (X1, X2) predictor space into rectangular regions, each labelled with the mean of Y within that region; the right panel shows a decision tree with splits on X1 and X2 and a mean of Y reported in each terminal node.]

FIGURE 8.12. Left: A partition of the predictor space corresponding to Exercise 4a. Right: A tree corresponding to Exercise 4b.

   There are two common ways to combine these results together into a single class prediction. One is the majority vote approach discussed in this chapter. The second approach is to classify based on the average probability. In this example, what is the final classification under each of these two approaches?

6. Provide a detailed explanation of the algorithm that is used to fit a regression tree.

Applied

7. In the lab, we applied random forests to the Boston data using mtry=6 and using ntree=25 and ntree=500. Create a plot displaying the test error resulting from random forests on this data set for a more comprehensive range of values for mtry and ntree. You can model your plot after Figure 8.10. Describe the results obtained.

8. In the lab, a classification tree was applied to the Carseats data set after converting Sales into a qualitative response variable. Now we will seek to predict Sales using regression trees and related approaches, treating the response as a quantitative variable.
   (a) Split the data set into a training set and a test set.
   (b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test error rate do you obtain?
   (c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test error rate?
   (d) Use the bagging approach in order to analyze this data. What test error rate do you obtain? Use the importance() function to determine which variables are most important.

349 334 8. Tree-Based Methods (e) Use random forests to analyze this data. What test error rate do you obtain? Use the importance() function to determine which ,thenum- m variables are most important. Describe the effect of each split, on the error rate ber of variables considered at obtained. data set which is part of the ISLR OJ 9. This problem involves the package. (a) Create a training set containing a random sample of 800 obser- vations, and a test set containing the remaining observations. as the response Purchase (b) Fit a tree to the training data, with Buy as predictors. Use the and the other variables except for summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have? (c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed. (d) Create a plot of the tree, and interpret the results. (e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate? function to the training set in order to cv.tree() (f) Apply the determine the optimal tree size. x -axis and cross-validated (g) Produce a plot with tree size on the y -axis. classification error rate on the (h) Which tree size corresponds to th e lowest cross-validated classi- fication error rate? (i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead en create a pruned tree with five to selection of a pruned tree, th terminal nodes. (j) Compare the training error rates between the pruned and un- pruned trees. Which is higher? (k) Compare the test error rates between the pruned and unpruned trees. Which is higher? Salary in the Hitters data set. 10. We now use boosting to predict (a) Remove the observations for whom the salary information is unknown, and then log-transform the salaries.

350 8.4 Exercises 335 (b) Create a training set consisting of the first 200 observations, and a test set consisting of the remaining observations. (c) Perform boosting on the training set with 1,000 trees for a range . Produce a plot with of values of the shrinkage parameter λ different shrinkage values on the x -axis and the corresponding training set MSE on the y -axis. (d) Produce a plot with different shrinkage values on the -axis and x the corresponding test set MSE on the y -axis. (e) Compare the test MSE of boosting to the test MSE that results from applying two of the regression approaches seen in Chapters 3 and 6. (f) Which variables appear to be the most important predictors in the boosted model? (g) Now apply bagging to the training set. What is the test set MSE for this approach? Caravan data set. 11. This question uses the (a) Create a training set consisting of the first 1,000 observations, and a test set consisting of the remaining observations. Purchase as the (b) Fit a boosting model to the training set with response and the other variables as predictors. Use 1,000 trees, and a shrinkage value of 0 . 01. Which predictors appear to be the most important? (c) Use the boosting model to predict the response on the test data. Predict that a person will make a purchase if the estimated prob- ability of purchase is greater than 20 %. Form a confusion ma- trix. What fraction of the people predicted to make a purchase do in fact make one? How does this compare with the results obtained from applying KNN or logistic regression to this data set? 12. Apply boosting, bagging, and random forests to a data set of your choice. Be sure to fit the models on a training set and to evaluate their performance on a test set. How accurate are the results compared to simple methods like linear or logistic regression? Which of these approaches yields the best performance?


352 9 Support Vector Machines support vector machine In this chapter, we discuss the (SVM), an approach for classification that was developed in the computer science community in the 1990s and that has grown in popularity since then. SVMs have been shown to perform well in a variety of settings, and are often considered one of the best “out of the box” classifiers. The support vector machine is a generalization of a simple and intu- maximal margin classifier itive classifier called the , which we introduce in Section 9.1. Though it is elegant and simple, we will see that this classifier unfortunately cannot be applied to most data sets, since it requires that the classes be separable by a linear boundary. In Section 9.2, we introduce the support vector classifier , an extension of the maximal margin classifier thatcanbeappliedinabroaderrangeo f cases. Section 9.3 introduces the support vector machine , which is a further extension of the support vec- tor classifier in order to accommodate non-linear class boundaries. Support vector machines are intended for the binary classification setting in which discuss extensions of support vector there are two classes; in Section 9.4 we machines to the case of more than two classes. In Section 9.5 we discuss een support vector machines and other statistical the close connections betw methods such as logistic regression. People often loosely refer to the maximal margin classifier, the support vector classifier, and the support vector machine as “support vector machines”. To avoid confusion, we will carefully distinguish between these three notions in this chapter. G. James et al., An Introduction to Statistical Learning: with Applications in R , 337 9, Springer Texts in Statistics 103, DOI 10.1007/978-1-4614-7138-7 © Springer Science+Business Media New York 2013

9.1 Maximal Margin Classifier

In this section, we define a hyperplane and introduce the concept of an optimal separating hyperplane.

9.1.1 What Is a Hyperplane?

In a p-dimensional space, a hyperplane is a flat affine subspace of dimension p − 1.¹ For instance, in two dimensions, a hyperplane is a flat one-dimensional subspace—in other words, a line. In three dimensions, a hyperplane is a flat two-dimensional subspace—that is, a plane. In p > 3 dimensions, it can be hard to visualize a hyperplane, but the notion of a (p − 1)-dimensional flat subspace still applies.

The mathematical definition of a hyperplane is quite simple. In two dimensions, a hyperplane is defined by the equation
$$\beta_0 + \beta_1 X_1 + \beta_2 X_2 = 0 \tag{9.1}$$
for parameters $\beta_0$, $\beta_1$, and $\beta_2$. When we say that (9.1) "defines" the hyperplane, we mean that any $X = (X_1, X_2)^T$ for which (9.1) holds is a point on the hyperplane. Note that (9.1) is simply the equation of a line, since indeed in two dimensions a hyperplane is a line.

Equation 9.1 can be easily extended to the p-dimensional setting:
$$\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p = 0 \tag{9.2}$$
defines a p-dimensional hyperplane, again in the sense that if a point $X = (X_1, X_2, \ldots, X_p)^T$ in p-dimensional space (i.e. a vector of length p) satisfies (9.2), then X lies on the hyperplane.

Now, suppose that X does not satisfy (9.2); rather,
$$\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p > 0. \tag{9.3}$$
Then this tells us that X lies to one side of the hyperplane. On the other hand, if
$$\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p < 0, \tag{9.4}$$
then X lies on the other side of the hyperplane. So we can think of the hyperplane as dividing p-dimensional space into two halves. One can easily determine on which side of the hyperplane a point lies by simply calculating the sign of the left hand side of (9.2). A hyperplane in two-dimensional space is shown in Figure 9.1.

¹ The word affine indicates that the subspace need not pass through the origin.
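As a minimal illustration of this last point, the short sketch below checks on which side of the hyperplane 1 + 2X1 + 3X2 = 0 from Figure 9.1 a few arbitrarily chosen points lie, by computing the sign of the left-hand side of (9.2):

# Hyperplane from Figure 9.1: 1 + 2*X1 + 3*X2 = 0
f <- function(x1, x2) 1 + 2 * x1 + 3 * x2

# The sign of f tells us on which side of the hyperplane a point lies
sign(f( 1,  1))   #  1: (1, 1) lies where 1 + 2X1 + 3X2 > 0 (the blue region in Fig. 9.1)
sign(f(-1, -1))   # -1: (-1, -1) lies where 1 + 2X1 + 3X2 < 0 (the purple region)
sign(f( 1, -1))   #  0: (1, -1) lies exactly on the hyperplane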

[Figure 9.1 plots X2 against X1, each ranging from −1.5 to 1.5.]

FIGURE 9.1. The hyperplane 1 + 2X1 + 3X2 = 0 is shown. The blue region is the set of points for which 1 + 2X1 + 3X2 > 0, and the purple region is the set of points for which 1 + 2X1 + 3X2 < 0.

9.1.2 Classification Using a Separating Hyperplane

Now suppose that we have an n × p data matrix X that consists of n training observations in p-dimensional space,
$$x_1 = \begin{pmatrix} x_{11} \\ \vdots \\ x_{1p} \end{pmatrix}, \quad \ldots, \quad x_n = \begin{pmatrix} x_{n1} \\ \vdots \\ x_{np} \end{pmatrix}, \tag{9.5}$$
and that these observations fall into two classes—that is, $y_1, \ldots, y_n \in \{-1, 1\}$, where −1 represents one class and 1 the other class. We also have a test observation, a p-vector of observed features $x^* = (x_1^* \; \ldots \; x_p^*)^T$. Our goal is to develop a classifier based on the training data that will correctly classify the test observation using its feature measurements. We have seen a number of approaches for this task, such as linear discriminant analysis and logistic regression in Chapter 4, and classification trees, bagging, and boosting in Chapter 8. We will now see a new approach that is based upon the concept of a separating hyperplane.

Suppose that it is possible to construct a hyperplane that separates the training observations perfectly according to their class labels. Examples of three such separating hyperplanes are shown in the left-hand panel of Figure 9.2. We can label the observations from the blue class as $y_i = 1$ and

[Figure 9.2 shows two scatterplots of X2 against X1, each axis ranging from −1 to 3.]

FIGURE 9.2. Left: There are two classes of observations, shown in blue and in purple, each of which has measurements on two variables. Three separating hyperplanes, out of many possible, are shown in black. Right: A separating hyperplane is shown in black. The blue and purple grid indicates the decision rule made by a classifier based on this separating hyperplane: a test observation that falls in the blue portion of the grid will be assigned to the blue class, and a test observation that falls into the purple portion of the grid will be assigned to the purple class.

those from the purple class as $y_i = -1$. Then a separating hyperplane has the property that
$$\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} > 0 \ \text{ if } y_i = 1, \tag{9.6}$$
and
$$\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} < 0 \ \text{ if } y_i = -1. \tag{9.7}$$
Equivalently, a separating hyperplane has the property that
$$y_i(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip}) > 0 \tag{9.8}$$
for all $i = 1, \ldots, n$.

If a separating hyperplane exists, we can use it to construct a very natural classifier: a test observation is assigned a class depending on which side of the hyperplane it is located. The right-hand panel of Figure 9.2 shows an example of such a classifier. That is, we classify the test observation $x^*$ based on the sign of $f(x^*) = \beta_0 + \beta_1 x_1^* + \beta_2 x_2^* + \cdots + \beta_p x_p^*$. If $f(x^*)$ is positive, then we assign the test observation to class 1, and if $f(x^*)$ is negative, then we assign it to class −1. We can also make use of the magnitude of $f(x^*)$. If $f(x^*)$ is far from zero, then this means that $x^*$ lies far from the hyperplane, and so we can be confident about our class assignment for $x^*$. On the other

356 9.1 Maximal Margin Classifier 341 ∗ ∗ f ) is close to zero, then x ( is located near the hyperplane, and so hand, if x ∗ x we are less certain about the class assignment for . Not surprisingly, and as we see in Figure 9.2, a classifier that is based on a separating hyperplane leads to a linear decision boundary. 9.1.3 The Maximal Margin Classifier In general, if our data can be perfectly separated using a hyperplane, then there will in fact exist an infinite number of such hyperplanes. This is because a given separating hyperplane can usually be shifted a tiny bit up or down, or rotated, without coming into contact with any of the observations. Three possible separating hyperplanes are shown in the left-hand panel of Figure 9.2. In order to construct a classifier based upon a separating hyperplane, we must have a reasonable way to decide which of the infinite possible separating hyperplanes to use. maximal margin hyperplane (also known as the A natural choice is the maximal optimal separating hyperplane ), which is the separating hyperplane that margin hyperplane is farthest from the training observations. That is, we can compute the optimal (perpendicular) distance from each training observation to a given separat- separating ing hyperplane; the smallest such distance is the minimal distance from the hyperplane observations to the hyperplane, and is known as the margin . The maximal margin margin hyperplane is the separating hyperplane for which the margin is largest—that is, it is the hyperplane that has the farthest minimum dis- tance to the training observations. We can then classify a test observation based on which side of the maximal margin hyperplane it lies. This is known as the maximal margin classifier . We hope that a classifier that has a large maximal margin on the training data will also have a large margin on the test data, margin classifier and hence will classify the test observa tions correctly. Although the maxi- mal margin classifier is often successful , it can also lead to overfitting when is large. p are the coefficients of the maximal margin hyperplane, ,...,β ,β If β 1 0 p ∗ then the maximal margin classifier classifies the test observation x based ∗ ∗ ∗ ∗ + + . + β x x β β β x )= + ... f ( on the sign of x p 1 0 2 2 1 p Figure 9.3 shows the maximal margin hyperplane on the data set of Figure 9.2. Comparing the right-hand panel of Figure 9.2 to Figure 9.3, we see that the maximal margin hyperplane shown in Figure 9.3 does in- deed result in a greater minimal dista nce between the observations and the separating hyperplane—that is, a larger margin. In a sense, the maximal margin hyperplane represents the mid-line of the widest “slab” that we can insert between the two classes. Examining Figure 9.3, we see that three training observations are equidis- tant from the maximal margin hyperplane and lie along the dashed lines indicating the width of the margin. These three observations are known as

357 342 9. Support Vector Machines 2 X −10123 −10123 X 1 There are two classes of observations, shown in blue and in pur- FIGURE 9.3. ple. The maximal margin hyperplane is shown as a solid line. The margin is the distance from the solid line to either of the dashed lines. The two blue points and the purple point that lie on the dashed lines are the support vectors, and the distance from those points to the margin is indicated by arrows. The purple and blue grid indicates the decision rule made by a classifier based on this separating hyperplane. , since they are vectors in support vectors -dimensional space (in Figure 9.3, p support p = 2) and they “support” the maximal margin hyperplane in the sense vector that if these points were moved slightly then the maximal margin hyper- plane would move as well. Interestingly, the maximal margin hyperplane depends directly on the support vectors, but not on the other observations: a movement to any of the other observations would not affect the separating hyperplane, provided that the observation’s movement does not cause it to cross the boundary set by the margin. The fact that the maximal margin hyperplane depends directly on only a small subset of the observations is an important property that will arise later in this chapter when we discuss the support vector classifier and support vector machines. 9.1.4 Construction of the Maximal Margin Classifier We now consider the task of constructing the maximal margin hyperplane p n training observations x based on a set of and associated ∈ R ,...,x n 1 . Briefly, the maximal margin hyperplane ,...,y } ∈{− 1 , 1 y class labels n 1 is the solution to the optimization problem

$$\underset{\beta_0, \beta_1, \ldots, \beta_p}{\text{maximize}} \quad M \tag{9.9}$$
$$\text{subject to} \quad \sum_{j=1}^{p} \beta_j^2 = 1, \tag{9.10}$$
$$y_i(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip}) \ge M \ \ \forall\, i = 1, \ldots, n. \tag{9.11}$$

This optimization problem (9.9)–(9.11) is actually simpler than it looks. First of all, the constraint in (9.11) that
$$y_i(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip}) \ge M \ \ \forall\, i = 1, \ldots, n$$
guarantees that each observation will be on the correct side of the hyperplane, provided that M is positive. (Actually, for each observation to be on the correct side of the hyperplane we would simply need $y_i(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip}) > 0$, so the constraint in (9.11) in fact requires that each observation be on the correct side of the hyperplane, with some cushion, provided that M is positive.)

Second, note that (9.10) is not really a constraint on the hyperplane, since if $\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} = 0$ defines a hyperplane, then so does $k(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip}) = 0$ for any $k \ne 0$. However, (9.10) adds meaning to (9.11); one can show that with this constraint the perpendicular distance from the i-th observation to the hyperplane is given by
$$y_i(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip}).$$
Therefore, the constraints (9.10) and (9.11) ensure that each observation is on the correct side of the hyperplane and at least a distance M from the hyperplane. Hence, M represents the margin of our hyperplane, and the optimization problem chooses $\beta_0, \beta_1, \ldots, \beta_p$ to maximize M. This is exactly the definition of the maximal margin hyperplane! The problem (9.9)–(9.11) can be solved efficiently, but details of this optimization are outside of the scope of this book.

9.1.5 The Non-separable Case

The maximal margin classifier is a very natural way to perform classification, if a separating hyperplane exists. However, as we have hinted, in many cases no separating hyperplane exists, and so there is no maximal margin classifier. In this case, the optimization problem (9.9)–(9.11) has no solution with M > 0. An example is shown in Figure 9.4. In this case, we cannot exactly separate the two classes. However, as we will see in the next section, we can extend the concept of a separating hyperplane in order to develop a hyperplane that almost separates the classes, using a so-called soft margin. The generalization of the maximal margin classifier to the non-separable case is known as the support vector classifier.
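As a numerical illustration of (9.9)–(9.11), the sketch below rests entirely on assumptions of my own: simulated separable data, the e1071 package, and the observation that its svm() function solves the soft-margin problem of Section 9.2, so that with a very large cost it behaves approximately like the maximal margin classifier. It recovers coefficients satisfying the normalization (9.10), reads off the margin M as the smallest value of the left-hand side of (9.11), and checks the claim of Section 9.1.3 that the solution depends only on the support vectors.

library(e1071)

set.seed(1)
# Simulated, linearly separable two-class data
x <- matrix(rnorm(40), ncol = 2)
y <- rep(c(-1, 1), each = 10)
x[y == 1, ] <- x[y == 1, ] + 3
dat <- data.frame(x1 = x[, 1], x2 = x[, 2], y = as.factor(y))

# A very large cost approximates the maximal margin classifier on separable data
fit <- svm(y ~ ., data = dat, kernel = "linear", cost = 1e5, scale = FALSE)

# Recover (beta0, beta1, beta2) from the fit and rescale so that
# beta1^2 + beta2^2 = 1, which is exactly the normalization (9.10)
w    <- drop(t(fit$coefs) %*% fit$SV)
beta <- c(-fit$rho, w) / sqrt(sum(w^2))

# Under (9.10), y_i * (beta0 + beta1*x_i1 + beta2*x_i2) is the signed distance of
# observation i from the hyperplane, so the smallest value is the margin M in (9.11)
dists <- y * (beta[1] + x %*% beta[2:3])
if (all(dists < 0)) { beta <- -beta; dists <- -dists }  # svm() may flip the sign convention
min(dists)                                              # the margin M

# Only the support vectors matter: nudging a non-support observation
# (without letting it cross the margin) leaves the solution essentially unchanged
fit$index                                   # indices of the support vectors
i <- setdiff(1:20, fit$index)[1]
dat2 <- dat; dat2[i, 1:2] <- dat2[i, 1:2] + 0.1
fit2 <- svm(y ~ ., data = dat2, kernel = "linear", cost = 1e5, scale = FALSE)
w2 <- drop(t(fit2$coefs) %*% fit2$SV)
rbind(original = c(-fit$rho, w), perturbed = c(-fit2$rho, w2))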

359 344 9. Support Vector Machines 2.0 1.5 1.0 2 X 0.5 0.0 −0.5 −1.0 0123 X 1 FIGURE 9.4. There are two classes of observations, shown in blue and in pur- ple. In this case, the two classes are not separable by a hyperplane, and so the maximal margin classifier cannot be used. 9.2 Support Vector Classifiers 9.2.1 Overview of the Support Vector Classifier In Figure 9.4, we see that observations that belong to two classes are not necessarily separable by a hyperplane . In fact, even if a separating hyper- plane does exist, then there are instances in which a classifier based on a separating hyperplane might not be desirable. A classifier based on a separating hyperplane will necessarily perfectly classify all of the training observations; this can lead to sensitivity to individual observations. An ex- ample is shown in Figure 9.5. The addition of a single observation in the right-hand panel of Figure 9.5 leads to a dramatic change in the maxi- mal margin hyperplane. The resulting maximal margin hyperplane is not satisfactory—for one thing, it has only a tiny margin. This is problematic because as discussed previously, the distance of an observation from the hyperplane can be seen as a measure of our confidence that the obser- vation was correctly classified. Moreover, the fact that the maximal mar- gin hyperplane is extremely sensitive to a change in a single observation suggests that it may have overfit the training data. In this case, we might be willing to consider a classifier based on a hy- perplane that does not perfectly separate the two classes, in the interest of

360 9.2 Support Vector Classifiers 345 2 2 X X −10123 −10123 −10123 3 2 1 0 −1 X X 1 1 Left: Two classes of observations are shown in blue and in FIGURE 9.5. Right: purple, along with the maximal margin hyperplane. An additional blue observation has been added, leading to a dramatic shift in the maximal margin hyperplane shown as a solid line. The dashed line indicates the maximal margin hyperplane that was obtained in the absence of this additional point. • Greater robustness to individual observations, and • Better classification of most of the training observations. That is, it could be worthwhile to misclassify a few training observations in order to do a better job in classifying the remaining observations. The , sometimes called a soft margin classifier , support vector classifier support does exactly this. Rather than seekin g the largest possible margin so that vector classifier every observation is not only on the correct side of the hyperplane but soft margin also on the correct side of the margin, we instead allow some observations classifier to be on the incorrect side of the margin, or even the incorrect side of the hyperplane. (The margin is soft because it can be violated by some of the training observations.) An example is shown in the left-hand panel of Figure 9.6. Most of the observations are on the correct side of the margin. However, a small subset of the observations are on the wrong side of the margin. An observation can be not only on the wrong side of the margin, but also on the wrong side of the hyperplane. In fact, when there is no separating hyperplane, such a situation is inevitable. Observations on the wrong side of the hyperplane correspond to training observations that are misclassified by the support vector classifier. The right-hand panel of Figure 9.6 illustrates such a scenario. 9.2.2 Details of the Support Vector Classifier The support vector classifier classifies a test observation depending on which side of a hyperplane it lies. The hyperplane is chosen to correctly

[Figure 9.6 shows two panels plotting X2 against X1, with observations numbered 1 to 10 in the left panel and 1 to 12 in the right panel.]

FIGURE 9.6. Left: A support vector classifier was fit to a small data set. The hyperplane is shown as a solid line and the margins are shown as dashed lines. Purple observations: Observations 3, 4, 5, and 6 are on the correct side of the margin, observation 2 is on the margin, and observation 1 is on the wrong side of the margin. Blue observations: Observations 7 and 10 are on the correct side of the margin, observation 9 is on the margin, and observation 8 is on the wrong side of the margin. No observations are on the wrong side of the hyperplane. Right: Same as left panel with two additional points, 11 and 12. These two observations are on the wrong side of the hyperplane and the wrong side of the margin.

separate most of the training observations into the two classes, but may misclassify a few observations. It is the solution to the optimization problem
$$\underset{\beta_0, \beta_1, \ldots, \beta_p,\, \epsilon_1, \ldots, \epsilon_n}{\text{maximize}} \quad M \tag{9.12}$$
$$\text{subject to} \quad \sum_{j=1}^{p} \beta_j^2 = 1, \tag{9.13}$$
$$y_i(\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip}) \ge M(1 - \epsilon_i), \tag{9.14}$$
$$\epsilon_i \ge 0, \quad \sum_{i=1}^{n} \epsilon_i \le C, \tag{9.15}$$
where C is a nonnegative tuning parameter. As in (9.11), M is the width of the margin; we seek to make this quantity as large as possible. In (9.14), $\epsilon_1, \ldots, \epsilon_n$ are slack variables that allow individual observations to be on the wrong side of the margin or the hyperplane; we will explain them in greater detail momentarily. Once we have solved (9.12)–(9.15), we classify a test observation $x^*$ as before, by simply determining on which side of the hyperplane it lies. That is, we classify the test observation based on the sign of $f(x^*) = \beta_0 + \beta_1 x_1^* + \cdots + \beta_p x_p^*$.

The problem (9.12)–(9.15) seems complex, but insight into its behavior can be made through a series of simple observations presented below. First of all, the slack variable $\epsilon_i$ tells us where the i-th observation is located, relative to the hyperplane and relative to the margin. If $\epsilon_i = 0$ then the i-th

362 9.2 Support Vector Classifiers 347 observation is on the correct side of the margin, as we saw in Section 9.1.4.  If 0 then the i th observation is on the wrong side of the margin, and > i violated the margin. If  i we say that the th observation has > 1thenit i is on the wrong side of the hyperplane. . In (9.14), C bounds We now consider the role of the tuning parameter C ’s, and so it determines the number and severity of the vio-  the sum of the i lations to the margin (and to the hyperplane) that we will tolerate. We can C as a budget for the amount that the margin can be violated think of n by the = 0 then there is no budget for violations to observations. If C = =  =0,inwhichcase ...  the margin, and it must be the case that 1 n (9.12)–(9.15) simply amounts to the maximal margin hyperplane optimiza- tion problem (9.9)–(9.11). (Of course, a maximal margin hyperplane exists only if the two classes are separable.) For 0nomorethan C observa- C> tions can be on the wrong side of the hyperplane, because if an observation > 1, and (9.14) requires  is on the wrong side of the hyperplane then i ∑ n that C  ≤ C . As the budget increases, we become more tolerant of i i =1 violations to the margin, and so the margin will widen. Conversely, as C decreases, we become less tolerant of violations to the margin and so the margin narrows. An example in shown in Figure 9.7. In practice, is treated as a tuning parameter that is generally chosen via C cross-validation. As with the tuning parameters that we have seen through- out this book, C controls the bias-variance trade-off of the statistical learn- ing technique. When C is small, we seek narrow margins that are rarely violated; this amounts to a classifier that is highly fit to the data, which may have low bias but high variance. On the other hand, when C is larger, the margin is wider and we allow more violations to it; this amounts to fitting the data less hard and obtaining a classifier that is potentially more biased but may have lower variance. The optimization problem (9.12)–(9.15) has a very interesting property: it turns out that only observations that either lie on the margin or that violate the margin will affect the hyp erplane, and hence the classifier ob- tained. In other words, an observation that lies strictly on the correct side of the margin does not affect the support vector classifier! Changing the position of that observation would not change the classifier at all, provided that its position remains on the correct side of the margin. Observations that lie directly on the margin, or on the wrong side of the margin for their class, are known as support vectors . These observations do affect the support vector classifier. The fact that only support vectors affect the classifier is in line with our previous assertion that C controls the bias-variance trade-off of the support vector classifier. When the tuning parameter C is large, then the margin is wide, many observations violate the margin, and so there are many support vectors. In this case, many observations are involved in determining the hyperplane. The top left panel in Figure 9.7 illustrates this setting: this classifier has low variance (since many observations are support vectors)

363 348 9. Support Vector Machines 2 2 X X −3−2−10123 −3−2−10123 1 2 −1 0 −1 2 1 0 X X 1 1 3 3 2 2 1 1 2 2 0 0 X X −1 −1 −2 −2 −3 −3 1 2 −1 −1 0 0 1 2 X X 1 1 A support vector classifier was fit using four different values of the FIGURE 9.7. C in (9.12)–(9.15). The largest value of C tuning parameter was used in the top left panel, and smaller values were used in the top right, bottom left, and bottom right panels. When C is large, then there is a high tolerance for observations being on the wrong side of the margin, and so the margin will be large. As C decreases, the tolerance for observations being on the wrong side of the margin decreases, and the margin narrows. but potentially high bias. In contrast, if C is small, then there will be fewer support vectors and hence the resulting classifier will have low bias but high variance. The bottom right panel in Figure 9.7 illustrates this setting, with only eight support vectors. The fact that the support vector classifier’s decision rule is based only on a potentially small subset of the training observations (the support vec- tors) means that it is quite robust to the behavior of observations that are far away from the hyperplane. This property is distinct from some of the other classification methods th at we have seen in preceding chapters, such as linear discriminant analysis. Recall that the LDA classification rule

364 9.3 Support Vector Machines 349 4 4 2 2 2 2 X X 0 0 −2 −2 −4 −4 0 4 −2 2 −4 −2 0 2 −4 4 X X 1 1 FIGURE 9.8. Left: The observations fall into two classes, with a non-linear boundary between them. Right: The support vector classifier seeks a linear bound- ary, and consequently performs very poorly. depends on the mean of of the observations within each class, as well as all all of the observations. the within-class covariance matrix computed using In contrast, logistic regression, unlike LDA, has very low sensitivity to ob- servations far from the decision boundary. In fact we will see in Section 9.5 that the support vector classifier and logistic regression are closely related. 9.3 Support Vector Machines We first discuss a general mechanism for converting a linear classifier into one that produces non-linear decision boundaries. We then introduce the support vector machine, which does this in an automatic way. 9.3.1 Classification with Non-linear Decision Boundaries The support vector classifier is a natural approach for classification in the two-class setting, if the boundary between the two classes is linear. How- ever, in practice we are sometimes faced with non-linear class boundaries. For instance, consider the data in the left-hand panel of Figure 9.8. It is clear that a support vector classifie r or any linear classifier will perform poorly here. Indeed, the support vector classifier shown in the right-hand panel of Figure 9.8 is useless here. In Chapter 7, we are faced with an analogous situation. We see there that the performance of linear regre ssion can suffer when there is a non- linear relationship between the predi ctors and the outcome. In that case, we consider enlarging the feature space using functions of the predictors,

such as quadratic and cubic terms, in order to address this non-linearity. In the case of the support vector classifier, we could address the problem of possibly non-linear boundaries between classes in a similar way, by enlarging the feature space using quadratic, cubic, and even higher-order polynomial functions of the predictors. For instance, rather than fitting a support vector classifier using p features
$$X_1, X_2, \ldots, X_p,$$
we could instead fit a support vector classifier using 2p features
$$X_1, X_1^2, X_2, X_2^2, \ldots, X_p, X_p^2.$$
Then (9.12)–(9.15) would become
$$\underset{\beta_0, \beta_{11}, \beta_{12}, \ldots, \beta_{p1}, \beta_{p2},\, \epsilon_1, \ldots, \epsilon_n}{\text{maximize}} \quad M \tag{9.16}$$
$$\text{subject to} \quad y_i\left(\beta_0 + \sum_{j=1}^{p} \beta_{j1} x_{ij} + \sum_{j=1}^{p} \beta_{j2} x_{ij}^2\right) \ge M(1 - \epsilon_i),$$
$$\sum_{i=1}^{n} \epsilon_i \le C, \quad \epsilon_i \ge 0, \quad \sum_{j=1}^{p} \sum_{k=1}^{2} \beta_{jk}^2 = 1.$$

Why does this lead to a non-linear decision boundary? In the enlarged feature space, the decision boundary that results from (9.16) is in fact linear. But in the original feature space, the decision boundary is of the form q(x) = 0, where q is a quadratic polynomial, and its solutions are generally non-linear. One might additionally want to enlarge the feature space with higher-order polynomial terms, or with interaction terms of the form $X_j X_{j'}$ for $j \ne j'$. Alternatively, other functions of the predictors could be considered rather than polynomials. It is not hard to see that there are many possible ways to enlarge the feature space, and that unless we are careful, we could end up with a huge number of features. Then computations would become unmanageable. The support vector machine, which we present next, allows us to enlarge the feature space used by the support vector classifier in a way that leads to efficient computations.

9.3.2 The Support Vector Machine

The support vector machine (SVM) is an extension of the support vector classifier that results from enlarging the feature space in a specific way, using kernels. We will now discuss this extension, the details of which are somewhat complex and beyond the scope of this book. However, the main idea is described in Section 9.3.1: we may want to enlarge our feature space

in order to accommodate a non-linear boundary between the classes. The kernel approach that we describe here is simply an efficient computational approach for enacting this idea.

We have not discussed exactly how the support vector classifier is computed because the details become somewhat technical. However, it turns out that the solution to the support vector classifier problem (9.12)–(9.15) involves only the inner products of the observations (as opposed to the observations themselves). The inner product of two r-vectors a and b is defined as $\langle a, b \rangle = \sum_{i=1}^{r} a_i b_i$. Thus the inner product of two observations $x_i$, $x_{i'}$ is given by
$$\langle x_i, x_{i'} \rangle = \sum_{j=1}^{p} x_{ij} x_{i'j}. \tag{9.17}$$

It can be shown that

• The linear support vector classifier can be represented as
$$f(x) = \beta_0 + \sum_{i=1}^{n} \alpha_i \langle x, x_i \rangle, \tag{9.18}$$
where there are n parameters $\alpha_i$, $i = 1, \ldots, n$, one per training observation.

• To estimate the parameters $\alpha_1, \ldots, \alpha_n$ and $\beta_0$, all we need are the $\binom{n}{2}$ inner products $\langle x_i, x_{i'} \rangle$ between all pairs of training observations. (The notation $\binom{n}{2}$ means $n(n-1)/2$, and gives the number of pairs among a set of n items.)

Notice that in (9.18), in order to evaluate the function f(x), we need to compute the inner product between the new point x and each of the training points $x_i$. However, it turns out that $\alpha_i$ is nonzero only for the support vectors in the solution—that is, if a training observation is not a support vector, then its $\alpha_i$ equals zero. So if S is the collection of indices of these support points, we can rewrite any solution function of the form (9.18) as
$$f(x) = \beta_0 + \sum_{i \in S} \alpha_i \langle x, x_i \rangle, \tag{9.19}$$
which typically involves far fewer terms than in (9.18).²

To summarize, in representing the linear classifier f(x), and in computing its coefficients, all we need are inner products.

Now suppose that every time the inner product (9.17) appears in the representation (9.18), or in a calculation of the solution for the support

² By expanding each of the inner products in (9.19), it is easy to see that f(x) is a linear function of the coordinates of x. Doing so also establishes the correspondence between the $\alpha_i$ and the original parameters $\beta_j$.

vector classifier, we replace it with a generalization of the inner product of the form
\[
K(x_i,x_{i'}), \qquad (9.20)
\]
where $K$ is some function that we will refer to as a kernel. A kernel is a function that quantifies the similarity of two observations. For instance, we could simply take
\[
K(x_i,x_{i'})=\sum_{j=1}^p x_{ij}x_{i'j}, \qquad (9.21)
\]
which would just give us back the support vector classifier. Equation 9.21 is known as a linear kernel because the support vector classifier is linear in the features; the linear kernel essentially quantifies the similarity of a pair of observations using Pearson (standard) correlation. But one could instead choose another form for (9.20). For instance, one could replace every instance of $\sum_{j=1}^p x_{ij}x_{i'j}$ with the quantity
\[
K(x_i,x_{i'})=\Big(1+\sum_{j=1}^p x_{ij}x_{i'j}\Big)^d. \qquad (9.22)
\]
This is known as a polynomial kernel of degree $d$, where $d$ is a positive integer. Using such a kernel with $d>1$, instead of the standard linear kernel (9.21), in the support vector classifier algorithm leads to a much more flexible decision boundary. It essentially amounts to fitting a support vector classifier in a higher-dimensional space involving polynomials of degree $d$, rather than in the original feature space. When the support vector classifier is combined with a non-linear kernel such as (9.22), the resulting classifier is known as a support vector machine. Note that in this case the (non-linear) function has the form
\[
f(x)=\beta_0+\sum_{i\in\mathcal{S}} \alpha_i K(x,x_i). \qquad (9.23)
\]
The left-hand panel of Figure 9.9 shows an example of an SVM with a polynomial kernel applied to the non-linear data from Figure 9.8. The fit is a substantial improvement over the linear support vector classifier. When $d=1$, then the SVM reduces to the support vector classifier seen earlier in this chapter.

The polynomial kernel shown in (9.22) is one example of a possible non-linear kernel, but alternatives abound. Another popular choice is the radial kernel, which takes the form
\[
K(x_i,x_{i'})=\exp\Big(-\gamma\sum_{j=1}^p (x_{ij}-x_{i'j})^2\Big). \qquad (9.24)
\]
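The three kernels just defined are simple enough to evaluate directly. The sketch below does so for two hypothetical observations; the particular vectors, the degree d = 3, and the value gamma = 0.5 are illustrative choices, not values taken from the text.

> xi  = c(1, 2, 3)
> xip = c(2, 0, 1)
> sum(xi * xip)                    # linear kernel (9.21)
> (1 + sum(xi * xip))^3            # polynomial kernel of degree 3 (9.22)
> exp(-0.5 * sum((xi - xip)^2))    # radial kernel with gamma = 0.5 (9.24)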

FIGURE 9.9. [Two panels plotting $X_2$ against $X_1$.] Left: An SVM with a polynomial kernel of degree 3 is applied to the non-linear data from Figure 9.8, resulting in a far more appropriate decision rule. Right: An SVM with a radial kernel is applied. In this example, either kernel is capable of capturing the decision boundary.

In (9.24), $\gamma$ is a positive constant. The right-hand panel of Figure 9.9 shows an example of an SVM with a radial kernel on this non-linear data; it also does a good job in separating the two classes.

How does the radial kernel (9.24) actually work? If a given test observation $x^*=(x_1^*\ \ldots\ x_p^*)^T$ is far from a training observation $x_i$ in terms of Euclidean distance, then $\sum_{j=1}^p (x_j^*-x_{ij})^2$ will be large, and so $K(x^*,x_i)=\exp(-\gamma\sum_{j=1}^p (x_j^*-x_{ij})^2)$ will be very tiny. This means that in (9.23), $x_i$ will play virtually no role in $f(x^*)$. Recall that the predicted class label for the test observation $x^*$ is based on the sign of $f(x^*)$. In other words, training observations that are far from $x^*$ will play essentially no role in the predicted class label for $x^*$. This means that the radial kernel has very local behavior, in the sense that only nearby training observations have an effect on the class label of a test observation.

What is the advantage of using a kernel rather than simply enlarging the feature space using functions of the original features, as in (9.16)? One advantage is computational, and it amounts to the fact that using kernels, one need only compute $K(x_i,x_{i'})$ for all $\binom{n}{2}$ distinct pairs $i,i'$. This can be done without explicitly working in the enlarged feature space. This is important because in many applications of SVMs, the enlarged feature space is so large that computations are intractable. For some kernels, such as the radial kernel (9.24), the feature space is implicit and infinite-dimensional, so we could never do the computations there anyway!
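The local behavior of the radial kernel can be seen with a few lines of arithmetic. In this sketch the test point, the two training points, and the value gamma = 1 are hypothetical; the point is only that the kernel is near one for a nearby training observation and essentially zero for a distant one, so the distant observation contributes almost nothing to (9.23).

> rbf = function(a, b, gamma = 1) exp(-gamma * sum((a - b)^2))
> xstar = c(0, 0)
> rbf(xstar, c(0.1, 0.2))    # nearby training observation: kernel close to 1
> rbf(xstar, c(4, 4))        # distant training observation: kernel essentially 0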

FIGURE 9.10. [Two panels of ROC curves, plotting true positive rate against false positive rate; legend: support vector classifier, LDA, and SVMs with $\gamma=10^{-3}$, $10^{-2}$, $10^{-1}$.] ROC curves for the Heart data training set. Left: The support vector classifier and LDA are compared. Right: The support vector classifier is compared to an SVM using a radial basis kernel with $\gamma=10^{-3}$, $10^{-2}$, and $10^{-1}$.

9.3.3 An Application to the Heart Disease Data

In Chapter 8 we apply decision trees and related methods to the Heart data. The aim is to use 13 predictors such as Age, Sex, and Chol in order to predict whether an individual has heart disease. We now investigate how an SVM compares to LDA on this data. The data consist of 297 subjects, which we randomly split into 207 training and 90 test observations.

We first fit LDA and the support vector classifier to the training data. Note that the support vector classifier is equivalent to a SVM using a polynomial kernel of degree $d=1$. The left-hand panel of Figure 9.10 displays ROC curves (described in Section 4.4.3) for the training set predictions for both LDA and the support vector classifier. Both classifiers compute scores of the form $\hat f(X)=\hat\beta_0+\hat\beta_1X_1+\hat\beta_2X_2+\ldots+\hat\beta_pX_p$ for each observation. For any given cutoff $t$, we classify observations into the heart disease or no heart disease categories depending on whether $\hat f(X)<t$ or $\hat f(X)\ge t$. The ROC curve is obtained by forming these predictions and computing the false positive and true positive rates over a range of values of $t$. In this instance both LDA and the support vector classifier perform well on the training data. The right-hand panel of Figure 9.10 displays training ROC curves for SVMs with a radial kernel, for various values of $\gamma$; as $\gamma$ increases and the fit becomes more non-linear, the training ROC curves improve. However, these are training error rates, which can be misleading; Figure 9.11 displays ROC curves computed on the 90 test observations instead.
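The Heart data are not bundled with an R package; the following sketch assumes they have been downloaded from the book's website as Heart.csv and that the binary response column is named AHD (both the file name and the column name are assumptions), with the e1071 and ROCR packages installed. It shows one way curves like those in the right-hand panels of Figures 9.10 and 9.11 could be produced from SVM decision values, not the exact code used for the figures.

> library(e1071)
> library(ROCR)
> Heart = na.omit(read.csv("Heart.csv"))    # assumed file name
> Heart$AHD = as.factor(Heart$AHD)          # assumed response column
> set.seed(1)
> train = sample(nrow(Heart), 207)
> fit = svm(AHD ~ ., data=Heart[train,], kernel="radial", gamma=1e-2,
+           decision.values=TRUE)
> scores = attributes(predict(fit, Heart[-train,],
+           decision.values=TRUE))$decision.values
> perf = performance(prediction(scores, Heart$AHD[-train]), "tpr", "fpr")
> plot(perf)    # test-set ROC curve for this value of gamma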

FIGURE 9.11. [Two panels of ROC curves, plotting true positive rate against false positive rate; legend: support vector classifier, LDA, and SVMs with $\gamma=10^{-3}$, $10^{-2}$, $10^{-1}$.] ROC curves for the test set of the Heart data. Left: The support vector classifier and LDA are compared. Right: The support vector classifier is compared to an SVM using a radial basis kernel with $\gamma=10^{-3}$, $10^{-2}$, and $10^{-1}$.

In the left-hand panel of Figure 9.11, the support vector classifier appears to have a small advantage over LDA (although these differences are not statistically significant). In the right-hand panel, the SVM using $\gamma=10^{-1}$, which showed the best results on the training data, produces the worst estimates on the test data. This is once again evidence that while a more flexible method will often produce lower training error rates, this does not necessarily lead to improved performance on test data. The SVMs with $\gamma=10^{-2}$ and $\gamma=10^{-3}$ perform comparably to the support vector classifier, and all three outperform the SVM with $\gamma=10^{-1}$.

9.4 SVMs with More than Two Classes

So far, our discussion has been limited to the case of binary classification: that is, classification in the two-class setting. How can we extend SVMs to the more general case where we have some arbitrary number of classes? It turns out that the concept of separating hyperplanes upon which SVMs are based does not lend itself naturally to more than two classes. Though a number of proposals for extending SVMs to the $K$-class case have been made, the two most popular are the one-versus-one and one-versus-all approaches. We briefly discuss those two approaches here.

9.4.1 One-Versus-One Classification

Suppose that we would like to perform classification using SVMs, and there are $K>2$ classes. A one-versus-one or all-pairs approach constructs $\binom{K}{2}$ SVMs, each of which compares a pair of classes. For example, one such SVM might compare the $k$th class, coded as $+1$, to the $k'$th class, coded as $-1$. We classify a test observation using each of the $\binom{K}{2}$ classifiers, and we tally the number of times that the test observation is assigned to each of the $K$ classes. The final classification is performed by assigning the test observation to the class to which it was most frequently assigned in these $\binom{K}{2}$ pairwise classifications; a coded sketch of this voting scheme appears below.
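The following sketch spells out the one-versus-one voting scheme by hand for a three-class toy problem, purely to make the tallying concrete. The simulated data, the linear kernel, and the class structure are all hypothetical choices; in practice the svm() function in the e1071 package (Section 9.6.4) carries out one-versus-one classification automatically.

> library(e1071)
> set.seed(1)
> x = matrix(rnorm(150*2), ncol=2)
> y = rep(1:3, each=50)
> x[y==2,] = x[y==2,] + 3
> x[y==3,1] = x[y==3,1] - 3
> votes = matrix(0, nrow(x), 3)
> for (k in 1:2) for (kp in (k+1):3) {
+   idx = y %in% c(k, kp)
+   dk  = data.frame(x=x[idx,], y=as.factor(y[idx]))
+   fit = svm(y ~ ., data=dk, kernel="linear")
+   pred = as.numeric(as.character(predict(fit, data.frame(x=x))))
+   votes[cbind(seq_len(nrow(x)), pred)] = votes[cbind(seq_len(nrow(x)), pred)] + 1
+ }
> yhat = max.col(votes)    # assign each observation to its most-voted class
> table(yhat, y)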

9.4.2 One-Versus-All Classification

The one-versus-all approach is an alternative procedure for applying SVMs in the case of $K>2$ classes. We fit $K$ SVMs, each time comparing one of the $K$ classes to the remaining $K-1$ classes. Let $\beta_{0k},\beta_{1k},\ldots,\beta_{pk}$ denote the parameters that result from fitting an SVM comparing the $k$th class (coded as $+1$) to the others (coded as $-1$). Let $x^*$ denote a test observation. We assign the observation to the class for which $\beta_{0k}+\beta_{1k}x_1^*+\beta_{2k}x_2^*+\ldots+\beta_{pk}x_p^*$ is largest, as this amounts to a high level of confidence that the test observation belongs to the $k$th class rather than to any of the other classes.

9.5 Relationship to Logistic Regression

When SVMs were first introduced in the mid-1990s, they made quite a splash in the statistical and machine learning communities. This was due in part to their good performance and good marketing, and also to the fact that the underlying approach seemed both novel and mysterious. The idea of finding a hyperplane that separates the data as well as possible, while allowing some violations to this separation, seemed distinctly different from classical approaches for classification, such as logistic regression and linear discriminant analysis. Moreover, the idea of using a kernel to expand the feature space in order to accommodate non-linear class boundaries appeared to be a unique and valuable characteristic.

However, since that time, deep connections between SVMs and other more classical statistical methods have emerged. It turns out that one can rewrite the criterion (9.12)–(9.15) for fitting the support vector classifier $f(X)=\beta_0+\beta_1X_1+\ldots+\beta_pX_p$ as
\[
\underset{\beta_0,\beta_1,\ldots,\beta_p}{\text{minimize}}\;\Big\{\sum_{i=1}^n \max\big[0,\,1-y_if(x_i)\big]+\lambda\sum_{j=1}^p \beta_j^2\Big\}, \qquad (9.25)
\]

where $\lambda$ is a nonnegative tuning parameter. When $\lambda$ is large then $\beta_1,\ldots,\beta_p$ are small, more violations to the margin are tolerated, and a low-variance but high-bias classifier will result. When $\lambda$ is small then few violations to the margin will occur; this amounts to a high-variance but low-bias classifier. Thus, a small value of $\lambda$ in (9.25) amounts to a small value of $C$ in (9.15). Note that the $\lambda\sum_{j=1}^p \beta_j^2$ term in (9.25) is the ridge penalty term from Section 6.2.1, and plays a similar role in controlling the bias-variance trade-off for the support vector classifier.

Now (9.25) takes the "Loss + Penalty" form that we have seen repeatedly throughout this book:
\[
\underset{\beta_0,\beta_1,\ldots,\beta_p}{\text{minimize}}\;\big\{L(\mathbf{X},\mathbf{y},\beta)+\lambda P(\beta)\big\}. \qquad (9.26)
\]
In (9.26), $L(\mathbf{X},\mathbf{y},\beta)$ is some loss function quantifying the extent to which the model, parametrized by $\beta$, fits the data $(\mathbf{X},\mathbf{y})$, and $P(\beta)$ is a penalty function on the parameter vector $\beta$ whose effect is controlled by a nonnegative tuning parameter $\lambda$. For instance, ridge regression and the lasso both take this form with
\[
L(\mathbf{X},\mathbf{y},\beta)=\sum_{i=1}^n\Big(y_i-\beta_0-\sum_{j=1}^p x_{ij}\beta_j\Big)^2
\]
and with $P(\beta)=\sum_{j=1}^p \beta_j^2$ for ridge regression and $P(\beta)=\sum_{j=1}^p |\beta_j|$ for the lasso. In the case of (9.25) the loss function instead takes the form
\[
L(\mathbf{X},\mathbf{y},\beta)=\sum_{i=1}^n \max\big[0,\,1-y_i(\beta_0+\beta_1x_{i1}+\ldots+\beta_px_{ip})\big].
\]
This is known as hinge loss, and is depicted in Figure 9.12. However, it turns out that the hinge loss function is closely related to the loss function used in logistic regression, also shown in Figure 9.12.

An interesting characteristic of the support vector classifier is that only support vectors play a role in the classifier obtained; observations on the correct side of the margin do not affect it. This is due to the fact that the loss function shown in Figure 9.12 is exactly zero for observations for which $y_i(\beta_0+\beta_1x_{i1}+\ldots+\beta_px_{ip})\ge 1$; these correspond to observations that are on the correct side of the margin.³ In contrast, the loss function for logistic regression shown in Figure 9.12 is not exactly zero anywhere. But it is very small for observations that are far from the decision boundary. Due to the similarities between their loss functions, logistic regression and the support vector classifier often give very similar results. When the classes are well separated, SVMs tend to behave better than logistic regression; in more overlapping regimes, logistic regression is often preferred.

³ With this hinge-loss plus penalty representation, the margin corresponds to the value one, and the width of the margin is determined by $\sum \beta_j^2$.
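The two loss functions are easy to plot side by side, in the spirit of Figure 9.12. This is only a sketch: the grid of values and the use of the (unscaled) log-likelihood-based loss log(1 + exp(-y f)) for logistic regression are illustrative choices, not the exact construction behind the figure.

> yf = seq(-6, 2, length.out=200)     # values of y_i * f(x_i)
> hinge = pmax(0, 1 - yf)             # SVM hinge loss
> logistic = log(1 + exp(-yf))        # logistic regression loss
> plot(yf, hinge, type="l", xlab="y * f(x)", ylab="Loss")
> lines(yf, logistic, lty=2)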

FIGURE 9.12. [A single panel plotting loss against $y_i(\beta_0+\beta_1x_{i1}+\ldots+\beta_px_{ip})$; legend: SVM loss, logistic regression loss.] The SVM and logistic regression loss functions are compared, as a function of $y_i(\beta_0+\beta_1x_{i1}+\ldots+\beta_px_{ip})$. When $y_i(\beta_0+\beta_1x_{i1}+\ldots+\beta_px_{ip})$ is greater than 1, then the SVM loss is zero, since this corresponds to an observation that is on the correct side of the margin. Overall, the two loss functions have quite similar behavior.

When the support vector classifier and SVM were first introduced, it was thought that the tuning parameter $C$ in (9.15) was an unimportant "nuisance" parameter that could be set to some default value, like 1. However, the "Loss + Penalty" formulation (9.25) for the support vector classifier indicates that this is not the case. The choice of tuning parameter is very important and determines the extent to which the model underfits or overfits the data, as illustrated, for example, in Figure 9.7.

We have established that the support vector classifier is closely related to logistic regression and other preexisting statistical methods. Is the SVM unique in its use of kernels to enlarge the feature space to accommodate non-linear class boundaries? The answer to this question is "no". We could just as well perform logistic regression or many of the other classification methods seen in this book using non-linear kernels; this is closely related to some of the non-linear approaches seen in Chapter 7. However, for historical reasons, the use of non-linear kernels is much more widespread in the context of SVMs than in the context of logistic regression or other methods.

Though we have not addressed it here, there is in fact an extension of the SVM for regression (i.e. for a quantitative rather than a qualitative response), called support vector regression. In Chapter 3, we saw that least squares regression seeks coefficients $\beta_0,\beta_1,\ldots,\beta_p$ such that the sum of squared residuals is as small as possible. (Recall from Chapter 3 that residuals are defined as $y_i-\beta_0-\beta_1x_{i1}-\cdots-\beta_px_{ip}$.) Support vector regression instead seeks coefficients that minimize a different type of loss, where only residuals larger in absolute value than some positive constant contribute to the loss function. This is an extension of the margin used in support vector classifiers to the regression setting.
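Support vector regression can be tried out with the same e1071 package used in the lab: when the response passed to svm() is numeric rather than a factor, a regression model is fit. The data below are hypothetical, and the default kernel settings are used simply to keep the sketch short.

> library(e1071)
> set.seed(1)
> xr = runif(100, -2, 2)
> yr = sin(2*xr) + rnorm(100, sd=0.2)
> svr.fit = svm(yr ~ xr, data=data.frame(xr=xr, yr=yr), kernel="radial")
> plot(xr, yr)
> points(xr, predict(svr.fit), col="red", pch=20)   # fitted regression curve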

9.6 Lab: Support Vector Machines

We use the e1071 library in R to demonstrate the support vector classifier and the SVM. Another option is the LiblineaR library, which is useful for very large linear problems.

9.6.1 Support Vector Classifier

The e1071 library contains implementations for a number of statistical learning methods. In particular, the svm() function can be used to fit a support vector classifier when the argument kernel="linear" is used. This function uses a slightly different formulation from (9.14) and (9.25) for the support vector classifier. A cost argument allows us to specify the cost of a violation to the margin. When the cost argument is small, then the margins will be wide and many support vectors will be on the margin or will violate the margin. When the cost argument is large, then the margins will be narrow and there will be few support vectors on the margin or violating the margin.

We now use the svm() function to fit the support vector classifier for a given value of the cost parameter. Here we demonstrate the use of this function on a two-dimensional example so that we can plot the resulting decision boundary. We begin by generating the observations, which belong to two classes.

> set.seed(1)
> x=matrix(rnorm(20*2), ncol=2)
> y=c(rep(-1,10), rep(1,10))
> x[y==1,]=x[y==1,] + 1

We begin by checking whether the classes are linearly separable.

> plot(x, col=(3-y))

They are not. Next, we fit the support vector classifier. Note that in order for the svm() function to perform classification (as opposed to SVM-based regression), we must encode the response as a factor variable. We now create a data frame with the response coded as a factor.

> dat=data.frame(x=x, y=as.factor(y))
> library(e1071)
> svmfit=svm(y ~ ., data=dat, kernel="linear", cost=10, scale=FALSE)

375 360 9. Support Vector Machines scale=FALSE tells the function not to scale each feature The argument svm() to have mean zero or standard deviation one; depending on the application, one might prefer to use scale=TRUE . We can now plot the support vector classifier obtained: > plot(svmfit , dat) function are the output Note that the two arguments to the plot.svm() of the call to , as well as the data used in the call to svm() .The svm() region of feature space that will be assigned to the 1 class is shown in − light blue, and the region that will be assigned to the +1 class is shown in purple. The decision boundary between the two classes is linear (because we ), though due to the way in which the kernel="linear" used the argument plotting function is implemented in this library the decision boundary looks somewhat jagged in the plot. We see that in this case only one observation is misclassified. (Note that here the second feature is plotted on the x-axis and the first feature is plotted on the y-axis, in contrast to the behavior of plot() function in R .) The support vectors are plotted as crosses the usual and the remaining observations are plo tted as circles; we see here that there are seven support vectors. We can deter mine their identities as follows: > svmfit$index [1] 1 2 5 7 14 16 17 We can obtain some basic information about the support vector classifier fit using the command: summary() > summary(svmfit) Call: ∼ ., data = dat, kernel = "linear", cost = 10, svm(formula = y scale = FALSE) Parameters: SVM-Type: C- classification SVM-Kernel: linear cost: 10 gamma: 0.5 Number of Support Vectors: 7 (43) Number of Classes: 2 Levels: -1 1 This tells us, for instance, that a linear kernel was used with cost=10 ,and that there were seven support vector s, four in one class and three in the other. What if we instead used a smaller value of the cost parameter? > svmfit=svm(y ∼ ., data=dat, kernel="linear", cost =0.1, scale=FALSE) > plot(svmfit , dat) > svmfit$index [1]1234579101213141516171820

376 9.6 Lab: Support Vector Machines 361 Now that a smaller value of the cost parameter is being used, we obtain a larger number of support vectors, beca use the margin is now wider. Unfor- tunately, the function does not explicitly output the coefficients of svm() the linear decision boundary obtained when the support vector classifier is fit, nor does it output the width of the margin. library includes a built-in function, tune() e1071 , to perform cross- The tune() validation. By default, tune() performs ten-fold cross-validation on a set of models of interest. In order to use this function, we pass in relevant information about the set of models that are under consideration. The following command indicates that we want to compare SVMs with a linear cost parameter. kernel, using a range of values of the > set.seed(1) > tune.out=tune(svm,y ∼ .,data=dat,kernel="linear", ranges=list(cost=c (0.001, 0.01, 0.1, 1,5,10,100))) We can easily access the cross-validat ion errors for each of these models using the summary() command: > summary(tune.out) Parameter tuning of ’svm’: - sampling method: 10-fold cross validation - best parameters : cost 0.1 - best performance: 0.1 - Detailed performance results: cost error dispersion 1 1e-03 0.70 0.422 2 1e-02 0.70 0.422 3 1e-01 0.10 0.211 4 1e+00 0.15 0.242 5 5e+00 0.15 0.242 6 1e+01 0.15 0.242 7 1e+02 0.15 0.242 We see that cost=0.1 results in the lowest cross-validation error rate. The tune() function stores the best model obtained, which can be accessed as follows: > bestmod=tune.out$best.model > summary(bestmod) The predict() function can be used to predict the class label on a set of test observations, at any given value of the cost parameter. We begin by generating a test data set. > xtest=matrix(rnorm(20*2), ncol=2) > ytest=sample(c(-1,1), 20, rep=TRUE) > xtest[ytest==1,]=xtest[ytest==1,] + 1 > testdat=data.frame(x= xtest, y=as. factor(ytest)) Now we predict the class labels of thes e test observations. Here we use the best model obtained through cross-validation in order to make predictions.

377 362 9. Support Vector Machines > ypred=predict(bestmod ,testdat) > table(predict=ypred, truth=testdat$y) truth predict -1 1 -1 11 1 108 Thus, with this value of cost , 19 of the test observations are correctly cost=0.01 ? classified. What if we had instead used ∼ ., data=dat, kernel="linear", cost =.01, > svmfit=svm(y scale=FALSE) > ypred=predict(svmfit ,testdat) > table(predict=ypred, truth=testdat$y) truth predict -1 1 -1 11 2 107 In this case one additional observation is misclassified. Now consider a situation in which the two classes are linearly separable. svm() function. We Then we can find a separating hyperplane using the first further separate the two classes in our simulated data so that they are linearly separable: > x[y==1,]=x[y==1,]+0.5 > plot(x, col=(y+5)/2, pch=19) Now the observations are just barely linearly separable. We fit the support vector classifier and plot the resulting hyperplane, using a very large value so that no observations are misclassified. cost of > dat=data.frame(x=x,y=as.factor(y)) ∼ > svmfit=svm(y ., data=dat, kernel="linear", cost=1e5) > summary(svmfit) Call: svm(formula = y ∼ ., data = dat, kernel = "linear", cost = 1e +05) Parameters: classification SVM-Type: C- SVM-Kernel: linear cost: 1e+05 gamma: 0.5 Number of Support Vectors: 3 (12) Number of Classes: 2 Levels: -1 1 > plot(svmfit , dat) No training errors were made and only three support vectors were used. However, we can see from the figure that the margin is very narrow (because the observations that are not support vectors, indicated as circles, are very

378 9.6 Lab: Support Vector Machines 363 close to the decision boundary). It seems likely that this model will perform poorly on test data. We now try a smaller value of cost : ., data=dat, kernel="linear", cost=1) ∼ > svmfit=svm(y > summary(svmfit) > plot(svmfit ,dat) , we misclassify a training observation, but we also obtain Using cost=1 a much wider margin and make use of seven support vectors. It seems likely that this model will perform better on test data than the model with cost=1e5 . 9.6.2 Support Vector Machine svm() In order to fit an SVM using a non-linear kernel, we once again use the kernel function. However, now we use a different value of the parameter . kernel="polynomial" ,and To fit an SVM with a polynomial kernel we use to fit an SVM with a radial kernel we use kernel="radial" .Intheformer degree argument to specify a degree for the polynomial case we also use the kernel (this is d in (9.22)), and in the latter case we use gamma to specify a value of γ for the radial basis kernel (9.24). We first generate some data with a non-linear class boundary, as follows: > set.seed(1) > x=matrix(rnorm(200*2), ncol=2) > x[1:100,]=x[1:100,]+2 > x[101:150,]=x[101:150,]-2 > y=c(rep(1,150),rep(2,50)) > dat=data.frame(x=x,y=as.factor(y)) Plotting the data makes it clear that the class boundary is indeed non- linear: > plot(x, col=y) The data is randomly split into training and testing groups. We then fit svm() function with a radial kernel and γ =1: the training data using the > train=sample(200,100) > svmfit=svm(y ∼ ., data=dat[train,], kernel="radial", gamma=1, cost=1) > plot(svmfit , dat[train ,]) The plot shows that the resulting SVM has a decidedly non-linear boundary. The summary() functioncanbeusedtoobtainsome information about the SVM fit: > summary(svmfit) Call: svm(formula = y ∼ ., data = dat, kernel = "radial", gamma = 1, cost = 1) Parameters: SVM-Type: C- classification

379 364 9. Support Vector Machines SVM-Kernel: radial cost: 1 gamma: 1 Number of Support Vectors: 37 (1720) Number of Classes: 2 Levels: 12 We can see from the figure that there are a fair number of training errors , we can reduce the number cost in this SVM fit. If we increase the value of of training errors. However, this comes at the price of a more irregular decision boundary that seems to be at risk of overfitting the data. ∼ ., data=dat[train,], kernel="radial",gamma=1, > svmfit=svm(y cost=1e5) > plot(svmfit ,dat[train ,]) tune() We can perform cross-validation using to select the best choice of γ and for an SVM with a radial kernel: cost > set.seed(1) > tune.out=tune(svm, y ∼ ., data=dat[train ,], kernel="radial", ranges=list(cost=c(0.1,1,10,100,1000), gamma=c(0.5,1,2,3,4))) > summary(tune.out) Parameter tuning of ’svm’: - sampling method: 10-fold cross validation - best parameters : cost gamma 12 - best performance: 0.12 - Detailed performance results: cost gamma error dispersion 1 1e-01 0.5 0.27 0.1160 2 1e+00 0.5 0.13 0.0823 3 1e+01 0.5 0.15 0.0707 4 1e+02 0.5 0.17 0.0823 5 1e+03 0.5 0.21 0.0994 6 1e-01 1.0 0.25 0.1354 7 1e+00 1.0 0.13 0.0823 ... Therefore, the best choic e of parameters involves cost=1 and gamma=2 .We predict() can view the test set predictions for this model by applying the function to the data. Notice that to do this we subset the dataframe dat using -train as an index set. > table(true=dat[-train ,"y"], pred=predict(tune.out$best. model, newx=dat[-train ,])) 39 % of test observations are misclassified by this SVM.

380 9.6 Lab: Support Vector Machines 365 9.6.3 ROC Curves ROCR package can be used to produce ROC curves such as those in The Figures 9.10 and 9.11. We first write a short function to plot an ROC curve pred ,and given a vector containing a numerical score for each observation, a vector containing the class label for each observation, truth . > library(ROCR) > rocplot=function(pred, truth, ...){ + predob = prediction (pred, truth) + perf = performance(predob , "tpr", "fpr") + plot(perf,...)} SVMs and support vector classifiers output class labels for each observa- for each observation, fitted values tion. However, it is also possible to obtain which are the numerical scores used to obtain the class labels. For instance, in the case of a support vector classifier, the fitted value for an observation T ˆ ˆ ˆ ˆ . takes the form X β + ,X β X + ,...,X β X + ... + ) β X X =( 2 1 p 1 0 2 1 2 p p For an SVM with a non-linear kernel, the equation that yields the fitted value is given in (9.23). In essence, the sign of the fitted value determines on which side of the decision boundary the observation lies. Therefore, the relationship between the fitted value and the class prediction for a given ue exceeds zero then the observation observation is simple: if the fitted val is assigned to one class, and if it is less than zero than it is assigned to the other. In order to obtain the fitted values for a given SVM model fit, we decision.values=TRUE when fitting svm() . Then the predict() function use will output the fitted values. > svmfit.opt=svm(y ∼ ., data=dat[train,], kernel="radial", gamma=2, cost=1,decision.values=T) train ,], decision. > fitted=attributes(predict(svmfit.opt,dat[ values=TRUE))$decision.values Now we can produce the ROC plot. > par(mfrow=c(1,2)) > rocplot(fi tted ,dat[train ,"y"], main="Training Data") γ we can SVM appears to be producing accurate predictions. By increasing produce a more flexible fit and generate further improvements in accuracy. ∼ ., data=dat[train ,], kernel="radial", > svmfit.flex=svm(y gamma=50, cost=1, decision.values=T) > fitted=attributes(predict(svmfit.flex,dat[ train ,], decision. values=T))$decision.values > rocplot(fi tted ,dat[train ,"y"],add=T,col="red") However, these ROC curves are all on the training data. We are really more interested in the level of predic tion accuracy on the test data. When we compute the ROC curves on the test data, the model with γ = 2 appears to provide the most accurate results.

381 366 9. Support Vector Machines train ,], decision. > fitted=attributes(predict(svmfit.opt,dat[- values=T))$decision.values tted ,dat[-train ,"y"], main="Test Data") > rocplot(fi > fitted=attributes(predict(svmfit.flex,dat[- train ,], decision. values=T))$decision.values tted ,dat[-train ,"y"],add=T,col="red") > rocplot(fi 9.6.4 SVM with Multiple Classes If the response is a factor containing more than two levels, then the svm() function will perform multi-class classification using the one-versus-one ap- proach. We explore that setting here by generating a third class of obser- vations. > set.seed(1) > x=rbind(x, matrix(rnorm(50*2), ncol=2)) > y=c(y, rep(0,50)) > x[y==0,2]=x[y==0 ,2]+2 > dat=data.frame(x=x, y=as.factor(y)) > par(mfrow=c(1,1)) > plot(x,col=(y+1)) We now fit an SVM to the data: > svmfit=svm(y ∼ ., data=dat, kernel="radial", cost=10, gamma=1) > plot(svmfit , dat) The e1071 library can also be used to perform support vector regression, svm() is numerical rather than a if the response vector that is passed in to factor. 9.6.5 Application to Gene Expression Data Khan data set, which consists of a number of tissue We now examine the samples corresponding to four distinct types of small round blue cell tu- mors. For each tissue sample, gene expr ession measurements are available. xtrain and ytrain , and testing data, The data set consists of training data, xtest and ytest . We examine the dimension of the data: > library(ISLR) > names(Khan) [1] "xtrain" "xtest" "ytrain" "ytest" > dim(Khan$xtrain ) [1] 63 2308 > dim(Khan$xtest) [1] 20 2308 > length(Khan$ytrain ) [1] 63 > length(Khan$ytest) [1] 20

382 9.6 Lab: Support Vector Machines 367 , 308 genes. This data set consists of expression measurements for 2 The training and test sets consist of 63 and 20 observations respectively. > table(Khan$ytrain) 1234 8231220 > table(Khan$ytest) 1234 3665 We will use a support vector approach to predict cancer subtype using gene expression measurements. In this data set, there are a very large number of features relative to the number of observations. This suggests that we should use a linear kernel, because the additional flexibility that will result from using a polynomial or radial kernel is unnecessary. > dat=data.frame(x=Khan$xtrain , y=as.factor(Khan$ytrain )) ∼ ., data=dat, kernel="linear",cost=10) > out=svm(y > summary(out) Call: ∼ ., data = dat, kernel = "linear", svm(formula = y cost = 10) Parameters: SVM-Type: C- classification SVM-Kernel: linear cost: 10 gamma: 0.000433 Number of Support Vectors: 58 (2020117) Number of Classes: 4 Levels: 1234 > table(out$fitted , dat$y) 1234 18000 202300 300120 400020 We see that there are no training errors. In fact, this is not surprising, because the large number of variables relative to the number of observations implies that it is easy to find hyperplanes that fully separate the classes. We are most interested not in the support vector classifier’s performance on the training observations, but rather its performance on the test observations. > dat.te=data.frame(x=Khan$xtest , y=as.factor(Khan$ytest)) > pred.te=predict(out, newdata=dat.te) > table(pred.te, dat.te$y) pred.te 1 2 3 4 13000 20620 30040 40005

We see that using cost=10 yields two test set errors on this data.

9.7 Exercises

Conceptual

1. This problem involves hyperplanes in two dimensions.

(a) Sketch the hyperplane $1+3X_1-X_2=0$. Indicate the set of points for which $1+3X_1-X_2>0$, as well as the set of points for which $1+3X_1-X_2<0$.

(b) On the same plot, sketch the hyperplane $-2+X_1+2X_2=0$. Indicate the set of points for which $-2+X_1+2X_2>0$, as well as the set of points for which $-2+X_1+2X_2<0$.

2. We have seen that in $p=2$ dimensions, a linear decision boundary takes the form $\beta_0+\beta_1X_1+\beta_2X_2=0$. We now investigate a non-linear decision boundary.

(a) Sketch the curve
\[
(1+X_1)^2+(2-X_2)^2=4.
\]

(b) On your sketch, indicate the set of points for which
\[
(1+X_1)^2+(2-X_2)^2>4,
\]
as well as the set of points for which
\[
(1+X_1)^2+(2-X_2)^2\le 4.
\]

(c) Suppose that a classifier assigns an observation to the blue class if
\[
(1+X_1)^2+(2-X_2)^2>4,
\]
and to the red class otherwise. To what class is the observation $(0,0)$ classified? $(-1,1)$? $(2,2)$? $(3,8)$?

(d) Argue that while the decision boundary in (c) is not linear in terms of $X_1$ and $X_2$, it is linear in terms of $X_1$, $X_1^2$, $X_2$, and $X_2^2$.

3. Here we explore the maximal margin classifier on a toy data set.

(a) We are given $n=7$ observations in $p=2$ dimensions. For each observation, there is an associated class label.

Obs.  $X_1$  $X_2$  $Y$
1     3      4      Red
2     2      2      Red
3     4      4      Red
4     1      4      Red
5     2      1      Blue
6     4      3      Blue
7     4      1      Blue

Sketch the observations.

(b) Sketch the optimal separating hyperplane, and provide the equation for this hyperplane (of the form (9.1)).

(c) Describe the classification rule for the maximal margin classifier. It should be something along the lines of "Classify to Red if $\beta_0+\beta_1X_1+\beta_2X_2>0$, and classify to Blue otherwise." Provide the values for $\beta_0$, $\beta_1$, and $\beta_2$.

(d) On your sketch, indicate the margin for the maximal margin hyperplane.

(e) Indicate the support vectors for the maximal margin classifier.

(f) Argue that a slight movement of the seventh observation would not affect the maximal margin hyperplane.

(g) Sketch a hyperplane that is not the optimal separating hyperplane, and provide the equation for this hyperplane.

(h) Draw an additional observation on the plot so that the two classes are no longer separable by a hyperplane.

Applied

4. Generate a simulated two-class data set with 100 observations and two features in which there is a visible but non-linear separation between the two classes. Show that in this setting, a support vector machine with a polynomial kernel (with degree greater than 1) or a radial kernel will outperform a support vector classifier on the training data. Which technique performs best on the test data? Make plots and report training and test error rates in order to back up your assertions.

5. We have seen that we can fit an SVM with a non-linear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.

385 370 9. Support Vector Machines n (a) Generate a data set with = 2, such that the obser- = 500 and p vations belong to two classes with a quadratic decision boundary between them. For instance, you can do this as follows: > x1=runif(500)-0.5 > x2=runif(500)-0.5 > y=1*(x1^2-x2^2 > 0) (b) Plot the observations, colored according to their class labels. Your plot should display X on the x -axis, and X - on the y 2 1 axis. and X as X (c) Fit a logistic regression model to the data, using 2 1 predictors. training data (d) Apply this model to the in order to obtain a pre- dicted class label for each train ing observation. Plot the ob- servations, colored according to the predicted class labels. The decision boundary should be linear. (e) Now fit a logistic regression model to the data using non-linear 2 X ,log( and X X as predictors (e.g. X , × ), X X functions of 1 1 2 2 2 1 and so forth). training data in order to obtain a pre- (f) Apply this model to the ing observation. Plot the ob- dicted class label for each train predicted class labels. The servations, colored according to the decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear. X (g) Fit a support vector classifier to the data with X and as 2 1 predictors. Obtain a class prediction for each training observa- tion. Plot the observations, colored according to the predicted . class labels (h) Fit a SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, predicted class labels . colored according to the (i) Comment on your results. 6. At the end of Section 9.6.1, it is claimed that in the case of data that is just barely linearly separable, a support vector classifier with a that misclassifies a couple of training observations cost small value of may perform better on test data than one with a huge value of cost that does not misclassify any training observations. You will now investigate this claim. (a) Generate two-class data with p = 2 in such a way that the classes are just barely linearly separable.

386 9.7 Exercises 371 (b) Compute the cross-validation error rates for support vector classifiers with a range of cost values. How many training er- rors are misclassified for each value of considered, and how cost does this relate to the cross-validation errors obtained? (c) Generate an appropriate tes t data set, and compute the test considered. cost errors corresponding to each of the values of leads to the fewest test errors, and how cost Which value of does this compare to the values of cost that yield the fewest training errors and the fewest cross-validation errors? (d) Discuss your results. 7. In this problem, you will use support vector approaches in order to predict whether a given car gets high or low gas mileage based on the data set. Auto (a) Create a binary variable that takes on a 1 for cars with gas mileage above the median, and a 0 for cars with gas mileage below the median. (b) Fit a support vector classifier to the data with various values cost , in order to predict whether a car gets high or low gas of mileage. Report the cross-validation errors associated with dif- ferent values of this parameter. Comment on your results. (c) Now repeat (b), this time using SVMs with radial and polyno- gamma and degree and mial basis kernels, with different values of cost . Comment on your results. (d) Make some plots to back up your assertions in (b) and (c). Hint: In the lab, we used the function for objects plot() svm plot() =2 .When only in cases with 2 ,youcanusethe p p> function to create plots displaying pairs of variables at a time. Essentially, instead of typing > plot(svmfit , dat) where svmfit contains your fitted model and dat is a data frame containing your data, you can type ∼ x4) > plot(svmfit , dat, x1 in order to plot just the first and fourth variables. However, you must replace x1 and x4 with the correct variable names. To find ?plot.svm . out more, type 8. This problem involves the OJ data set which is part of the ISLR package.

387 372 9. Support Vector Machines (a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations. (b) Fit a support vector classifier to the training data using , with Purchase cost=0.01 as the response and the other variables as predictors. Use the summary() function to produce summary statistics, and describe the results obtained. (c) What are the training and test error rates? . Consider val- tune() function to select an optimal cost (d) Use the 01 to 10. ues in the range 0 . (e) Compute the training and test error rates using this new value cost . for (f) Repeat parts (b) through (e) using a support vector machine gamma . with a radial kernel. Use the default value for (g) Repeat parts (b) through (e) using a support vector machine with a polynomial kernel. Set degree=2 . (h) Overall, which approach seems to give the best results on this data?

10

Unsupervised Learning

Most of this book concerns supervised learning methods such as regression and classification. In the supervised learning setting, we typically have access to a set of $p$ features $X_1,X_2,\ldots,X_p$, measured on $n$ observations, and a response $Y$ also measured on those same $n$ observations. The goal is then to predict $Y$ using $X_1,X_2,\ldots,X_p$.

This chapter will instead focus on unsupervised learning, a set of statistical tools intended for the setting in which we have only a set of features $X_1,X_2,\ldots,X_p$ measured on $n$ observations. We are not interested in prediction, because we do not have an associated response variable $Y$. Rather, the goal is to discover interesting things about the measurements on $X_1,X_2,\ldots,X_p$. Is there an informative way to visualize the data? Can we discover subgroups among the variables or among the observations? Unsupervised learning refers to a diverse set of techniques for answering questions such as these. In this chapter, we will focus on two particular types of unsupervised learning: principal components analysis, a tool used for data visualization or data pre-processing before supervised techniques are applied, and clustering, a broad class of methods for discovering unknown subgroups in data.

10.1 The Challenge of Unsupervised Learning

Supervised learning is a well-understood area. In fact, if you have read the preceding chapters in this book, then you should by now have a good

grasp of supervised learning. For instance, if you are asked to predict a binary outcome from a data set, you have a very well developed set of tools at your disposal (such as logistic regression, linear discriminant analysis, classification trees, support vector machines, and more) as well as a clear understanding of how to assess the quality of the results obtained (using cross-validation, validation on an independent test set, and so forth).

In contrast, unsupervised learning is often much more challenging. The exercise tends to be more subjective, and there is no simple goal for the analysis, such as prediction of a response. Unsupervised learning is often performed as part of an exploratory data analysis. Furthermore, it can be hard to assess the results obtained from unsupervised learning methods, since there is no universally accepted mechanism for performing cross-validation or validating results on an independent data set. The reason for this difference is simple. If we fit a predictive model using a supervised learning technique, then it is possible to check our work by seeing how well our model predicts the response $Y$ on observations not used in fitting the model. However, in unsupervised learning, there is no way to check our work because we don't know the true answer—the problem is unsupervised.

Techniques for unsupervised learning are of growing importance in a number of fields. A cancer researcher might assay gene expression levels in 100 patients with breast cancer. He or she might then look for subgroups among the breast cancer samples, or among the genes, in order to obtain a better understanding of the disease. An online shopping site might try to identify groups of shoppers with similar browsing and purchase histories, as well as items that are of particular interest to the shoppers within each group. Then an individual shopper can be preferentially shown the items in which he or she is particularly likely to be interested, based on the purchase histories of similar shoppers. A search engine might choose what search results to display to a particular individual based on the click histories of other individuals with similar search patterns. These statistical learning tasks, and many more, can be performed via unsupervised learning techniques.

10.2 Principal Components Analysis

Principal components are discussed in Section 6.3.1 in the context of principal components regression. When faced with a large set of correlated variables, principal components allow us to summarize this set with a smaller number of representative variables that collectively explain most of the variability in the original set. The principal component directions are presented in Section 6.3.1 as directions in feature space along which the original data are highly variable. These directions also define lines and subspaces that are as close as possible to the data cloud. To perform

principal components regression, we simply use principal components as predictors in a regression model in place of the original larger set of variables.

Principal component analysis (PCA) refers to the process by which principal components are computed, and the subsequent use of these components in understanding the data. PCA is an unsupervised approach, since it involves only a set of features $X_1,X_2,\ldots,X_p$, and no associated response $Y$. Apart from producing derived variables for use in supervised learning problems, PCA also serves as a tool for data visualization (visualization of the observations or visualization of the variables). We now discuss PCA in greater detail, focusing on the use of PCA as a tool for unsupervised data exploration, in keeping with the topic of this chapter.

10.2.1 What Are Principal Components?

Suppose that we wish to visualize $n$ observations with measurements on a set of $p$ features, $X_1,X_2,\ldots,X_p$, as part of an exploratory data analysis. We could do this by examining two-dimensional scatterplots of the data, each of which contains the $n$ observations' measurements on two of the features. However, there are $\binom{p}{2}=p(p-1)/2$ such scatterplots; for example, with $p=10$ there are 45 plots! If $p$ is large, then it will certainly not be possible to look at all of them; moreover, most likely none of them will be informative since they each contain just a small fraction of the total information present in the data set. Clearly, a better method is required to visualize the $n$ observations when $p$ is large. In particular, we would like to find a low-dimensional representation of the data that captures as much of the information as possible. For instance, if we can obtain a two-dimensional representation of the data that captures most of the information, then we can plot the observations in this low-dimensional space.

PCA provides a tool to do just this. It finds a low-dimensional representation of a data set that contains as much as possible of the variation. The idea is that each of the $n$ observations lives in $p$-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible, where the concept of interesting is measured by the amount that the observations vary along each dimension. Each of the dimensions found by PCA is a linear combination of the $p$ features. We now explain the manner in which these dimensions, or principal components, are found.

The first principal component of a set of features $X_1,X_2,\ldots,X_p$ is the normalized linear combination of the features
\[
Z_1=\phi_{11}X_1+\phi_{21}X_2+\ldots+\phi_{p1}X_p \qquad (10.1)
\]
that has the largest variance. By normalized, we mean that $\sum_{j=1}^p \phi_{j1}^2=1$. We refer to the elements $\phi_{11},\ldots,\phi_{p1}$ as the loadings of the first principal

component; together, the loadings make up the principal component loading vector, $\phi_1=(\phi_{11}\ \phi_{21}\ \ldots\ \phi_{p1})^T$. We constrain the loadings so that their sum of squares is equal to one, since otherwise setting these elements to be arbitrarily large in absolute value could result in an arbitrarily large variance.

Given a $n\times p$ data set $\mathbf{X}$, how do we compute the first principal component? Since we are only interested in variance, we assume that each of the variables in $\mathbf{X}$ has been centered to have mean zero (that is, the column means of $\mathbf{X}$ are zero). We then look for the linear combination of the sample feature values of the form
\[
z_{i1}=\phi_{11}x_{i1}+\phi_{21}x_{i2}+\ldots+\phi_{p1}x_{ip} \qquad (10.2)
\]
that has largest sample variance, subject to the constraint that $\sum_{j=1}^p \phi_{j1}^2=1$. In other words, the first principal component loading vector solves the optimization problem
\[
\underset{\phi_{11},\ldots,\phi_{p1}}{\text{maximize}}\;\Big\{\frac{1}{n}\sum_{i=1}^n\Big(\sum_{j=1}^p \phi_{j1}x_{ij}\Big)^2\Big\}\quad\text{subject to}\quad\sum_{j=1}^p \phi_{j1}^2=1. \qquad (10.3)
\]
From (10.2) we can write the objective in (10.3) as $\frac{1}{n}\sum_{i=1}^n z_{i1}^2$. Since $\frac{1}{n}\sum_{i=1}^n x_{ij}=0$, the average of the $z_{11},\ldots,z_{n1}$ will be zero as well. Hence the objective that we are maximizing in (10.3) is just the sample variance of the $n$ values of $z_{i1}$. We refer to $z_{11},\ldots,z_{n1}$ as the scores of the first principal component. Problem (10.3) can be solved via an eigen decomposition, a standard technique in linear algebra, but details are outside of the scope of this book.

There is a nice geometric interpretation for the first principal component. The loading vector $\phi_1$ with elements $\phi_{11},\phi_{21},\ldots,\phi_{p1}$ defines a direction in feature space along which the data vary the most. If we project the $n$ data points $x_1,\ldots,x_n$ onto this direction, the projected values are the principal component scores $z_{11},\ldots,z_{n1}$ themselves. For instance, Figure 6.14 on page 230 displays the first principal component loading vector (green solid line) on an advertising data set. In these data, there are only two features, and so the observations as well as the first principal component loading vector can be easily displayed. As can be seen from (6.19), in that data set $\phi_{11}=0.839$ and $\phi_{21}=0.544$.
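The loadings and scores defined in (10.1)–(10.3) can be computed in R with the prcomp() function. The sketch below uses a small simulated data matrix (a hypothetical stand-in, not data from the text) and checks the score formula (10.2) directly.

> set.seed(2)
> X = matrix(rnorm(100*3), ncol=3)
> pr.out = prcomp(X)                        # prcomp() centers each column by default
> phi1 = pr.out$rotation[,1]                # first loading vector; sum(phi1^2) equals 1
> z1 = scale(X, center=TRUE, scale=FALSE) %*% phi1
> max(abs(z1 - pr.out$x[,1]))               # essentially zero: the scores match (10.2)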

            PC1         PC2
Murder      0.5358995  -0.4181809
Assault     0.5831836  -0.1879856
UrbanPop    0.2781909   0.8728062
Rape        0.5434321   0.1673186

TABLE 10.1. The principal component loading vectors, $\phi_1$ and $\phi_2$, for the USArrests data. These are also displayed in Figure 10.1.

After the first principal component $Z_1$ of the features has been determined, we can find the second principal component $Z_2$. The second principal component is the linear combination of $X_1,\ldots,X_p$ that has maximal variance out of all linear combinations that are uncorrelated with $Z_1$. The second principal component scores $z_{12},z_{22},\ldots,z_{n2}$ take the form
\[
z_{i2}=\phi_{12}x_{i1}+\phi_{22}x_{i2}+\ldots+\phi_{p2}x_{ip}, \qquad (10.4)
\]
where $\phi_2$ is the second principal component loading vector, with elements $\phi_{12},\phi_{22},\ldots,\phi_{p2}$. It turns out that constraining $Z_2$ to be uncorrelated with $Z_1$ is equivalent to constraining the direction $\phi_2$ to be orthogonal (perpendicular) to the direction $\phi_1$. In the example in Figure 6.14, the observations lie in two-dimensional space (since $p=2$), and so once we have found $\phi_1$, there is only one possibility for $\phi_2$, which is shown as a blue dashed line. (From Section 6.3.1, we know that $\phi_{12}=0.544$ and $\phi_{22}=-0.839$.) But in a larger data set with $p>2$ variables, there are multiple distinct principal components, and they are defined in a similar manner. To find $\phi_2$, we solve a problem similar to (10.3) with $\phi_2$ replacing $\phi_1$, and with the additional constraint that $\phi_2$ is orthogonal to $\phi_1$.¹

Once we have computed the principal components, we can plot them against each other in order to produce low-dimensional views of the data. For instance, we can plot the score vector $Z_1$ against $Z_2$, $Z_1$ against $Z_3$, $Z_2$ against $Z_3$, and so forth. Geometrically, this amounts to projecting the original data down onto the subspace spanned by $\phi_1$, $\phi_2$, and $\phi_3$, and plotting the projected points.

We illustrate the use of PCA on the USArrests data set. For each of the 50 states in the United States, the data set contains the number of arrests per 100,000 residents for each of three crimes: Assault, Murder, and Rape. We also record UrbanPop (the percent of the population in each state living in urban areas). The principal component score vectors have length $n=50$, and the principal component loading vectors have length $p=4$. PCA was performed after standardizing each variable to have mean zero and standard deviation one. Figure 10.1 plots the first two principal components of these data. The figure represents both the principal component scores and the loading vectors in a single biplot display. The loadings are also given in Table 10.1.

¹ On a technical note, the principal component directions $\phi_1,\phi_2,\phi_3,\ldots$ are the ordered sequence of eigenvectors of the matrix $\mathbf{X}^T\mathbf{X}$, and the variances of the components are the eigenvalues. There are at most $\min(n-1,p)$ principal components.

FIGURE 10.1. [Biplot of the first two principal components of the USArrests data: state names are plotted at their score coordinates, with arrows for the four variable loadings; the bottom and left axes show the scores, the top and right axes the loadings.] The first two principal components for the USArrests data. The blue state names represent the scores for the first two principal components. The orange arrows indicate the first two principal component loading vectors (with axes on the top and right). For example, the loading for Rape on the first component is 0.54, and its loading on the second principal component is 0.17 (the word Rape is centered at the point (0.54, 0.17)). This figure is known as a biplot, because it displays both the principal component scores and the principal component loadings.

In Figure 10.1, we see that the first loading vector places approximately equal weight on Assault, Murder, and Rape, with much less weight on UrbanPop. Hence this component roughly corresponds to a measure of overall rates of serious crimes. The second loading vector places most of its weight on UrbanPop and much less weight on the other three features. Hence, this component roughly corresponds to the level of urbanization of the state. Overall, we see that the crime-related variables (Murder, Assault, and Rape) are located close to each other, and that the UrbanPop variable is far from the other three. This indicates that the crime-related variables are correlated with each other—states with high murder rates tend to have high assault and rape rates—and that the UrbanPop variable is less correlated with the other three.
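A plot in the spirit of Figure 10.1 can be produced in a couple of lines; the USArrests data frame ships with base R, so no extra package is needed. Note that the signs of the loadings returned by prcomp() are arbitrary and may be flipped relative to the printed figure.

> pr.out = prcomp(USArrests, scale=TRUE)
> pr.out$rotation             # the loadings, as in Table 10.1 (possibly with flipped signs)
> biplot(pr.out, scale=0)     # scores and loadings in a single biplot display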

We can examine differences between the states via the two principal component score vectors shown in Figure 10.1. Our discussion of the loading vectors suggests that states with large positive scores on the first component, such as California, Nevada and Florida, have high crime rates, while states like North Dakota, with negative scores on the first component, have low crime rates. California also has a high score on the second component, indicating a high level of urbanization, while the opposite is true for states like Mississippi. States close to zero on both components, such as Indiana, have approximately average levels of both crime and urbanization.

10.2.2 Another Interpretation of Principal Components

The first two principal component loading vectors in a simulated three-dimensional data set are shown in the left-hand panel of Figure 10.2; these two loading vectors span a plane along which the observations have the highest variance.

In the previous section, we describe the principal component loading vectors as the directions in feature space along which the data vary the most, and the principal component scores as projections along these directions. However, an alternative interpretation for principal components can also be useful: principal components provide low-dimensional linear surfaces that are closest to the observations. We expand upon that interpretation here.

The first principal component loading vector has a very special property: it is the line in $p$-dimensional space that is closest to the $n$ observations (using average squared Euclidean distance as a measure of closeness). This interpretation can be seen in the left-hand panel of Figure 6.15; the dashed lines indicate the distance between each observation and the first principal component loading vector. The appeal of this interpretation is clear: we seek a single dimension of the data that lies as close as possible to all of the data points, since such a line will likely provide a good summary of the data.

The notion of principal components as the dimensions that are closest to the $n$ observations extends beyond just the first principal component. For instance, the first two principal components of a data set span the plane that is closest to the $n$ observations, in terms of average squared Euclidean distance. An example is shown in the left-hand panel of Figure 10.2. The first three principal components of a data set span the three-dimensional hyperplane that is closest to the $n$ observations, and so forth.

Using this interpretation, together the first $M$ principal component score vectors and the first $M$ principal component loading vectors provide the best $M$-dimensional approximation (in terms of Euclidean distance) to the $i$th observation $x_{ij}$. This representation can be written

FIGURE 10.2. Ninety observations simulated in three dimensions. Left: the first two principal component directions span the plane that best fits the data. It minimizes the sum of squared distances from each point to the plane. Right: the first two principal component score vectors give the coordinates of the projection of the 90 observations onto the plane. The variance in the plane is maximized.

This representation can be written

x_{ij} \approx \sum_{m=1}^{M} z_{im} \phi_{jm}    (10.5)

(assuming the original data matrix X is column-centered). In other words, together the M principal component score vectors and M principal component loading vectors can give a good approximation to the data when M is sufficiently large. When M = min(n - 1, p), then the representation is exact: x_{ij} = \sum_{m=1}^{M} z_{im} \phi_{jm}.
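A minimal sketch of (10.5) in R, assuming only the USArrests data and the prcomp() function used in the lab at the end of this chapter: it builds the rank-M approximation from the scores and loadings, and checks that the approximation becomes exact once M = min(n - 1, p).

X  <- scale(USArrests, center = TRUE, scale = FALSE)  # column-centered data matrix
pr <- prcomp(X)                                       # scores in pr$x, loadings in pr$rotation
M  <- 2
approx2 <- pr$x[, 1:M] %*% t(pr$rotation[, 1:M])      # sum over m of z_im * phi_jm for each (i, j)
max(abs(X - approx2))                                 # approximation error with M = 2
max(abs(X - pr$x %*% t(pr$rotation)))                 # essentially zero: exact with M = min(n - 1, p) = 4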

10.2.3 More on PCA

Scaling the Variables

We have already mentioned that before PCA is performed, the variables should be centered to have mean zero. Furthermore, the results obtained when we perform PCA will also depend on whether the variables have been individually scaled (each multiplied by a different constant). This is in contrast to some other supervised and unsupervised learning techniques, such as linear regression, in which scaling the variables has no effect. (In linear regression, multiplying a variable by a factor of c will simply lead to multiplication of the corresponding coefficient estimate by a factor of 1/c, and thus will have no substantive effect on the model obtained.)

For instance, Figure 10.1 was obtained after scaling each of the variables to have standard deviation one. This is reproduced in the left-hand plot in Figure 10.3.

FIGURE 10.3. Two principal component biplots for the USArrests data. Left: the same as Figure 10.1, with the variables scaled to have unit standard deviations. Right: principal components using unscaled data. Assault has by far the largest loading on the first principal component because it has the highest variance among the four variables. In general, scaling the variables to have standard deviation one is recommended.

Why does it matter that we scaled the variables? In these data, the variables are measured in different units; Murder, Rape, and Assault are reported as the number of occurrences per 100,000 people, and UrbanPop is the percentage of the state's population that lives in an urban area. These four variables have variance 18.97, 87.73, 6945.16, and 209.5, respectively. Consequently, if we perform PCA on the unscaled variables, then the first principal component loading vector will have a very large loading for Assault, since that variable has by far the highest variance. The right-hand plot in Figure 10.3 displays the first two principal components for the USArrests data set, without scaling the variables to have standard deviation one. As predicted, the first principal component loading vector places almost all of its weight on Assault, while the second principal component loading vector places almost all of its weight on UrbanPop. Comparing this to the left-hand plot, we see that scaling does indeed have a substantial effect on the results obtained.

However, this result is simply a consequence of the scales on which the variables were measured. For instance, if Assault were measured in units of the number of occurrences per 100 people (rather than number of occurrences per 100,000 people), then this would amount to dividing all of the elements of that variable by 1,000. Then the variance of the variable would be tiny, and so the first principal component loading vector would have a very small value for that variable. Because it is undesirable for the principal components obtained to depend on an arbitrary choice of scaling, we typically scale each variable to have standard deviation one before we perform PCA.
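The effect is easy to see numerically; the following two-line sketch (assuming the USArrests data) compares the first loading vector with and without scaling.

round(prcomp(USArrests, scale = TRUE)$rotation[, 1], 2)    # weight spread over all four variables
round(prcomp(USArrests, scale = FALSE)$rotation[, 1], 2)   # nearly all of the weight falls on Assault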

In certain settings, however, the variables may be measured in the same units. In this case, we might not wish to scale the variables to have standard deviation one before performing PCA. For instance, suppose that the variables in a given data set correspond to expression levels for p genes. Then since expression is measured in the same "units" for each gene, we might choose not to scale the genes to each have standard deviation one.

Uniqueness of the Principal Components

Each principal component loading vector is unique, up to a sign flip. This means that two different software packages will yield the same principal component loading vectors, although the signs of those loading vectors may differ. The signs may differ because each principal component loading vector specifies a direction in p-dimensional space: flipping the sign has no effect as the direction does not change. (Consider Figure 6.14—the principal component loading vector is a line that extends in either direction, and flipping its sign would have no effect.) Similarly, the score vectors are unique up to a sign flip, since the variance of Z is the same as the variance of -Z. It is worth noting that when we use (10.5) to approximate x_ij we multiply z_im by phi_jm. Hence, if the sign is flipped on both the loading and score vectors, the final product of the two quantities is unchanged.

The Proportion of Variance Explained

In Figure 10.2, we performed PCA on a three-dimensional data set (left-hand panel) and projected the data onto the first two principal component loading vectors in order to obtain a two-dimensional view of the data (i.e. the principal component score vectors; right-hand panel). We see that this two-dimensional representation of the three-dimensional data does successfully capture the major pattern in the data: the orange, green, and cyan observations that are near each other in three-dimensional space remain nearby in the two-dimensional representation. Similarly, we have seen on the USArrests data set that we can summarize the 50 observations and 4 variables using just the first two principal component score vectors and the first two principal component loading vectors.

We can now ask a natural question: how much of the information in a given data set is lost by projecting the observations onto the first few principal components? That is, how much of the variance in the data is not contained in the first few principal components? More generally, we are interested in knowing the proportion of variance explained (PVE) by each principal component. The total variance present in a data set (assuming that the variables have been centered to have mean zero) is defined as

\sum_{j=1}^{p} \mathrm{Var}(X_j) = \sum_{j=1}^{p} \frac{1}{n} \sum_{i=1}^{n} x_{ij}^2,    (10.6)

FIGURE 10.4. Left: a scree plot depicting the proportion of variance explained by each of the four principal components in the USArrests data. Right: the cumulative proportion of variance explained by the four principal components in the USArrests data.

The variance explained by the mth principal component is

\frac{1}{n} \sum_{i=1}^{n} z_{im}^2 = \frac{1}{n} \sum_{i=1}^{n} \left( \sum_{j=1}^{p} \phi_{jm} x_{ij} \right)^2.    (10.7)

Therefore, the PVE of the mth principal component is given by

\frac{\sum_{i=1}^{n} \left( \sum_{j=1}^{p} \phi_{jm} x_{ij} \right)^2}{\sum_{j=1}^{p} \sum_{i=1}^{n} x_{ij}^2}.    (10.8)

The PVE of each principal component is a positive quantity. In order to compute the cumulative PVE of the first M principal components, we can simply sum (10.8) over each of the first M PVEs. In total, there are min(n - 1, p) principal components, and their PVEs sum to one.

In the USArrests data, the first principal component explains 62.0 % of the variance in the data, and the next principal component explains 24.7 % of the variance. Together, the first two principal components explain almost 87 % of the variance in the data, and the last two principal components explain only 13 % of the variance. This means that Figure 10.1 provides a pretty accurate summary of the data using just two dimensions. The PVE of each principal component, as well as the cumulative PVE, is shown in Figure 10.4. The left-hand panel is known as a scree plot, and will be discussed next.
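As a hedged sketch, (10.8) can be computed by hand from prcomp() output (assuming the USArrests data, scaled as in Figure 10.1) and compared with the proportions obtained from the component standard deviations.

X  <- scale(USArrests)                    # centered and scaled, as in Figure 10.1
pr <- prcomp(X)
pve <- colSums(pr$x^2) / sum(X^2)         # (10.8): the 1/n factors in (10.6) and (10.7) cancel
round(pve, 3)                             # roughly 0.620, 0.247, 0.089, 0.043
round(pr$sdev^2 / sum(pr$sdev^2), 3)      # the same proportions via the component variances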

Deciding How Many Principal Components to Use

In general, an n x p data matrix X has min(n - 1, p) distinct principal components. However, we usually are not interested in all of them; rather, we would like to use just the first few principal components in order to visualize or interpret the data. In fact, we would like to use the smallest number of principal components required to get a good understanding of the data. How many principal components are needed? Unfortunately, there is no single (or simple!) answer to this question.

We typically decide on the number of principal components required to visualize the data by examining a scree plot, such as the one shown in the left-hand panel of Figure 10.4. We choose the smallest number of principal components that are required in order to explain a sizable amount of the variation in the data. This is done by eyeballing the scree plot, and looking for a point at which the proportion of variance explained by each subsequent principal component drops off. This is often referred to as an elbow in the scree plot. For instance, by inspection of Figure 10.4, one might conclude that a fair amount of variance is explained by the first two principal components, and that there is an elbow after the second component. After all, the third principal component explains less than ten percent of the variance in the data, and the fourth principal component explains less than half that and so is essentially worthless.

However, this type of visual analysis is inherently ad hoc. Unfortunately, there is no well-accepted objective way to decide how many principal components are enough. In fact, the question of how many principal components are enough is inherently ill-defined, and will depend on the specific area of application and the specific data set. In practice, we tend to look at the first few principal components in order to find interesting patterns in the data. If no interesting patterns are found in the first few principal components, then further principal components are unlikely to be of interest. Conversely, if the first few principal components are interesting, then we typically continue to look at subsequent principal components until no further interesting patterns are found. This is admittedly a subjective approach, and is reflective of the fact that PCA is generally used as a tool for exploratory data analysis.

On the other hand, if we compute principal components for use in a supervised analysis, such as the principal components regression presented in Section 6.3.1, then there is a simple and objective way to determine how many principal components to use: we can treat the number of principal component score vectors to be used in the regression as a tuning parameter to be selected via cross-validation or a related approach. The comparative simplicity of selecting the number of principal components for a supervised analysis is one manifestation of the fact that supervised analyses tend to be more clearly defined and more objectively evaluated than unsupervised analyses.
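For the supervised case, a sketch of this kind of tuning might look as follows; it borrows the pcr() function from the pls package used in the Chapter 6 lab, and the data frame df and response y below are entirely hypothetical.

library(pls)
set.seed(1)
df   <- data.frame(matrix(rnorm(100 * 20), ncol = 20))   # twenty made-up predictors
df$y <- rowSums(df[, 1:3]) + rnorm(100)                  # a made-up response
pcr.fit <- pcr(y ~ ., data = df, scale = TRUE, validation = "CV")  # ten-fold CV by default
validationplot(pcr.fit, val.type = "MSEP")   # choose the number of components minimizing CV error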

10.2.4 Other Uses for Principal Components

We saw in Section 6.3.1 that we can perform regression using the principal component score vectors as features. In fact, many statistical techniques, such as regression, classification, and clustering, can be easily adapted to use the n x M matrix whose columns are the first M << p principal component score vectors, rather than using the full n x p data matrix. This can lead to less noisy results, since it is often the case that the signal (as opposed to the noise) in a data set is concentrated in its first few principal components.

10.3 Clustering Methods

Clustering refers to a very broad set of techniques for finding subgroups, or clusters, in a data set. When we cluster the observations of a data set, we seek to partition them into distinct groups so that the observations within each group are quite similar to each other, while observations in different groups are quite different from each other. Of course, to make this concrete, we must define what it means for two or more observations to be similar or different. Indeed, this is often a domain-specific consideration that must be made based on knowledge of the data being studied.

For instance, suppose that we have a set of n observations, each with p features. The n observations could correspond to tissue samples for patients with breast cancer, and the p features could correspond to measurements collected for each tissue sample; these could be clinical measurements, such as tumor stage or grade, or they could be gene expression measurements. We may have a reason to believe that there is some heterogeneity among the n tissue samples; for instance, perhaps there are a few different unknown subtypes of breast cancer. Clustering could be used to find these subgroups. This is an unsupervised problem because we are trying to discover structure—in this case, distinct clusters—on the basis of a data set. The goal in supervised problems, on the other hand, is to try to predict some outcome vector such as survival time or response to drug treatment.

Both clustering and PCA seek to simplify the data via a small number of summaries, but their mechanisms are different:

• PCA looks to find a low-dimensional representation of the observations that explains a good fraction of the variance;

• Clustering looks to find homogeneous subgroups among the observations.

Another application of clustering arises in marketing. We may have access to a large number of measurements (e.g. median household income, occupation, distance from nearest urban area, and so forth) for a large number of people.

Our goal is to perform market segmentation by identifying subgroups of people who might be more receptive to a particular form of advertising, or more likely to purchase a particular product. The task of performing market segmentation amounts to clustering the people in the data set.

Since clustering is popular in many fields, there exist a great number of clustering methods. In this section we focus on perhaps the two best-known clustering approaches: K-means clustering and hierarchical clustering. In K-means clustering, we seek to partition the observations into a pre-specified number of clusters. On the other hand, in hierarchical clustering, we do not know in advance how many clusters we want; in fact, we end up with a tree-like visual representation of the observations, called a dendrogram, that allows us to view at once the clusterings obtained for each possible number of clusters, from 1 to n. There are advantages and disadvantages to each of these clustering approaches, which we highlight in this chapter.

In general, we can cluster observations on the basis of the features in order to identify subgroups among the observations, or we can cluster features on the basis of the observations in order to discover subgroups among the features. In what follows, for simplicity we will discuss clustering observations on the basis of the features, though the converse can be performed by simply transposing the data matrix.

10.3.1 K-Means Clustering

K-means clustering is a simple and elegant approach for partitioning a data set into K distinct, non-overlapping clusters. To perform K-means clustering, we must first specify the desired number of clusters K; then the K-means algorithm will assign each observation to exactly one of the K clusters. Figure 10.5 shows the results obtained from performing K-means clustering on a simulated example consisting of 150 observations in two dimensions, using three different values of K.

The K-means clustering procedure results from a simple and intuitive mathematical problem. We begin by defining some notation. Let C_1, ..., C_K denote sets containing the indices of the observations in each cluster. These sets satisfy two properties:

1. C_1 ∪ C_2 ∪ ... ∪ C_K = {1, ..., n}. In other words, each observation belongs to at least one of the K clusters.

2. C_k ∩ C_k' = ∅ for all k ≠ k'. In other words, the clusters are non-overlapping: no observation belongs to more than one cluster.

For instance, if the ith observation is in the kth cluster, then i ∈ C_k. The idea behind K-means clustering is that a good clustering is one for which the within-cluster variation is as small as possible.

FIGURE 10.5. A simulated data set with 150 observations in two-dimensional space. Panels show the results of applying K-means clustering with different values of K, the number of clusters. The color of each observation indicates the cluster to which it was assigned using the K-means clustering algorithm. Note that there is no ordering of the clusters, so the cluster coloring is arbitrary. These cluster labels were not used in clustering; instead, they are the outputs of the clustering procedure.

The within-cluster variation for cluster C_k is a measure W(C_k) of the amount by which the observations within a cluster differ from each other. Hence we want to solve the problem

\underset{C_1,\ldots,C_K}{\mathrm{minimize}} \left\{ \sum_{k=1}^{K} W(C_k) \right\}.    (10.9)

In words, this formula says that we want to partition the observations into K clusters such that the total within-cluster variation, summed over all K clusters, is as small as possible.

Solving (10.9) seems like a reasonable idea, but in order to make it actionable we need to define the within-cluster variation. There are many possible ways to define this concept, but by far the most common choice involves squared Euclidean distance. That is, we define

W(C_k) = \frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^{p} (x_{ij} - x_{i'j})^2,    (10.10)

where |C_k| denotes the number of observations in the kth cluster. In other words, the within-cluster variation for the kth cluster is the sum of all of the pairwise squared Euclidean distances between the observations in the kth cluster, divided by the total number of observations in the kth cluster. Combining (10.9) and (10.10) gives the optimization problem that defines K-means clustering,

\underset{C_1,\ldots,C_K}{\mathrm{minimize}} \left\{ \sum_{k=1}^{K} \frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^{p} (x_{ij} - x_{i'j})^2 \right\}.    (10.11)
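As a rough sketch (not part of the original exposition), the objective in (10.11) can be computed directly for any assignment of observations to clusters; comparing it with the within-cluster sum of squares reported by R's kmeans() function anticipates the identity (10.12) given shortly. The function name within_var and the simulated matrix x are introduced here only for illustration.

within_var <- function(x, labels) {
  sum(sapply(unique(labels), function(k) {
    xk <- x[labels == k, , drop = FALSE]
    sum(as.matrix(dist(xk))^2) / nrow(xk)   # pairwise squared distances, divided by |C_k|
  }))
}
set.seed(1)
x  <- matrix(rnorm(100 * 2), ncol = 2)
km <- kmeans(x, centers = 3, nstart = 20)
c(within_var(x, km$cluster), 2 * km$tot.withinss)   # equal, by the identity (10.12) below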

Now, we would like to find an algorithm to solve (10.11)—that is, a method to partition the observations into K clusters such that the objective of (10.11) is minimized. This is in fact a very difficult problem to solve precisely, since there are almost K^n ways to partition n observations into K clusters. This is a huge number unless K and n are tiny! Fortunately, a very simple algorithm can be shown to provide a local optimum—a pretty good solution—to the K-means optimization problem (10.11). This approach is laid out in Algorithm 10.1.

Algorithm 10.1 K-Means Clustering

1. Randomly assign a number, from 1 to K, to each of the observations. These serve as initial cluster assignments for the observations.

2. Iterate until the cluster assignments stop changing:

   (a) For each of the K clusters, compute the cluster centroid. The kth cluster centroid is the vector of the p feature means for the observations in the kth cluster.

   (b) Assign each observation to the cluster whose centroid is closest (where closest is defined using Euclidean distance).

Algorithm 10.1 is guaranteed to decrease the value of the objective (10.11) at each step. To understand why, the following identity is illuminating:

\frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^{p} (x_{ij} - x_{i'j})^2 = 2 \sum_{i \in C_k} \sum_{j=1}^{p} (x_{ij} - \bar{x}_{kj})^2,    (10.12)

where \bar{x}_{kj} = \frac{1}{|C_k|} \sum_{i \in C_k} x_{ij} is the mean for feature j in cluster C_k.

In Step 2(a) the cluster means for each feature are the constants that minimize the sum-of-squared deviations, and in Step 2(b), reallocating the observations can only improve (10.12). This means that as the algorithm is run, the clustering obtained will continually improve until the result no longer changes; the objective of (10.11) will never increase. When the result no longer changes, a local optimum has been reached. Figure 10.6 shows the progression of the algorithm on the toy example from Figure 10.5. K-means clustering derives its name from the fact that in Step 2(a), the cluster centroids are computed as the mean of the observations assigned to each cluster.

Because the K-means algorithm finds a local rather than a global optimum, the results obtained will depend on the initial (random) cluster assignment of each observation in Step 1 of Algorithm 10.1. For this reason, it is important to run the algorithm multiple times from different random initial configurations. Then one selects the best solution, i.e. that for which the objective (10.11) is smallest.
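A bare-bones sketch of Algorithm 10.1 in R follows. It is only an illustration (the function name simple_kmeans is ours, empty clusters and multiple starts are not handled); in practice one would use the kmeans() function shown in the lab.

simple_kmeans <- function(x, K, max_iter = 100) {
  labels <- sample(K, nrow(x), replace = TRUE)     # Step 1: random initial assignments
  for (iter in seq_len(max_iter)) {
    # Step 2(a): each centroid is the vector of feature means for its cluster
    centroids <- t(sapply(1:K, function(k) colMeans(x[labels == k, , drop = FALSE])))
    # Step 2(b): assign every observation to the closest centroid (squared Euclidean distance)
    d2 <- sapply(1:K, function(k) colSums((t(x) - centroids[k, ])^2))
    new_labels <- apply(d2, 1, which.min)
    if (all(new_labels == labels)) break           # cluster assignments stopped changing
    labels <- new_labels
  }
  labels
}

table(simple_kmeans(as.matrix(scale(USArrests)), K = 3))   # e.g. cluster sizes on the scaled USArrests data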

FIGURE 10.6. The progress of the K-means algorithm on the example of Figure 10.5 with K = 3. Top left: the observations are shown. Top center: in Step 1 of the algorithm, each observation is randomly assigned to a cluster. Top right: in Step 2(a), the cluster centroids are computed. These are shown as large colored disks. Initially the centroids are almost completely overlapping because the initial cluster assignments were chosen at random. Bottom left: in Step 2(b), each observation is assigned to the nearest centroid. Bottom center: Step 2(a) is once again performed, leading to new cluster centroids. Bottom right: the results obtained after ten iterations.

Figure 10.7 shows the local optima obtained by running K-means clustering six times using six different initial cluster assignments, using the toy data from Figure 10.5. In this case, the best clustering is the one with an objective value of 235.8.

As we have seen, to perform K-means clustering, we must decide how many clusters we expect in the data. The problem of selecting K is far from simple. This issue, along with other practical considerations that arise in performing K-means clustering, is addressed in Section 10.3.3.

FIGURE 10.7. K-means clustering performed six times on the data from Figure 10.5 with K = 3, each time with a different random assignment of the observations in Step 1 of the K-means algorithm. Above each plot is the value of the objective (10.11). Three different local optima were obtained, one of which resulted in a smaller value of the objective and provides better separation between the clusters. Those labeled in red all achieved the same best solution, with an objective value of 235.8.

10.3.2 Hierarchical Clustering

One potential disadvantage of K-means clustering is that it requires us to pre-specify the number of clusters K. Hierarchical clustering is an alternative approach which does not require that we commit to a particular choice of K. Hierarchical clustering has an added advantage over K-means clustering in that it results in an attractive tree-based representation of the observations, called a dendrogram.

In this section, we describe bottom-up or agglomerative clustering. This is the most common type of hierarchical clustering, and refers to the fact that a dendrogram (generally depicted as an upside-down tree; see Figure 10.9) is built starting from the leaves and combining clusters up to the trunk.

FIGURE 10.8. Forty-five observations generated in two-dimensional space. In reality there are three distinct classes, shown in separate colors. However, we will treat these class labels as unknown and will seek to cluster the observations in order to discover the classes from the data.

We will begin with a discussion of how to interpret a dendrogram and then discuss how hierarchical clustering is actually performed—that is, how the dendrogram is built.

Interpreting a Dendrogram

We begin with the simulated data set shown in Figure 10.8, consisting of 45 observations in two-dimensional space. The data were generated from a three-class model; the true class labels for each observation are shown in distinct colors. However, suppose that the data were observed without the class labels, and that we wanted to perform hierarchical clustering of the data. Hierarchical clustering (with complete linkage, to be discussed later) yields the result shown in the left-hand panel of Figure 10.9. How can we interpret this dendrogram?

In the left-hand panel of Figure 10.9, each leaf of the dendrogram represents one of the 45 observations in Figure 10.8. However, as we move up the tree, some leaves begin to fuse into branches. These correspond to observations that are similar to each other. As we move higher up the tree, branches themselves fuse, either with leaves or other branches. The earlier (lower in the tree) fusions occur, the more similar the groups of observations are to each other. On the other hand, observations that fuse later (near the top of the tree) can be quite different. In fact, this statement can be made precise: for any two observations, we can look for the point in the tree where branches containing those two observations are first fused. The height of this fusion, as measured on the vertical axis, indicates how different the two observations are.

FIGURE 10.9. Left: dendrogram obtained from hierarchically clustering the data from Figure 10.8 with complete linkage and Euclidean distance. Center: the dendrogram from the left-hand panel, cut at a height of nine (indicated by the dashed line). This cut results in two distinct clusters, shown in different colors. Right: the dendrogram from the left-hand panel, now cut at a height of five. This cut results in three distinct clusters, shown in different colors. Note that the colors were not used in clustering, but are simply used for display purposes in this figure.

Thus, observations that fuse at the very bottom of the tree are quite similar to each other, whereas observations that fuse close to the top of the tree will tend to be quite different.

This highlights a very important point in interpreting dendrograms that is often misunderstood. Consider the left-hand panel of Figure 10.10, which shows a simple dendrogram obtained from hierarchically clustering nine observations. One can see that observations 5 and 7 are quite similar to each other, since they fuse at the lowest point on the dendrogram. Observations 1 and 6 are also quite similar to each other. However, it is tempting but incorrect to conclude from the figure that observations 9 and 2 are quite similar to each other on the basis that they are located near each other on the dendrogram. In fact, based on the information contained in the dendrogram, observation 9 is no more similar to observation 2 than it is to observations 8, 5, and 7. (This can be seen from the right-hand panel of Figure 10.10, in which the raw data are displayed.) To put it mathematically, there are 2^(n-1) possible reorderings of the dendrogram, where n is the number of leaves. This is because at each of the n - 1 points where fusions occur, the positions of the two fused branches could be swapped without affecting the meaning of the dendrogram. Therefore, we cannot draw conclusions about the similarity of two observations based on their proximity along the horizontal axis. Rather, we draw conclusions about the similarity of two observations based on the location on the vertical axis where branches containing those two observations first are fused.
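These fusion heights can also be read off programmatically. The sketch below, with a small made-up data matrix x, uses the cophenetic() function, which returns for every pair of observations the height at which they are first fused; equal entries in a row mean "no more similar than", exactly as in the discussion above.

set.seed(1)
x  <- matrix(rnorm(9 * 2), ncol = 2)     # nine made-up observations in two dimensions
hc <- hclust(dist(x), method = "complete")
round(as.matrix(cophenetic(hc)), 2)      # entry (i, j): height at which observations i and j are first fused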

FIGURE 10.10. An illustration of how to properly interpret a dendrogram with nine observations in two-dimensional space. Left: a dendrogram generated using Euclidean distance and complete linkage. Observations 5 and 7 are quite similar to each other, as are observations 1 and 6. However, observation 9 is no more similar to observation 2 than it is to observations 8, 5, and 7, even though observations 9 and 2 are close together in terms of horizontal distance. This is because observations 2, 8, 5, and 7 all fuse with observation 9 at the same height, approximately 1.8. Right: the raw data used to generate the dendrogram can be used to confirm that indeed, observation 9 is no more similar to observation 2 than it is to observations 8, 5, and 7.

Now that we understand how to interpret the left-hand panel of Figure 10.9, we can move on to the issue of identifying clusters on the basis of a dendrogram. In order to do this, we make a horizontal cut across the dendrogram, as shown in the center and right-hand panels of Figure 10.9. The distinct sets of observations beneath the cut can be interpreted as clusters. In the center panel of Figure 10.9, cutting the dendrogram at a height of nine results in two clusters, shown in distinct colors. In the right-hand panel, cutting the dendrogram at a height of five results in three clusters. Further cuts can be made as one descends the dendrogram in order to obtain any number of clusters, between 1 (corresponding to no cut) and n (corresponding to a cut at height 0, so that each observation is in its own cluster). In other words, the height of the cut to the dendrogram serves the same role as the K in K-means clustering: it controls the number of clusters obtained.

Figure 10.9 therefore highlights a very attractive aspect of hierarchical clustering: one single dendrogram can be used to obtain any number of clusters. In practice, people often look at the dendrogram and select by eye a sensible number of clusters, based on the heights of the fusions and the number of clusters desired. In the case of Figure 10.9, one might choose to select either two or three clusters. However, often the choice of where to cut the dendrogram is not so clear.
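In R, both views of the cut are available through the cutree() function; a small sketch, assuming an hclust object hc such as the one built in the previous snippet:

hc <- hclust(dist(x), method = "complete")   # as before, for some numeric matrix x
cutree(hc, h = 1.5)   # cut at a chosen height on the vertical axis
cutree(hc, k = 3)     # equivalently, request a chosen number of clusters directly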

The term hierarchical refers to the fact that clusters obtained by cutting the dendrogram at a given height are necessarily nested within the clusters obtained by cutting the dendrogram at any greater height. However, on an arbitrary data set, this assumption of hierarchical structure might be unrealistic. For instance, suppose that our observations correspond to a group of people with a 50–50 split of males and females, evenly split among Americans, Japanese, and French. We can imagine a scenario in which the best division into two groups might split these people by gender, and the best division into three groups might split them by nationality. In this case, the true clusters are not nested, in the sense that the best division into three groups does not result from taking the best division into two groups and splitting up one of those groups. Consequently, this situation could not be well-represented by hierarchical clustering. Due to situations such as this one, hierarchical clustering can sometimes yield worse (i.e. less accurate) results than K-means clustering for a given number of clusters.

The Hierarchical Clustering Algorithm

The hierarchical clustering dendrogram is obtained via an extremely simple algorithm. We begin by defining some sort of dissimilarity measure between each pair of observations. Most often, Euclidean distance is used; we will discuss the choice of dissimilarity measure later in this chapter. The algorithm proceeds iteratively. Starting out at the bottom of the dendrogram, each of the n observations is treated as its own cluster. The two clusters that are most similar to each other are then fused so that there now are n - 1 clusters. Next the two clusters that are most similar to each other are fused again, so that there now are n - 2 clusters. The algorithm proceeds in this fashion until all of the observations belong to one single cluster, and the dendrogram is complete. Figure 10.11 depicts the first few steps of the algorithm, for the data from Figure 10.10. To summarize, the hierarchical clustering algorithm is given in Algorithm 10.2.

This algorithm seems simple enough, but one issue has not been addressed. Consider the bottom right panel in Figure 10.11. How did we determine that the cluster {5, 7} should be fused with the cluster {8}? We have a concept of the dissimilarity between pairs of observations, but how do we define the dissimilarity between two clusters if one or both of the clusters contains multiple observations? The concept of dissimilarity between a pair of observations needs to be extended to a pair of groups of observations. This extension is achieved by developing the notion of linkage, which defines the dissimilarity between two groups of observations. The four most common types of linkage—complete, average, single, and centroid—are briefly described in Table 10.2. Average, complete, and single linkage are most popular among statisticians.

Algorithm 10.2 Hierarchical Clustering

1. Begin with n observations and a measure (such as Euclidean distance) of all the \binom{n}{2} = n(n - 1)/2 pairwise dissimilarities. Treat each observation as its own cluster.

2. For i = n, n - 1, ..., 2:

   (a) Examine all pairwise inter-cluster dissimilarities among the i clusters and identify the pair of clusters that are least dissimilar (that is, most similar). Fuse these two clusters. The dissimilarity between these two clusters indicates the height in the dendrogram at which the fusion should be placed.

   (b) Compute the new pairwise inter-cluster dissimilarities among the i - 1 remaining clusters.

TABLE 10.2. A summary of the four most commonly-used types of linkage in hierarchical clustering.

Complete: Maximal intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the largest of these dissimilarities.

Single: Minimal intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the smallest of these dissimilarities. Single linkage can result in extended, trailing clusters in which single observations are fused one-at-a-time.

Average: Mean intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the average of these dissimilarities.

Centroid: Dissimilarity between the centroid for cluster A (a mean vector of length p) and the centroid for cluster B. Centroid linkage can result in undesirable inversions.

Average and complete linkage are generally preferred over single linkage, as they tend to yield more balanced dendrograms. Centroid linkage is often used in genomics, but suffers from a major drawback in that an inversion can occur, whereby two clusters are fused at a height below either of the individual clusters in the dendrogram. This can lead to difficulties in visualization as well as in interpretation of the dendrogram. The dissimilarities computed in Step 2(b) of the hierarchical clustering algorithm will depend on the type of linkage used, as well as on the choice of dissimilarity measure.
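The definitions of complete, single, and average linkage in Table 10.2 are easy to compute by hand. The sketch below uses a made-up data matrix x and made-up cluster memberships A and B; it simply records the largest, smallest, and mean pairwise dissimilarities between the two groups.

set.seed(1)
x <- matrix(rnorm(9 * 2), ncol = 2)
D <- as.matrix(dist(x))          # all pairwise Euclidean dissimilarities
A <- c(5, 7); B <- 8             # two hypothetical clusters
max(D[A, B])                     # complete linkage: the largest inter-cluster dissimilarity
min(D[A, B])                     # single linkage: the smallest inter-cluster dissimilarity
mean(D[A, B])                    # average linkage: the mean inter-cluster dissimilarity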

FIGURE 10.11. An illustration of the first few steps of the hierarchical clustering algorithm, using the data from Figure 10.10, with complete linkage and Euclidean distance. Top Left: initially, there are nine distinct clusters, {1}, {2}, ..., {9}. Top Right: the two clusters that are closest together, {5} and {7}, are fused into a single cluster. Bottom Left: the two clusters that are closest together, {6} and {1}, are fused into a single cluster. Bottom Right: the two clusters that are closest together using complete linkage, {8} and the cluster {5, 7}, are fused into a single cluster.

Hence, the resulting dendrogram typically depends quite strongly on the type of linkage used, as is shown in Figure 10.12.

Choice of Dissimilarity Measure

Thus far, the examples in this chapter have used Euclidean distance as the dissimilarity measure. But sometimes other dissimilarity measures might be preferred. For example, correlation-based distance considers two observations to be similar if their features are highly correlated, even though the observed values may be far apart in terms of Euclidean distance.

FIGURE 10.12. Average, complete, and single linkage applied to an example data set. Average and complete linkage tend to yield more balanced clusters.

This is an unusual use of correlation, which is normally computed between variables; here it is computed between the observation profiles for each pair of observations. Figure 10.13 illustrates the difference between Euclidean and correlation-based distance. Correlation-based distance focuses on the shapes of observation profiles rather than their magnitudes.

The choice of dissimilarity measure is very important, as it has a strong effect on the resulting dendrogram. In general, careful attention should be paid to the type of data being clustered and the scientific question at hand. These considerations should determine what type of dissimilarity measure is used for hierarchical clustering.
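A small sketch of the distinction, with three made-up observation profiles in the spirit of Figure 10.13 (the numbers themselves are invented): observations whose profiles move together have a small correlation-based distance even when their Euclidean distance is large.

x <- rbind(obs1 = 1:20,
           obs2 = 1:20 + 10,      # same shape as obs1, very different magnitude
           obs3 = sample(20))     # similar magnitude to obs1, unrelated shape
round(as.matrix(dist(x)), 1)      # Euclidean distance between the rows
round(1 - cor(t(x)), 2)           # correlation-based distance between the rows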

FIGURE 10.13. Three observations with measurements on 20 variables are shown. Observations 1 and 3 have similar values for each variable and so there is a small Euclidean distance between them. But they are very weakly correlated, so they have a large correlation-based distance. On the other hand, observations 1 and 2 have quite different values for each variable, and so there is a large Euclidean distance between them. But they are highly correlated, so there is a small correlation-based distance between them.

For instance, consider an online retailer interested in clustering shoppers based on their past shopping histories. The goal is to identify subgroups of similar shoppers, so that shoppers within each subgroup can be shown items and advertisements that are particularly likely to interest them. Suppose the data takes the form of a matrix where the rows are the shoppers and the columns are the items available for purchase; the elements of the data matrix indicate the number of times a given shopper has purchased a given item (i.e. a 0 if the shopper has never purchased this item, a 1 if the shopper has purchased it once, etc.). What type of dissimilarity measure should be used to cluster the shoppers? If Euclidean distance is used, then shoppers who have bought very few items overall (i.e. infrequent users of the online shopping site) will be clustered together. This may not be desirable. On the other hand, if correlation-based distance is used, then shoppers with similar preferences (e.g. shoppers who have bought items A and B but never items C or D) will be clustered together, even if some shoppers with these preferences are higher-volume shoppers than others. Therefore, for this application, correlation-based distance may be a better choice.

In addition to carefully selecting the dissimilarity measure used, one must also consider whether or not the variables should be scaled to have standard deviation one before the dissimilarity between the observations is computed. To illustrate this point, we continue with the online shopping example just described. Some items may be purchased more frequently than others; for instance, a shopper might buy ten pairs of socks a year, but a computer very rarely. High-frequency purchases like socks therefore tend to have a much larger effect on the inter-shopper dissimilarities, and hence on the clustering ultimately obtained, than rare purchases like computers. This may not be desirable. If the variables are scaled to have standard deviation one before the inter-observation dissimilarities are computed, then each variable will in effect be given equal importance in the hierarchical clustering performed. We might also want to scale the variables to have standard deviation one if they are measured on different scales; otherwise, the choice of units (e.g. centimeters versus kilometers) for a particular variable will greatly affect the dissimilarity measure obtained. It should come as no surprise that whether or not it is a good decision to scale the variables before computing the dissimilarity measure depends on the application at hand. An example is shown in Figure 10.14. We note that the issue of whether or not to scale the variables before performing clustering applies to K-means clustering as well.
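A toy version of the socks-and-computers scenario (the purchase counts below are made up for illustration) shows the effect of scaling before the dissimilarities are computed:

purchases <- cbind(socks     = c(8, 10, 11, 7, 9, 12, 10, 8),
                   computers = c(0, 1, 1, 0, 1, 2, 0, 1))
round(dist(purchases), 1)          # raw counts: differences in socks drive the dissimilarities
round(dist(scale(purchases)), 1)   # scaled to unit standard deviation: both items contribute comparably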

FIGURE 10.14. An eclectic online retailer sells two items: socks and computers. Left: the number of pairs of socks, and computers, purchased by eight online shoppers is displayed. Each shopper is shown in a different color. If inter-observation dissimilarities are computed using Euclidean distance on the raw variables, then the number of socks purchased by an individual will drive the dissimilarities obtained, and the number of computers purchased will have little effect. This might be undesirable, since (1) computers are more expensive than socks and so the online retailer may be more interested in encouraging shoppers to buy computers than socks, and (2) a large difference in the number of socks purchased by two shoppers may be less informative about the shoppers' overall shopping preferences than a small difference in the number of computers purchased. Center: the same data is shown, after scaling each variable by its standard deviation. Now the number of computers purchased will have a much greater effect on the inter-observation dissimilarities obtained. Right: the same data are displayed, but now the y-axis represents the number of dollars spent by each online shopper on socks and on computers. Since computers are much more expensive than socks, now computer purchase history will drive the inter-observation dissimilarities obtained.

10.3.3 Practical Issues in Clustering

Clustering can be a very useful tool for data analysis in the unsupervised setting. However, there are a number of issues that arise in performing clustering. We describe some of these issues here.

Small Decisions with Big Consequences

In order to perform clustering, some decisions must be made.

• Should the observations or features first be standardized in some way? For instance, maybe the variables should be centered to have mean zero and scaled to have standard deviation one.

• In the case of hierarchical clustering,
  – What dissimilarity measure should be used?
  – What type of linkage should be used?
  – Where should we cut the dendrogram in order to obtain clusters?

• In the case of K-means clustering, how many clusters should we look for in the data?

Each of these decisions can have a strong impact on the results obtained. In practice, we try several different choices, and look for the one with the most useful or interpretable solution. With these methods, there is no single right answer—any solution that exposes some interesting aspects of the data should be considered.

Validating the Clusters Obtained

Any time clustering is performed on a data set we will find clusters. But we really want to know whether the clusters that have been found represent true subgroups in the data, or whether they are simply a result of clustering the noise. For instance, if we were to obtain an independent set of observations, then would those observations also display the same set of clusters? This is a hard question to answer. There exist a number of techniques for assigning a p-value to a cluster in order to assess whether there is more evidence for the cluster than one would expect due to chance. However, there has been no consensus on a single best approach. More details can be found in Hastie et al. (2009).

Other Considerations in Clustering

Both K-means and hierarchical clustering will assign each observation to a cluster. However, sometimes this might not be appropriate. For instance, suppose that most of the observations truly belong to a small number of (unknown) subgroups, and a small subset of the observations are quite different from each other and from all other observations. Then since K-means and hierarchical clustering force every observation into a cluster, the clusters found may be heavily distorted due to the presence of outliers that do not belong to any cluster. Mixture models are an attractive approach for accommodating the presence of such outliers. These amount to a soft version of K-means clustering, and are described in Hastie et al. (2009).

In addition, clustering methods generally are not very robust to perturbations to the data. For instance, suppose that we cluster n observations, and then cluster the observations again after removing a subset of the n observations at random. One would hope that the two sets of clusters obtained would be quite similar, but often this is not the case!
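A rough way to see this non-robustness for oneself (a sketch only, with simulated data and K-means standing in for any clustering method): cluster all of the observations, re-cluster a random subset, and cross-tabulate the two sets of labels on the shared observations.

set.seed(1)
x    <- matrix(rnorm(100 * 2), ncol = 2)
full <- kmeans(x, centers = 3, nstart = 20)$cluster
keep <- sample(nrow(x), 80)                       # drop 20% of the observations at random
sub  <- kmeans(x[keep, ], centers = 3, nstart = 20)$cluster
table(full = full[keep], subset = sub)            # labels are arbitrary; look for a clean one-to-one pattern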

A Tempered Approach to Interpreting the Results of Clustering

We have described some of the issues associated with clustering. However, clustering can be a very useful and valid statistical tool if used properly. We mentioned that small decisions in how clustering is performed, such as how the data are standardized and what type of linkage is used, can have a large effect on the results. Therefore, we recommend performing clustering with different choices of these parameters, and looking at the full set of results in order to see what patterns consistently emerge. Since clustering can be non-robust, we recommend clustering subsets of the data in order to get a sense of the robustness of the clusters obtained. Most importantly, we must be careful about how the results of a clustering analysis are reported. These results should not be taken as the absolute truth about a data set. Rather, they should constitute a starting point for the development of a scientific hypothesis and further study, preferably on an independent data set.

10.4 Lab 1: Principal Components Analysis

In this lab, we perform PCA on the USArrests data set, which is part of the base R package. The rows of the data set contain the 50 states, in alphabetical order.

> states=row.names(USArrests)
> states

The columns of the data set contain the four variables.

> names(USArrests)
[1] "Murder"   "Assault"  "UrbanPop" "Rape"

We first briefly examine the data. We notice that the variables have vastly different means.

> apply(USArrests, 2, mean)
  Murder  Assault UrbanPop     Rape
    7.79   170.76    65.54    21.23

Note that the apply() function allows us to apply a function—in this case, the mean() function—to each row or column of the data set. The second input here denotes whether we wish to compute the mean of the rows, 1, or the columns, 2. We see that there are on average three times as many rapes as murders, and more than eight times as many assaults as rapes. We can also examine the variances of the four variables using the apply() function.

> apply(USArrests, 2, var)
  Murder  Assault UrbanPop     Rape
    19.0   6945.2    209.5     87.7

Not surprisingly, the variables also have vastly different variances: the UrbanPop variable measures the percentage of the population in each state living in an urban area, which is not a comparable number to the number of rapes in each state per 100,000 individuals. If we failed to scale the variables before performing PCA, then most of the principal components that we observed would be driven by the Assault variable, since it has by far the largest mean and variance. Thus, it is important to standardize the variables to have mean zero and standard deviation one before performing PCA.

We now perform principal components analysis using the prcomp() function, which is one of several functions in R that perform PCA.

> pr.out=prcomp(USArrests, scale=TRUE)

By default, the prcomp() function centers the variables to have mean zero. By using the option scale=TRUE, we scale the variables to have standard deviation one. The output from prcomp() contains a number of useful quantities.

> names(pr.out)
[1] "sdev"     "rotation" "center"   "scale"    "x"

The center and scale components correspond to the means and standard deviations of the variables that were used for scaling prior to implementing PCA.

> pr.out$center
  Murder  Assault UrbanPop     Rape
    7.79   170.76    65.54    21.23
> pr.out$scale
  Murder  Assault UrbanPop     Rape
    4.36    83.34    14.47     9.37

The rotation matrix provides the principal component loadings; each column of pr.out$rotation contains the corresponding principal component loading vector.²

> pr.out$rotation
              PC1    PC2    PC3    PC4
Murder   -0.536  0.418 -0.341  0.649
Assault  -0.583  0.188 -0.268 -0.743
UrbanPop -0.278 -0.873 -0.378  0.134
Rape     -0.543 -0.167  0.818  0.089

We see that there are four distinct principal components. This is to be expected because there are in general min(n - 1, p) informative principal components in a data set with n observations and p variables.

² This function names it the rotation matrix, because when we matrix-multiply the X matrix by pr.out$rotation, it gives us the coordinates of the data in the rotated coordinate system. These coordinates are the principal component scores.

Using the prcomp() function, we do not need to explicitly multiply the data by the principal component loading vectors in order to obtain the principal component score vectors. Rather, the 50 × 4 matrix x has as its columns the principal component score vectors. That is, the kth column is the kth principal component score vector.

> dim(pr.out$x)
[1] 50  4

We can plot the first two principal components as follows:

> biplot(pr.out, scale=0)

The scale=0 argument to biplot() ensures that the arrows are scaled to represent the loadings; other values for scale give slightly different biplots with different interpretations. Notice that this figure is a mirror image of Figure 10.1. Recall that the principal components are only unique up to a sign change, so we can reproduce Figure 10.1 by making a few small changes:

> pr.out$rotation=-pr.out$rotation
> pr.out$x=-pr.out$x
> biplot(pr.out, scale=0)

The prcomp() function also outputs the standard deviation of each principal component. For instance, on the USArrests data set, we can access these standard deviations as follows:

> pr.out$sdev
[1] 1.575 0.995 0.597 0.416

The variance explained by each principal component is obtained by squaring these:

> pr.var=pr.out$sdev^2
> pr.var
[1] 2.480 0.990 0.357 0.173

To compute the proportion of variance explained by each principal component, we simply divide the variance explained by each principal component by the total variance explained by all four principal components:

> pve=pr.var/sum(pr.var)
> pve
[1] 0.6201 0.2474 0.0891 0.0434

We see that the first principal component explains 62.0% of the variance in the data, the next principal component explains 24.7% of the variance, and so forth. We can plot the PVE explained by each component, as well as the cumulative PVE, as follows:

> plot(pve, xlab="Principal Component", ylab="Proportion of Variance Explained", ylim=c(0,1), type='b')
> plot(cumsum(pve), xlab="Principal Component", ylab="Cumulative Proportion of Variance Explained", ylim=c(0,1), type='b')

The result is shown in Figure 10.4. Note that the function cumsum() computes the cumulative sum of the elements of a numeric vector. For instance:

> a=c(1,2,8,-3)
> cumsum(a)
[1]  1  3 11  8

10.5 Lab 2: Clustering

10.5.1 K-Means Clustering

The function kmeans() performs K-means clustering in R. We begin with a simple simulated example in which there truly are two clusters in the data: the first 25 observations have a mean shift relative to the next 25 observations.

> set.seed(2)
> x=matrix(rnorm(50*2), ncol=2)
> x[1:25,1]=x[1:25,1]+3
> x[1:25,2]=x[1:25,2]-4

We now perform K-means clustering with K = 2.

> km.out=kmeans(x,2,nstart=20)

The cluster assignments of the 50 observations are contained in km.out$cluster.

> km.out$cluster
 [1] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1
[30] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

The K-means clustering perfectly separated the observations into two clusters even though we did not supply any group information to kmeans(). We can plot the data, with each observation colored according to its cluster assignment.

> plot(x, col=(km.out$cluster+1), main="K-Means Clustering Results with K=2", xlab="", ylab="", pch=20, cex=2)

Here the observations can be easily plotted because they are two-dimensional. If there were more than two variables then we could instead perform PCA and plot the first two principal components score vectors.

In this example, we knew that there really were two clusters because we generated the data. However, for real data, in general we do not know the true number of clusters. We could instead have performed K-means clustering on this example with K = 3.

> set.seed(4)
> km.out=kmeans(x,3,nstart=20)
> km.out
K-means clustering with 3 clusters of sizes 10, 23, 17

Cluster means:
        [,1]        [,2]
1  2.3001545 -2.69622023
2 -0.3820397 -0.08740753
3  3.7789567 -4.56200798

Clustering vector:
 [1] 3 1 3 1 3 3 3 1 3 1 3 1 3 1 3 1 3 3 3 3 3 1 3 3 3 2 2 2 2
[30] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 1 2 2 2 2

Within cluster sum of squares by cluster:
[1] 19.56137 52.67700 25.74089
 (between_SS / total_SS =  79.3 %)

Available components:
[1] "cluster" "centers" "totss" "withinss" "tot.withinss" "betweenss" "size"

> plot(x, col=(km.out$cluster+1), main="K-Means Clustering Results with K=3", xlab="", ylab="", pch=20, cex=2)

When K = 3, K-means clustering splits up the two clusters.

To run the kmeans() function in R with multiple initial cluster assignments, we use the nstart argument. If a value of nstart greater than one is used, then K-means clustering will be performed using multiple random assignments in Step 1 of Algorithm 10.1, and the kmeans() function will report only the best results. Here we compare using nstart=1 to nstart=20.

> set.seed(3)
> km.out=kmeans(x,3,nstart=1)
> km.out$tot.withinss
[1] 104.3319
> km.out=kmeans(x,3,nstart=20)
> km.out$tot.withinss
[1] 97.9793

Note that km.out$tot.withinss is the total within-cluster sum of squares, which we seek to minimize by performing K-means clustering (Equation 10.11). The individual within-cluster sum-of-squares are contained in the vector km.out$withinss.

We strongly recommend always running K-means clustering with a large value of nstart, such as 20 or 50, since otherwise an undesirable local optimum may be obtained.

When performing K-means clustering, in addition to using multiple initial cluster assignments, it is also important to set a random seed using the set.seed() function. This way, the initial cluster assignments in Step 1 can be replicated, and the K-means output will be fully reproducible.

10.5.2 Hierarchical Clustering

The hclust() function implements hierarchical clustering in R. In the following example we use the data from Section 10.5.1 to plot the hierarchical clustering dendrogram using complete, single, and average linkage clustering, with Euclidean distance as the dissimilarity measure. We begin by clustering observations using complete linkage. The dist() function is used to compute the 50 x 50 inter-observation Euclidean distance matrix.

> hc.complete=hclust(dist(x), method="complete")

We could just as easily perform hierarchical clustering with average or single linkage instead:

> hc.average=hclust(dist(x), method="average")
> hc.single=hclust(dist(x), method="single")

We can now plot the dendrograms obtained using the usual plot() function. The numbers at the bottom of the plot identify each observation.

> par(mfrow=c(1,3))
> plot(hc.complete, main="Complete Linkage", xlab="", sub="", cex=.9)
> plot(hc.average, main="Average Linkage", xlab="", sub="", cex=.9)
> plot(hc.single, main="Single Linkage", xlab="", sub="", cex=.9)

To determine the cluster labels for each observation associated with a given cut of the dendrogram, we can use the cutree() function:

> cutree(hc.complete, 2)
 [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2
[30] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
> cutree(hc.average, 2)
 [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2
[30] 2 2 2 1 2 2 2 2 2 2 2 2 2 2 1 2 1 2 2 2 2
> cutree(hc.single, 2)
 [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1
[30] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

For this data, complete and average linkage generally separate the observations into their correct groups. However, single linkage identifies one point as belonging to its own cluster. A more sensible answer is obtained when four clusters are selected, although there are still two singletons.

> cutree(hc.single, 4)
 [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 3 3 3 3
[30] 3 3 3 3 3 3 3 3 3 3 3 3 4 3 3 3 3 3 3 3 3

To scale the variables before performing hierarchical clustering of the observations, we use the scale() function:

> xsc=scale(x)
> plot(hclust(dist(xsc), method="complete"), main="Hierarchical Clustering with Scaled Features")
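To check whether scaling actually changes the clustering in this small example, one option is to cut the unscaled and scaled complete-linkage dendrograms into the same number of clusters and cross-tabulate the two sets of labels. This is a sketch rather than part of the lab above; it assumes hc.complete and xsc are still in the workspace, and hc.sc.complete is a name introduced here for illustration.

> # complete-linkage clustering on the scaled data
> hc.sc.complete=hclust(dist(xsc), method="complete")
> # compare two-cluster cuts with and without scaling
> table(cutree(hc.complete, 2), cutree(hc.sc.complete, 2))

If scaling makes no difference for these simulated data, all of the counts will fall on a single diagonal of the resulting 2 x 2 table.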

Correlation-based distance can be computed using the as.dist() function, which converts an arbitrary square symmetric matrix into a form that the hclust() function recognizes as a distance matrix. However, this only makes sense for data with at least three features since the absolute correlation between any two observations with measurements on two features is always 1. Hence, we will cluster a three-dimensional data set.

> x=matrix(rnorm(30*3), ncol=3)
> dd=as.dist(1-cor(t(x)))
> plot(hclust(dd, method="complete"), main="Complete Linkage with Correlation-Based Distance", xlab="", sub="")

10.6 Lab 3: NCI60 Data Example

Unsupervised techniques are often used in the analysis of genomic data. In particular, PCA and hierarchical clustering are popular tools. We illustrate these techniques on the NCI60 cancer cell line microarray data, which consists of 6,830 gene expression measurements on 64 cancer cell lines.

> library(ISLR)
> nci.labs=NCI60$labs
> nci.data=NCI60$data

Each cell line is labeled with a cancer type. We do not make use of the cancer types in performing PCA and clustering, as these are unsupervised techniques. But after performing PCA and clustering, we will check to see the extent to which these cancer types agree with the results of these unsupervised techniques.

The data has 64 rows and 6,830 columns.

> dim(nci.data)
[1]   64 6830

We begin by examining the cancer types for the cell lines.

> nci.labs[1:4]
[1] "CNS"   "CNS"   "CNS"   "RENAL"
> table(nci.labs)
nci.labs
     BREAST         CNS       COLON K562A-repro K562B-repro
          7           5           7           1           1
   LEUKEMIA MCF7A-repro MCF7D-repro    MELANOMA       NSCLC
          6           1           1           8           9
    OVARIAN    PROSTATE       RENAL     UNKNOWN
          6           2           9           1

10.6.1 PCA on the NCI60 Data

We first perform PCA on the data after scaling the variables (genes) to have standard deviation one, although one could reasonably argue that it is better not to scale the genes.

> pr.out=prcomp(nci.data, scale=TRUE)

We now plot the first few principal component score vectors, in order to visualize the data. The observations (cell lines) corresponding to a given cancer type will be plotted in the same color, so that we can see to what extent the observations within a cancer type are similar to each other. We first create a simple function that assigns a distinct color to each element of a numeric vector. The function will be used to assign a color to each of the 64 cell lines, based on the cancer type to which it corresponds.

> Cols=function(vec){
+   cols=rainbow(length(unique(vec)))
+   return(cols[as.numeric(as.factor(vec))])
+ }

Note that the rainbow() function takes as its argument a positive integer, and returns a vector containing that number of distinct colors. We now can plot the principal component score vectors.

> par(mfrow=c(1,2))
> plot(pr.out$x[,1:2], col=Cols(nci.labs), pch=19, xlab="Z1", ylab="Z2")
> plot(pr.out$x[,c(1,3)], col=Cols(nci.labs), pch=19, xlab="Z1", ylab="Z3")

The resulting plots are shown in Figure 10.15. On the whole, cell lines corresponding to a single cancer type do tend to have similar values on the first few principal component score vectors. This indicates that cell lines from the same cancer type tend to have pretty similar gene expression levels.

We can obtain a summary of the proportion of variance explained (PVE) of the first few principal components using the summary() method for a prcomp object (we have truncated the printout):

> summary(pr.out)
Importance of components:
                          PC1     PC2     PC3     PC4     PC5
Standard deviation     27.853 21.4814 19.8205 17.0326 15.9718
Proportion of Variance  0.114  0.0676  0.0575  0.0425  0.0374
Cumulative Proportion   0.114  0.1812  0.2387  0.2812  0.3185

Using the plot() function, we can also plot the variance explained by the first few principal components.

> plot(pr.out)

Note that the height of each bar in the bar plot is given by squaring the corresponding element of pr.out$sdev.

FIGURE 10.15. Projections of the NCI60 cancer cell lines onto the first three principal components (in other words, the scores for the first three principal components). On the whole, observations belonging to a single cancer type tend to lie near each other in this low-dimensional space. It would not have been possible to visualize the data without using a dimension reduction method such as PCA, since based on the full data set there are $\binom{6{,}830}{2}$ possible scatterplots, none of which would have been particularly informative.

However, it is more informative to plot the PVE of each principal component (i.e. a scree plot) and the cumulative PVE of each principal component. This can be done with just a little work.

> pve=100*pr.out$sdev^2/sum(pr.out$sdev^2)
> par(mfrow=c(1,2))
> plot(pve, type="o", ylab="PVE", xlab="Principal Component", col="blue")
> plot(cumsum(pve), type="o", ylab="Cumulative PVE", xlab="Principal Component", col="brown3")

(Note that the elements of pve can also be computed directly from the summary, summary(pr.out)$importance[2,], and the elements of cumsum(pve) are given by summary(pr.out)$importance[3,].) The resulting plots are shown in Figure 10.16. We see that together, the first seven principal components explain around 40 % of the variance in the data. This is not a huge amount of the variance. However, looking at the scree plot, we see that while each of the first seven principal components explain a substantial amount of variance, there is a marked decrease in the variance explained by further principal components. That is, there is an elbow in the plot after approximately the seventh principal component. This suggests that there may be little benefit to examining more than seven or so principal components (though even examining seven principal components may be difficult).
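As a quick numerical check on that claim, we can compare the cumulative PVE after seven components computed by hand with the value stored by summary(). This snippet is not part of the lab code above; it assumes pve and pr.out from the preceding commands, and the exact figure it prints depends on the NCI60 data.

> # cumulative PVE (in percent) of the first seven principal components
> cumsum(pve)[7]
> # the same quantity taken from the summary() output, rescaled to percent
> 100*summary(pr.out)$importance[3,7]

The two commands should agree up to floating-point rounding, printing a value in the vicinity of 40 %.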

FIGURE 10.16. The PVE of the principal components of the NCI60 cancer cell line microarray data set. Left: the PVE of each principal component is shown. Right: the cumulative PVE of the principal components is shown. Together, all principal components explain 100 % of the variance.

10.6.2 Clustering the Observations of the NCI60 Data

We now proceed to hierarchically cluster the cell lines in the NCI60 data, with the goal of finding out whether or not the observations cluster into distinct types of cancer. To begin, we standardize the variables to have mean zero and standard deviation one. As mentioned earlier, this step is optional and should be performed only if we want each gene to be on the same scale.

> sd.data=scale(nci.data)

We now perform hierarchical clustering of the observations using complete, single, and average linkage. Euclidean distance is used as the dissimilarity measure.

> par(mfrow=c(1,3))
> data.dist=dist(sd.data)
> plot(hclust(data.dist), labels=nci.labs, main="Complete Linkage", xlab="", sub="", ylab="")
> plot(hclust(data.dist, method="average"), labels=nci.labs, main="Average Linkage", xlab="", sub="", ylab="")
> plot(hclust(data.dist, method="single"), labels=nci.labs, main="Single Linkage", xlab="", sub="", ylab="")

The results are shown in Figure 10.17. We see that the choice of linkage certainly does affect the results obtained. Typically, single linkage will tend to yield trailing clusters: very large clusters onto which individual observations attach one-by-one. On the other hand, complete and average linkage tend to yield more balanced, attractive clusters. For this reason, complete and average linkage are generally preferred to single linkage. Clearly cell lines within a single cancer type do tend to cluster together, although the clustering is not perfect.
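Before settling on a single linkage method below, an optional sketch (not part of the lab code above; it assumes data.dist and nci.labs are still in the workspace) is to cut each of the three dendrograms into four clusters and cross-tabulate the assignments against the cancer types, which makes the trailing-cluster behavior of single linkage visible directly in the tables.

> # four-cluster cuts under each linkage, tabulated against cancer type
> table(cutree(hclust(data.dist, method="complete"), 4), nci.labs)
> table(cutree(hclust(data.dist, method="average"), 4), nci.labs)
> table(cutree(hclust(data.dist, method="single"), 4), nci.labs)

With single linkage we would expect most of the 64 cell lines to fall into a single row of the table, with the remaining rows containing only a few observations each.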

FIGURE 10.17. The NCI60 cancer cell line microarray data, clustered with average, complete, and single linkage, and using Euclidean distance as the dissimilarity measure. Complete and average linkage tend to yield evenly sized clusters whereas single linkage tends to yield extended clusters to which single leaves are fused one by one.

We will use complete linkage hierarchical clustering for the analysis that follows.

We can cut the dendrogram at the height that will yield a particular number of clusters, say four:

> hc.out=hclust(dist(sd.data))
> hc.clusters=cutree(hc.out,4)
> table(hc.clusters,nci.labs)

There are some clear patterns. All the leukemia cell lines fall in cluster 3, while the breast cancer cell lines are spread out over three different clusters. We can plot the cut on the dendrogram that produces these four clusters:

> par(mfrow=c(1,1))
> plot(hc.out, labels=nci.labs)
> abline(h=139, col="red")

The abline() function draws a straight line on top of any existing plot in R. The argument h=139 plots a horizontal line at height 139 on the dendrogram; this is the height that results in four distinct clusters. It is easy to verify that the resulting clusters are the same as the ones we obtained using cutree(hc.out,4).

Printing the output of hclust gives a useful brief summary of the object:

> hc.out

Call:
hclust(d = dist(sd.data))

Cluster method   : complete
Distance         : euclidean
Number of objects: 64

We claimed earlier in Section 10.3.2 that K-means clustering and hierarchical clustering with the dendrogram cut to obtain the same number of clusters can yield very different results. How do these NCI60 hierarchical clustering results compare to what we get if we perform K-means clustering with K = 4?

> set.seed(2)
> km.out=kmeans(sd.data, 4, nstart=20)
> km.clusters=km.out$cluster
> table(km.clusters,hc.clusters)
           hc.clusters
km.clusters  1  2  3  4
          1 11  0  0  9
          2  0  0  8  0
          3  9  0  0  0
          4 20  7  0  0

We see that the four clusters obtained using hierarchical clustering and K-means clustering are somewhat different. Cluster 2 in K-means clustering is identical to cluster 3 in hierarchical clustering. However, the other clusters differ: for instance, cluster 4 in K-means clustering contains a portion of the observations assigned to cluster 1 by hierarchical clustering, as well as all of the observations assigned to cluster 2 by hierarchical clustering.

Rather than performing hierarchical clustering on the entire data matrix, we can simply perform hierarchical clustering on the first few principal component score vectors, as follows:

> hc.out=hclust(dist(pr.out$x[,1:5]))
> plot(hc.out, labels=nci.labs, main="Hier. Clust. on First Five Score Vectors")
> table(cutree(hc.out,4), nci.labs)

Not surprisingly, these results are different from the ones that we obtained when we performed hierarchical clustering on the full data set. Sometimes performing clustering on the first few principal component score vectors can give better results than performing clustering on the full data. In this situation, we might view the principal component step as one of denoising the data. We could also perform K-means clustering on the first few principal component score vectors rather than the full data set.
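For example, the following sketch carries out that last suggestion. It is not part of the lab code above; it assumes pr.out, hc.out (the clustering on the first five score vectors), and nci.labs are still in the workspace, and km.pc is a name introduced here for illustration.

> set.seed(2)
> # K-means with K=4 on the first five principal component score vectors
> km.pc=kmeans(pr.out$x[,1:5], 4, nstart=20)
> # compare with the hierarchical clusters on the same score vectors
> table(km.pc$cluster, cutree(hc.out,4))
> # and with the cancer types
> table(km.pc$cluster, nci.labs)

As before, the cluster numbers are arbitrary, so the first table should be read up to a relabeling of its rows and columns.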

10.7 Exercises

Conceptual

1. This problem involves the K-means clustering algorithm.

(a) Prove (10.12).

(b) On the basis of this identity, argue that the K-means clustering algorithm (Algorithm 10.1) decreases the objective (10.11) at each iteration.

2. Suppose that we have four observations, for which we compute a dissimilarity matrix, given by

$$\begin{bmatrix}
     & 0.3 & 0.4  & 0.7 \\
0.3  &     & 0.5  & 0.8 \\
0.4  & 0.5 &      & 0.45 \\
0.7  & 0.8 & 0.45 &
\end{bmatrix}.$$

For instance, the dissimilarity between the first and second observations is 0.3, and the dissimilarity between the second and fourth observations is 0.8.

(a) On the basis of this dissimilarity matrix, sketch the dendrogram that results from hierarchically clustering these four observations using complete linkage. Be sure to indicate on the plot the height at which each fusion occurs, as well as the observations corresponding to each leaf in the dendrogram.

(b) Repeat (a), this time using single linkage clustering.

(c) Suppose that we cut the dendrogram obtained in (a) such that two clusters result. Which observations are in each cluster?

(d) Suppose that we cut the dendrogram obtained in (b) such that two clusters result. Which observations are in each cluster?

(e) It is mentioned in the chapter that at each fusion in the dendrogram, the position of the two clusters being fused can be swapped without changing the meaning of the dendrogram. Draw a dendrogram that is equivalent to the dendrogram in (a), for which two or more of the leaves are repositioned, but for which the meaning of the dendrogram is the same.

3. In this problem, you will perform K-means clustering manually, with K = 2, on a small example with n = 6 observations and p = 2 features. The observations are as follows.

Obs.  X1  X2
  1    1   4
  2    1   3
  3    0   4
  4    5   1
  5    6   2
  6    4   0

(a) Plot the observations.

(b) Randomly assign a cluster label to each observation. You can use the sample() command in R to do this. Report the cluster labels for each observation.

(c) Compute the centroid for each cluster.

(d) Assign each observation to the centroid to which it is closest, in terms of Euclidean distance. Report the cluster labels for each observation.

(e) Repeat (c) and (d) until the answers obtained stop changing.

(f) In your plot from (a), color the observations according to the cluster labels obtained.

4. Suppose that for a particular data set, we perform hierarchical clustering using single linkage and using complete linkage. We obtain two dendrograms.

(a) At a certain point on the single linkage dendrogram, the clusters {1, 2, 3} and {4, 5} fuse. On the complete linkage dendrogram, the clusters {1, 2, 3} and {4, 5} also fuse at a certain point. Which fusion will occur higher on the tree, or will they fuse at the same height, or is there not enough information to tell?

(b) At a certain point on the single linkage dendrogram, the clusters {5} and {6} fuse. On the complete linkage dendrogram, the clusters {5} and {6} also fuse at a certain point. Which fusion will occur higher on the tree, or will they fuse at the same height, or is there not enough information to tell?

5. In words, describe the results that you would expect if you performed K-means clustering of the eight shoppers in Figure 10.14, on the basis of their sock and computer purchases, with K = 2. Give three answers, one for each of the variable scalings displayed. Explain.

6. A researcher collects expression measurements for 1,000 genes in 100 tissue samples. The data can be written as a 1,000 x 100 matrix, which we call X, in which each row represents a gene and each column a tissue sample. Each tissue sample was processed on a different day, and the columns of X are ordered so that the samples that were processed earliest are on the left, and the samples that were processed later are on the right. The tissue samples belong to two groups: control (C) and treatment (T). The C and T samples were processed in a random order across the days. The researcher wishes to determine whether each gene's expression measurements differ between the treatment and control groups.

As a pre-analysis (before comparing T versus C), the researcher performs a principal component analysis of the data, and finds that the first principal component (a vector of length 100) has a strong linear trend from left to right, and explains 10 % of the variation. The researcher now remembers that each patient sample was run on one of two machines, A and B, and machine A was used more often in the earlier times while B was used more often later. The researcher has a record of which sample was run on which machine.

(a) Explain what it means that the first principal component "explains 10 % of the variation".

(b) The researcher decides to replace the (i, j)th element of X with $x_{ij} - z_{i1}\phi_{j1}$, where $z_{i1}$ is the $i$th score, and $\phi_{j1}$ is the $j$th loading, for the first principal component. He will then perform a two-sample t-test on each gene in this new data set in order to determine whether its expression differs between the two conditions. Critique this idea, and suggest a better approach.

(c) Design and run a small simulation experiment to demonstrate the superiority of your idea.

Applied

7. In the chapter, we mentioned the use of correlation-based distance and Euclidean distance as dissimilarity measures for hierarchical clustering. It turns out that these two measures are almost equivalent: if each observation has been centered to have mean zero and standard deviation one, and if we let $r_{ij}$ denote the correlation between the $i$th and $j$th observations, then the quantity $1 - r_{ij}$ is proportional to the squared Euclidean distance between the $i$th and $j$th observations.

On the USArrests data, show that this proportionality holds.

Hint: The Euclidean distance can be calculated using the dist() function, and correlations can be calculated using the cor() function.

8. In Section 10.2.3, a formula for calculating PVE was given in Equation 10.8. We also saw that the PVE can be obtained using the sdev output of the prcomp() function.

On the USArrests data, calculate PVE in two ways:

(a) Using the sdev output of the prcomp() function, as was done in Section 10.2.3.

(b) By applying Equation 10.8 directly. That is, use the prcomp() function to compute the principal component loadings. Then, use those loadings in Equation 10.8 to obtain the PVE.

These two approaches should give the same results.

Hint: You will only obtain the same results in (a) and (b) if the same data is used in both cases. For instance, if in (a) you performed prcomp() using centered and scaled variables, then you must center and scale the variables before applying Equation 10.8 in (b).

9. Consider the USArrests data. We will now perform hierarchical clustering on the states.

(a) Using hierarchical clustering with complete linkage and Euclidean distance, cluster the states.

(b) Cut the dendrogram at a height that results in three distinct clusters. Which states belong to which clusters?

(c) Hierarchically cluster the states using complete linkage and Euclidean distance, after scaling the variables to have standard deviation one.

(d) What effect does scaling the variables have on the hierarchical clustering obtained? In your opinion, should the variables be scaled before the inter-observation dissimilarities are computed? Provide a justification for your answer.

10. In this problem, you will generate simulated data, and then perform PCA and K-means clustering on the data.

(a) Generate a simulated data set with 20 observations in each of three classes (i.e. 60 observations total), and 50 variables.

Hint: There are a number of functions in R that you can use to generate data. One example is the rnorm() function; runif() is another option. Be sure to add a mean shift to the observations in each class so that there are three distinct classes.

(b) Perform PCA on the 60 observations and plot the first two principal component score vectors. Use a different color to indicate the observations in each of the three classes. If the three classes appear separated in this plot, then continue on to part (c). If not, then return to part (a) and modify the simulation so that there is greater separation between the three classes. Do not continue to part (c) until the three classes show at least some separation in the first two principal component score vectors.

(c) Perform K-means clustering of the observations with K = 3. How well do the clusters that you obtained in K-means clustering compare to the true class labels?

Hint: You can use the table() function in R to compare the true class labels to the class labels obtained by clustering. Be careful how you interpret the results: K-means clustering will arbitrarily number the clusters, so you cannot simply check whether the true class labels and clustering labels are the same.

(d) Perform K-means clustering with K = 2. Describe your results.

(e) Now perform K-means clustering with K = 4, and describe your results.

(f) Now perform K-means clustering with K = 3 on the first two principal component score vectors, rather than on the raw data. That is, perform K-means clustering on the 60 x 2 matrix of which the first column is the first principal component score vector, and the second column is the second principal component score vector. Comment on the results.

(g) Using the scale() function, perform K-means clustering with K = 3 on the data after scaling each variable to have standard deviation one. How do these results compare to those obtained in (b)? Explain.

11. On the book website, www.StatLearning.com, there is a gene expression data set (Ch10Ex11.csv) that consists of 40 tissue samples with measurements on 1,000 genes. The first 20 samples are from healthy patients, while the second 20 are from a diseased group.

(a) Load in the data using read.csv(). You will need to select header=F.

(b) Apply hierarchical clustering to the samples using correlation-based distance, and plot the dendrogram. Do the genes separate the samples into the two groups? Do your results depend on the type of linkage used?

(c) Your collaborator wants to know which genes differ the most across the two groups. Suggest a way to answer this question, and apply it here.

