The credit decisions you make depend on the data, models, and tools you use to make them. *Developing Credit Risk Models Using SAS Enterprise Miner and SAS/STAT: Theory and Applications* combines theoretical explanation with practical application to show how you can build credit risk models using SAS Enterprise Miner and SAS/STAT and put them into practice.

The ultimate goal of credit risk modeling is to reduce losses through better, more reliable credit decisions that can be developed and deployed quickly. In this example-driven book, Dr. Brown breaks down the required modeling steps and details how they can be achieved through the implementation of SAS Enterprise Miner and SAS/STAT.

Readers will solve real-world risk problems and walk comprehensively through model development while addressing key concepts in credit risk modeling. The book is aimed at credit risk analysts in retail banking, but its applications extend beyond the retail banking sphere. Those who would benefit include credit risk analysts and managers alike, as well as analysts working in fraud, Basel compliance, and marketing analytics. It is targeted at intermediate users with a specific business focus; some programming background is required.

Efficient and effective management of the entire credit risk model lifecycle process enables you to make better credit decisions. *Developing Credit Risk Models Using SAS Enterprise Miner and SAS/STAT: Theory and Applications* demonstrates how practitioners can more accurately develop credit risk models as well as implement them in a timely fashion.

This book is part of the SAS Press Program.

Common split ratios such as 80/20 and 70/30 are used. A test set may be used for further model tuning, such as with neural network models. Figure 2.4 shows an example of an Enterprise Miner Data Partition node and property panel with a 70% randomly sampled Training set and a 30% Validation set selected.

Figure 2.4: Enterprise Miner Data Partition Node and Property Panel (Sample Tab)

2.2.2 Variable Selection

Organizations often have access to a large number of potential variables that could be used in the modeling process.
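A common screening statistic for candidate variables in credit scoring is the Information Value (discussed in Chapter 3). As a minimal illustrative sketch of the standard IV computation, written here in Python rather than in SAS and with invented function names:

```python
import math

def information_value(bins):
    """Information Value for one binned characteristic.

    bins: list of (n_good, n_bad) counts per attribute/bin.
    IV = sum over bins of (pct_good - pct_bad) * ln(pct_good / pct_bad).
    Higher IV indicates stronger separation between goods and bads.
    """
    total_good = sum(g for g, b in bins)
    total_bad = sum(b for g, b in bins)
    iv = 0.0
    for g, b in bins:
        pct_good = g / total_good     # share of all goods in this bin
        pct_bad = b / total_bad       # share of all bads in this bin
        iv += (pct_good - pct_bad) * math.log(pct_good / pct_bad)
    return iv
```

A characteristic whose bins have identical good/bad mixes yields an IV of zero; bins that concentrate goods and bads separately yield a large IV, which is why the statistic is used to rank candidate variables before modeling.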

Example code is presented below in Figure 4.3 (the example LGD_DATA set must first be standardized using PROC STANDARD):

Figure 4.3: Proc nlmixed Example Code

The MODEL statement of PROC NLMIXED is set to model stnd_LGD as a general likelihood function. This code can be formulated within a SAS Code node in the Enterprise Miner environment, as shown in Figure 4.4. More information detailing the syntax for PROC NLMIXED can be found in the SAS/STAT documentation.

Figure 4.4: Beta Regression (SAS Code Node)

Variable                                        Information Value
Credit percentage usage                         1.825
Undrawn percentage                              1.825
Undrawn                                         1.581
Relative change in undrawn amount (12 months)   0.696

For more information about the Gini Statistic and Information Value, see Chapter 3.

Strength Statistics

Another way to evaluate the model is by using strength statistics: Kolmogorov-Smirnov, Area Under the ROC Curve, and the Gini Coefficient.

Table 5.9: Model Strength Statistics

Statistic      Value
KS Statistic   0.481
AUC            0.850
Gini           0.700

Kolmogorov-Smirnov.
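These three statistics are closely related; in particular Gini = 2 × AUC − 1, consistent with Table 5.9 (2 × 0.850 − 1 = 0.700). As an illustrative sketch (Python, not from the book), all three can be computed directly from the scores of defaulted ("bad") and non-defaulted ("good") accounts:

```python
def strength_stats(bad_scores, good_scores):
    """AUC, Gini, and KS for scores where higher means riskier.

    AUC: probability that a randomly chosen bad outscores a randomly
    chosen good (ties count one half). Gini = 2*AUC - 1. KS: maximum
    separation between the empirical score distributions of bads and goods.
    """
    nb, ng = len(bad_scores), len(good_scores)
    # Pairwise comparison form of AUC (O(nb*ng); fine for a small example).
    wins = sum(1.0 if b > g else 0.5 if b == g else 0.0
               for b in bad_scores for g in good_scores)
    auc = wins / (nb * ng)
    # KS: scan every observed score as a threshold.
    thresholds = sorted(set(bad_scores) | set(good_scores))
    ks = max(abs(sum(b <= t for b in bad_scores) / nb -
                 sum(g <= t for g in good_scores) / ng)
             for t in thresholds)
    return auc, 2.0 * auc - 1.0, ks
```

With perfectly separated scores all three statistics reach 1.0; with identical score distributions AUC is 0.5 and Gini and KS are 0.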

category) or not defaulted (good category) on a loan. When preparing data for use with SAS Enterprise Miner, one must first identify how the data will be treated. Figure 2.2 shows how the data is divided into categories.

Figure 2.2: Enterprise Miner Data Source Wizard

The Enterprise Miner Data Source Wizard automatically assigns estimated levels to the data being brought into the workspace. These should then be explored to determine whether the correct levels have been assigned to the variables.
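Such level assignment is essentially a heuristic applied to each column's values. The following Python caricature of a level-guessing rule is purely illustrative (the cutoffs and names here are invented assumptions, not SAS's actual logic), but it shows why the guessed levels should always be reviewed afterwards:

```python
def guess_level(values):
    """Guess a measurement level for a column: binary, nominal, or interval.

    Illustrative heuristic only -- the distinct-count cutoff of 10 is an
    arbitrary assumption, not a rule taken from Enterprise Miner.
    """
    distinct = set(values)
    if len(distinct) == 2:
        return "binary"          # e.g. a 0/1 default flag
    if any(isinstance(v, str) for v in distinct) or len(distinct) <= 10:
        return "nominal"         # text codes or few distinct values
    return "interval"            # many distinct numeric values
```

A numeric account identifier, for instance, would be guessed as interval by a rule like this even though it carries no quantitative meaning, which is exactly the kind of misassignment a manual review catches.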

8 Segmentation

Throughout this chapter, we will utilize SAS software capabilities and show with examples how each technique can be achieved in practice.

2.2 Sampling and Variable Selection

In this section, we discuss the topics of data sampling and variable selection in the context of credit risk model development. We explore how sampling methodologies are chosen as well as how data is partitioned into separate roles for use in model building and validation. The techniques that are available.