Logistic regression: Difference between revisions

From Wikipedia, the free encyclopedia

Revision as of 16:25, 22 December 2010

In statistics, logistic regression (sometimes called the logistic model or logit model) is used to predict the probability of occurrence of an event by fitting data to a logistic curve. It is a generalized linear model used for binomial regression. Like many forms of regression analysis, it makes use of several predictor variables that may be either numerical or categorical. For example, the probability that a person has a heart attack within a specified time period might be predicted from knowledge of the person's age, sex and body mass index. Logistic regression is used extensively in the medical and social sciences, as well as in marketing applications such as predicting a customer's propensity to purchase a product or cease a subscription.

Definition

Figure 1. The logistic function, with z on the horizontal axis and ƒ(z) on the vertical axis

An explanation of logistic regression begins with an explanation of the logistic function:

ƒ(z) = 1 / (1 + e^(−z))

A graph of the function is shown in figure 1. The input is z and the output is ƒ(z). The logistic function is useful because it can take as an input any value from negative infinity to positive infinity, whereas the output is confined to values between 0 and 1. The variable z represents the exposure to some set of independent variables, while ƒ(z) represents the probability of a particular outcome, given that set of explanatory variables. The variable z is a measure of the total contribution of all the independent variables used in the model and is known as the logit.

The variable z is usually defined as

z = β0 + β1x1 + β2x2 + β3x3 + ... + βkxk,

where β0 is called the "intercept" and β1, β2, β3, and so on, are called the "regression coefficients" of x1, x2, x3, respectively. The intercept is the value of z when the value of all independent variables is zero (e.g. the value of z in someone with no risk factors). Each of the regression coefficients describes the size of the contribution of that risk factor. A positive regression coefficient means that the explanatory variable increases the probability of the outcome, while a negative regression coefficient means that the variable decreases the probability of that outcome. A large regression coefficient means that the risk factor strongly influences the probability of that outcome, while a near-zero regression coefficient means that the risk factor has little influence on the probability of that outcome.
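The logistic function and the logit z can be sketched in a few lines of Python (a minimal illustration; the function names here are ours, not part of any standard API):

```python
import math

def logistic(z):
    """The logistic function: maps any real z into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def linear_predictor(intercept, coefs, xs):
    """z = beta0 + beta1*x1 + ... + betak*xk (the logit)."""
    return intercept + sum(b * x for b, x in zip(coefs, xs))

# The output is confined to (0, 1) no matter how extreme z is.
print(logistic(-10.0))  # close to 0
print(logistic(0.0))    # exactly 0.5
print(logistic(10.0))   # close to 1
```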

Logistic regression is a useful way of describing the relationship between one or more independent variables (e.g., age, sex, etc.) and a binary response variable that has only two possible values, such as death ("dead" or "not dead"), expressed as a probability.

Sample size-dependent efficiency

Logistic regression tends to systematically overestimate odds ratios or beta coefficients in small and moderate samples (roughly, samples of fewer than 500). With increasing sample size the magnitude of overestimation diminishes and the estimated odds ratio asymptotically approaches the true population value. In a single study this overestimation usually has little relevance for the interpretation of the results, since it is much smaller than the standard error of the estimate. However, if a number of small studies with systematically overestimated effect sizes are pooled together without accounting for this effect, the literature may appear to support an effect that in reality does not exist[1].

A minimum of ten events per independent variable has been recommended.[2][3] For example, in a study where death is the outcome of interest, and there were 50 deaths out of 100 patients, the number of independent variables the model can support is 50/10 = 5.

Example

The application of logistic regression may be illustrated using a fictitious example of death from heart disease. This simplified model uses only three risk factors (age, sex, and blood cholesterol level) to predict the 10-year risk of death from heart disease. This is the model that we fit:

z = −5.0 + 2.0·(age − 50) − 1.0·sex + 1.2·(cholesterol − 5)

where age is measured in years, sex is 0 for male and 1 for female, and cholesterol is the blood cholesterol level in mmol/L. Which means the model is

risk of death = ƒ(z) = 1 / (1 + e^(−z))

In this model, increasing age is associated with an increasing risk of death from heart disease (z goes up by 2.0 for every year over the age of 50), female sex is associated with a decreased risk of death from heart disease (z goes down by 1.0 if the patient is female), and increasing cholesterol is associated with an increasing risk of death (z goes up by 1.2 for each 1 mmol/L increase in cholesterol above 5 mmol/L).

We wish to use this model to predict Nathan Petrelli's risk of death from heart disease: he is a 50-year-old man and his cholesterol level is 7.0 mmol/L. Nathan Petrelli's risk of death is therefore

z = −5.0 + 2.0·(50 − 50) − 1.0·(0) + 1.2·(7.0 − 5.0) = −2.6
ƒ(−2.6) = 1 / (1 + e^2.6) ≈ 0.07

This means that by this model, Nathan Petrelli's risk of dying from heart disease in the next 10 years is 0.07 (or 7%).
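The worked example can be checked numerically. In the sketch below, the intercept of −5.0 is inferred from the stated answer of about 7% and should be treated as illustrative rather than as the article's exact fitted value:

```python
import math

# Coefficients reconstructed from the example text; the intercept of -5.0 is
# inferred from the stated ~7% answer and is illustrative, not authoritative.
B0, B_AGE, B_SEX, B_CHOL = -5.0, 2.0, -1.0, 1.2

def ten_year_risk(age, female, cholesterol_mmol_l):
    z = (B0
         + B_AGE * (age - 50)
         + B_SEX * (1 if female else 0)
         + B_CHOL * (cholesterol_mmol_l - 5.0))
    return 1.0 / (1.0 + math.exp(-z))

# A 50-year-old man with cholesterol 7.0 mmol/L: z = -5.0 + 0 - 0 + 2.4 = -2.6.
print(round(ten_year_risk(50, False, 7.0), 2))  # -> 0.07
```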

Formal mathematical specification

Logistic regression analyzes binomially distributed data of the form

Y_i ~ Binomial(n_i, p_i),  for i = 1, ..., m,

where the numbers of Bernoulli trials n_i are known and the probabilities of success p_i are unknown. An example of this distribution is the fraction of seeds (p_i) that germinate after n_i are planted.

The model proposes that for each trial i there is a set of explanatory variables that might inform the final probability. These explanatory variables can be thought of as a k-dimensional vector X_i, and the model then takes the form

p_i = E(Y_i / n_i | X_i).

The logits (natural logs of the odds) of the unknown binomial probabilities are modeled as a linear function of the X_i:

logit(p_i) = ln(p_i / (1 − p_i)) = β_1 x_{1,i} + ... + β_k x_{k,i}.

Note that a particular element of Xi can be set to 1 for all i to yield an intercept in the model. The unknown parameters βj are usually estimated by maximum likelihood using a method common to all generalized linear models. The maximum likelihood estimates can be computed numerically by using iteratively reweighted least squares.
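As a sketch of how the β_j might be estimated, the following pure-Python routine applies iteratively reweighted least squares (equivalently, Newton's method on the Bernoulli log-likelihood) to a one-predictor model with an intercept; a real analysis would use a statistics package, and the data here are made up:

```python
import math

def irls_logistic(xs, ys, iters=25):
    """Fit p = 1/(1 + exp(-(b0 + b1*x))) by iteratively reweighted least
    squares (Newton's method on the log-likelihood).  With one predictor
    plus an intercept, the 2x2 Newton system is solved in closed form."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        # Gradient (g) and Hessian (h) of the Bernoulli log-likelihood.
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)          # the IRLS weight for this observation
            g0 += y - p
            g1 += (y - p) * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += ( h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

# Tiny made-up dataset: the outcome becomes more likely as x grows,
# with some overlap so the maximum likelihood estimate is finite.
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [0,   0,   1,   0,   1,   0,   1,   1]

b0, b1 = irls_logistic(xs, ys)
print(round(b0, 3), round(b1, 3))
```

Note the overlap in the classes: with perfectly separated data the maximum likelihood estimates do not exist (the coefficients diverge), which is a well-known caveat of this estimator.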

The β_j parameter estimates are interpreted as the additive effect on the log odds for a unit change in the jth explanatory variable. In the case of a dichotomous explanatory variable, for instance gender, e^β is the estimate of the odds ratio of having the outcome for, say, males compared with females.
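For instance (an illustrative calculation, not a figure from the article), a fitted coefficient of about 0.405 for a binary predictor corresponds to an odds ratio of roughly 1.5:

```python
import math

# A coefficient beta on the log-odds scale converts to an odds ratio
# via exponentiation: OR = exp(beta).  Here beta = 0.405 is made up.
beta = 0.405
print(round(math.exp(beta), 2))  # -> 1.5  (the odds are ~1.5x higher)
```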

The model has an equivalent formulation

p_i = 1 / (1 + e^(−(β_1 x_{1,i} + ... + β_k x_{k,i}))).

This functional form is commonly called a single-layer perceptron or single-layer artificial neural network. A single-layer neural network computes a continuous output instead of a step function. The derivative of p_i with respect to X = (x_1, ..., x_k) is computed from the general form

y = 1 / (1 + e^(−f(X))),

where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated:

dy/dX = y (1 − y) df/dX.
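The derivative identity for the logistic function (its derivative at z equals ƒ(z)·(1 − ƒ(z))) can be verified numerically with a central difference:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Check numerically that d logistic/dz = logistic(z) * (1 - logistic(z)).
z, h = 0.7, 1e-6
numeric = (logistic(z + h) - logistic(z - h)) / (2 * h)   # central difference
analytic = logistic(z) * (1 - logistic(z))
print(abs(numeric - analytic) < 1e-8)  # -> True
```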

Extensions

Extensions of the model cope with multi-category dependent variables and ordinal dependent variables, such as polytomous regression. Multi-class classification by logistic regression is known as multinomial logit modeling. An extension of the logistic model to sets of interdependent variables is the conditional random field.

Model Accuracy

A way to test for errors in models created by stepwise regression is not to rely on the model's F-statistic, significance, or multiple R, but instead to assess the model against a set of data that was not used to create the model[4]. This is often done by building a model based on a sample of the available dataset (e.g. 30%) and using the remaining 70% of the data to assess the accuracy of the model.

Accuracy is measured as the proportion of correctly classified records in the holdout sample[5]. There are four possible classifications:

1) a predicted 0 when the holdout sample has a 0;
2) a predicted 0 when the holdout sample has a 1 (an error);
3) a predicted 1 when the holdout sample has a 0 (an error);
4) a predicted 1 when the holdout sample has a 1.

The percent of correctly classified observations in the holdout sample is referred to as the assessed model accuracy. Additional accuracy can be expressed as the model's ability to correctly classify 0s, or its ability to correctly classify 1s, in the holdout dataset. The holdout assessment method is particularly valuable when data are collected in different settings (e.g. at different times or in different social settings) or when models are assumed to be generalizable.
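The four classification outcomes and the resulting holdout accuracy can be tallied as in the following sketch (the helper names and the small actual/predicted arrays are made up for illustration):

```python
def confusion_counts(actual, predicted):
    """Tally the four classification outcomes on a holdout sample."""
    counts = {"pred0_actual0": 0, "pred0_actual1": 0,
              "pred1_actual0": 0, "pred1_actual1": 0}
    for a, p in zip(actual, predicted):
        counts[f"pred{p}_actual{a}"] += 1
    return counts

def accuracy(actual, predicted):
    """Fraction of holdout records whose prediction matches the actual label."""
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return correct / len(actual)

# Made-up holdout labels and model predictions.
actual    = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]
predicted = [0, 1, 1, 1, 0, 0, 1, 0, 0, 1]
print(confusion_counts(actual, predicted))
print(accuracy(actual, predicted))  # -> 0.8
```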

See also

References

  1. ^ Nemes S, Jonasson JM, Genell A, Steineck G (2009). "Bias in odds ratios by logistic regression modelling and sample size". BMC Medical Research Methodology 9:56.
  2. ^ Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR (1996). "A simulation study of the number of events per variable in logistic regression analysis". J Clin Epidemiol 49 (12): 1373–9. PMID 8970487.
  3. ^ Agresti A (2007). "Building and applying logistic regression models". An Introduction to Categorical Data Analysis. Hoboken, New Jersey: Wiley. p. 138. ISBN 978-0-471-22618-5.
  4. ^ Mark, Jonathan and Goldberg, Michael A. (2001). "Multiple regression analysis and mass assessment: A review of the issues". The Appraisal Journal, Jan., pp. 89–109.
  5. ^ Myers, J.H. and Forgy, E.W. (1963). "The development of numerical credit evaluation systems". Journal of the American Statistical Association 58 (303): 799–806.
  • Agresti, Alan. (2002). Categorical Data Analysis. New York: Wiley-Interscience. ISBN 0-471-36093-7.
  • Amemiya, T. (1985). Advanced Econometrics. Harvard University Press. ISBN 0-674-00560-0.
  • Balakrishnan, N. (1991). Handbook of the Logistic Distribution. Marcel Dekker, Inc. ISBN 978-0824785871.
  • Greene, William H. (2003). Econometric Analysis, fifth edition. Prentice Hall. ISBN 0-13-066189-9.
  • Hilbe, Joseph M. (2009). Logistic Regression Models. Chapman & Hall/CRC Press. ISBN 978-1-4200-7575-5.
  • Hosmer, David W. (2000). Applied Logistic Regression, 2nd ed. New York; Chichester: Wiley. ISBN 0-471-35632-8.

External links