In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate.[1] The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.[2][3][4]
If the likelihood function is differentiable, the derivative test for determining maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved explicitly; for instance, the ordinary least squares estimator maximizes the likelihood of the linear regression model.[5] Under most circumstances, however, numerical methods will be necessary to find the maximum of the likelihood function.
From the point of view of Bayesian inference, MLE is a special case of maximum a posteriori estimation (MAP) that assumes a uniform prior distribution of the parameters. In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.
Principles
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space, that is
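$$\hat{\theta} = \underset{\theta \in \Theta}{\operatorname{arg\,max}} \; \mathcal{L}_n(\theta \,;\, \mathbf{y}),$$
where $\mathcal{L}_n(\theta ; \mathbf{y})$ denotes the likelihood of the parameter vector $\theta$, ranging over the parameter space $\Theta$, given the observed data $\mathbf{y} = (y_1, \ldots, y_n)$.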
In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood:
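$$\ell(\theta \,;\, \mathbf{y}) = \ln \mathcal{L}_n(\theta \,;\, \mathbf{y}).$$
Because the logarithm is strictly increasing, the same parameter values maximize the likelihood and the log-likelihood.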
While the domain of the likelihood function—the parameter space—is generally a finite-dimensional subset of Euclidean space, additional restrictions sometimes need to be incorporated into the estimation process. The parameter space can be expressed as
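One common way of writing such equality restrictions is
$$\Theta = \bigl\{ \theta : \theta \in \mathbb{R}^{k},\; h(\theta) = 0 \bigr\},$$
where $h(\theta) = \bigl[ h_1(\theta), h_2(\theta), \ldots, h_r(\theta) \bigr]$ is a vector-valued function mapping $\mathbb{R}^{k}$ into $\mathbb{R}^{r}$ that collects the constraints on the parameters.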
In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to the restricted likelihood equations
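$$\frac{\partial \ell}{\partial \theta} + \frac{\partial h(\theta)^{\mathsf{T}}}{\partial \theta}\, \lambda = 0 \qquad \text{and} \qquad h(\theta) = 0,$$
where $\lambda = \bigl[ \lambda_1, \lambda_2, \ldots, \lambda_r \bigr]^{\mathsf{T}}$ is a column vector of Lagrange multipliers and $\ell$ is the log-likelihood.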
Properties
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter value.[15] However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: as the sample size increases to infinity, sequences of maximum likelihood estimators have the following properties (a brief simulation sketch of consistency is given after the list):
- Consistency: the sequence of MLEs converges in probability to the value being estimated.
- Efficiency, i.e. it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound).
- Second-order efficiency after correction for bias.
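Consistency can be illustrated numerically. The following is a minimal sketch, assuming NumPy; the exponential model, the true rate, and the sample sizes are chosen purely for illustration. The MLE of an exponential rate parameter is the reciprocal of the sample mean, and it concentrates around the true rate as the sample grows.

```python
import numpy as np

# Consistency sketch: the MLE of an exponential rate parameter is 1 / (sample mean).
# As the sample size n grows, the estimate concentrates around the true rate.
rng = np.random.default_rng(0)
true_rate = 2.0

for n in [10, 100, 1_000, 10_000, 100_000]:
    sample = rng.exponential(scale=1.0 / true_rate, size=n)
    rate_mle = 1.0 / sample.mean()   # closed-form MLE of the exponential rate
    print(f"n = {n:>6d}   rate_mle = {rate_mle:.4f}   (true rate = {true_rate})")
```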
Under slightly stronger conditions, the estimator converges almost surely (or strongly):
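$$\hat{\theta}_{\text{mle}} \;\xrightarrow{\text{a.s.}}\; \theta_0,$$
where $\theta_0$ denotes the true parameter value.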
To establish consistency, the following conditions are sufficient:[16] identification of the model, compactness of the parameter space, continuity of the log-likelihood as a function of the parameter, and dominance of the log-likelihood by an integrable function of the data. Under somewhat stronger regularity conditions, the maximum likelihood estimator is in addition asymptotically normal: √n(θ̂mle − θ0) converges in distribution to a normal distribution with mean zero and covariance I⁻¹, where I is the Fisher information matrix.
The maximum likelihood estimator is also invariant under reparameterization: if θ̂ is the MLE of θ and g is any function, then the MLE of α = g(θ) is α̂ = g(θ̂). It maximizes the so-called profile likelihood:
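$$\bar{L}(\alpha) = \sup_{\theta :\, \alpha = g(\theta)} L(\theta).$$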
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data.
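A small numerical sketch of this relationship, assuming NumPy and SciPy; the synthetic sample, starting point, and choice of optimizer are illustrative. Maximizing the log-normal log-likelihood numerically should reproduce, up to optimizer tolerance, the closed-form normal MLE computed from the logarithm of the data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.5, sigma=0.8, size=2_000)   # synthetic log-normal sample

def neg_log_lik(params):
    """Negative log-likelihood of the log-normal model, parameterized by (mu, log sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                 # keeps sigma positive during optimization
    z = (np.log(data) - mu) / sigma
    return np.sum(np.log(data) + np.log(sigma) + 0.5 * np.log(2 * np.pi) + 0.5 * z ** 2)

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_numeric, sigma_numeric = res.x[0], np.exp(res.x[1])

# Closed form: the normal MLE applied to the logarithm of the data.
log_data = np.log(data)
mu_closed, sigma_closed = log_data.mean(), log_data.std()   # np.std uses ddof=0, the MLE

print(mu_numeric, sigma_numeric)   # should agree with the closed-form values below
print(mu_closed, sigma_closed)
```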
As noted above, the maximum likelihood estimator is √n-consistent and asymptotically efficient, meaning that it reaches the Cramér–Rao bound:
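$$\sqrt{n}\,\bigl(\hat{\theta}_{\text{mle}} - \theta_0\bigr) \;\xrightarrow{d}\; \mathcal{N}\bigl(0,\; I^{-1}\bigr),$$
where $\theta_0$ is the true parameter value and $I$ is the Fisher information matrix evaluated at $\theta_0$.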
In particular, it means that the bias of the maximum likelihood estimator is equal to zero up to the order 1⁄√n. However, when we consider the higher-order terms in the expansion of the distribution of this estimator, it turns out that the maximum likelihood estimator has a bias of order 1⁄n. This bias is equal to (componentwise)[19]
Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, and correct for that bias by subtracting it:
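$$\hat{\theta}^{\,*}_{\text{mle}} = \hat{\theta}_{\text{mle}} - \hat{b}\,,$$
where $\hat{b}$ denotes the estimate of the second-order bias.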
This estimator is unbiased up to the terms of order 1⁄n, and is called the bias-corrected maximum likelihood estimator.
This bias-corrected estimator is second-order efficient (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order 1⁄n². It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator is not third-order efficient.[20]
Relation to Bayesian inference
A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter θ that maximizes the probability of θ given the data, given by Bayes' theorem:
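$$P(\theta \mid x_1, x_2, \ldots, x_n) = \frac{f(x_1, x_2, \ldots, x_n \mid \theta)\, P(\theta)}{P(x_1, x_2, \ldots, x_n)},$$
where $P(\theta)$ is the prior distribution of the parameter $\theta$ and $P(x_1, x_2, \ldots, x_n)$ is the probability of the data averaged over all parameters. Since the denominator does not depend on $\theta$, maximizing the posterior under a uniform prior is equivalent to maximizing the likelihood $f(x_1, \ldots, x_n \mid \theta)$.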
Examples
Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a ‘head’ p. The goal then becomes to determine p.
Suppose the coin is tossed 80 times: i.e. the sample might be something like x1 = H, x2 = T, ..., x80 = T, and the count of the number of heads "H" is observed.
The probability of tossing tails is 1 − p (so here p is θ above). Suppose the outcome is 49 heads and 31 tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probability p = 1⁄3, one which gives heads with probability p = 1⁄2 and another which gives heads with probability p = 2⁄3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80, number of successes equal to 49 but for different values of p (the "probability of success"), the likelihood function (defined below) takes one of three values:
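These three values can be checked numerically; the following is a brief sketch assuming SciPy.

```python
from scipy.stats import binom

# Probability of observing 49 heads in 80 tosses under each of the three candidate coins.
for p in (1/3, 1/2, 2/3):
    print(f"p = {p:.3f}   P(H = 49 | p) = {binom.pmf(49, 80, p):.4f}")
# The value for p = 2/3 is the largest of the three, so that coin has the greatest likelihood.
```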
The likelihood is maximized when p = 2⁄3, and so this is the maximum likelihood estimate for p.
Now suppose that there was only one coin but its p could have been any value 0 ≤ p ≤ 1. The likelihood function to be maximised is
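$$L(p) = f_D(\mathrm{H} = 49 \mid p) = \binom{80}{49}\, p^{49}\, (1 - p)^{31},$$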
and the maximisation is over all possible values 0 ≤ p ≤ 1.
One way to maximize this function is by differentiating with respect to p and setting to zero:
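$$0 = \frac{\partial}{\partial p}\left[\binom{80}{49} p^{49} (1-p)^{31}\right] = \binom{80}{49}\Bigl[\,49\, p^{48}(1-p)^{31} - 31\, p^{49}(1-p)^{30}\Bigr] = \binom{80}{49}\, p^{48}(1-p)^{30}\Bigl[\,49(1-p) - 31p\,\Bigr],$$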
which has solutions p = 0, p = 1, and p = 49⁄80. The solution which maximizes the likelihood is clearly p = 49⁄80 (since p = 0 and p = 1 result in a likelihood of zero). Thus the maximum likelihood estimator for p is 49⁄80.
This result is easily generalized by substituting a letter such as s in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields s⁄n which is the maximum likelihood estimator for any sequence of n Bernoulli trials resulting in s 'successes'.
For the normal distribution 𝒩(μ, σ²), the corresponding probability density function for a sample of n independent identically distributed normal random variables (the likelihood) is
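$$f(x_1, \ldots, x_n \mid \mu, \sigma^2) = \prod_{i=1}^{n} f(x_i \mid \mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\!\left(-\frac{\sum_{i=1}^{n} (x_i - \mu)^2}{2\sigma^2}\right).$$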
Since the logarithm function itself is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). The log-likelihood can be written as follows:
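$$\log f(x_1, \ldots, x_n \mid \mu, \sigma^2) = -\frac{n}{2}\log\bigl(2\pi\sigma^2\bigr) - \frac{1}{2\sigma^2}\sum_{i=1}^{n} (x_i - \mu)^2.$$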
(Note: the log-likelihood is closely related to information entropy and Fisher information.)
We now compute the derivatives of this log-likelihood as follows.
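Differentiating with respect to μ and equating to zero gives
$$0 = \frac{\partial}{\partial \mu}\log f(x_1, \ldots, x_n \mid \mu, \sigma^2) = -\frac{1}{\sigma^2}\sum_{i=1}^{n} (\mu - x_i),$$
which is solved by
$$\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$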
This is indeed the maximum of the function, since it is the only turning point in μ and the second derivative is strictly less than zero. Its expected value is equal to the parameter μ of the given distribution,
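$$\operatorname{E}\bigl[\hat{\mu}\bigr] = \mu,$$
which means that the maximum likelihood estimator $\hat{\mu}$ is unbiased.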
Similarly we differentiate the log-likelihood with respect to σ and equate to zero:
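$$0 = \frac{\partial}{\partial \sigma}\log f(x_1, \ldots, x_n \mid \mu, \sigma^2) = -\frac{n}{\sigma} + \frac{1}{\sigma^{3}}\sum_{i=1}^{n} (x_i - \mu)^2,$$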
which is solved by
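$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} \bigl(x_i - \hat{\mu}\bigr)^2.$$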
In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.
The normal log-likelihood at its maximum takes a particularly simple form:
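$$\log \mathcal{L}\bigl(\hat{\mu}, \hat{\sigma}\bigr) = -\frac{n}{2}\Bigl(\log\bigl(2\pi\hat{\sigma}^2\bigr) + 1\Bigr).$$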
This maximum log-likelihood can be shown to be the same for more general least squares, even for non-linear least squares. This is often used in determining likelihood-based approximate confidence intervals and confidence regions, which are generally more accurate than those using the asymptotic normality discussed above.
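The closed-form estimates μ̂ and σ̂², and the simplified expression for the maximized log-likelihood, can be checked numerically. The sketch below assumes NumPy, and the synthetic sample is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=1_000)   # synthetic normal sample
n = x.size

# Closed-form maximum likelihood estimates for the normal model.
mu_hat = x.mean()
sigma2_hat = np.mean((x - mu_hat) ** 2)          # divides by n, not n - 1

# Log-likelihood evaluated directly at (mu_hat, sigma2_hat) ...
loglik_direct = (-0.5 * n * np.log(2 * np.pi * sigma2_hat)
                 - np.sum((x - mu_hat) ** 2) / (2 * sigma2_hat))
# ... and via the simplified expression at the maximum.
loglik_simple = -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)

print(mu_hat, sigma2_hat)
print(loglik_direct, loglik_simple)              # the two values coincide
```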
Non-independent variables
It may be the case that the observed variables are correlated, that is, not independent, so that the joint density no longer factors into a product of univariate densities. In the bivariate normal case, the joint probability density function is given by:
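$$f(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}} \exp\!\left[ -\frac{1}{2(1-\rho^2)} \left( \frac{(x - \mu_x)^2}{\sigma_x^2} - \frac{2\rho (x - \mu_x)(y - \mu_y)}{\sigma_x \sigma_y} + \frac{(y - \mu_y)^2}{\sigma_y^2} \right) \right],$$
where $\mu_x, \mu_y$ are the means, $\sigma_x, \sigma_y$ the standard deviations, and $\rho$ the correlation between the two variables.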
In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "Principles", using this density.
Iterative procedures
Except for special cases, the likelihood equations cannot be solved explicitly for the estimator; instead, they must be solved iteratively, starting from an initial guess and updating it with a numerical optimization method such as gradient ascent, Newton–Raphson, or a quasi-Newton method (e.g. BFGS).
Although popular, quasi-Newton methods may converge to a stationary point that is not necessarily a local or global maximum,[27] but rather a local minimum or a saddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is both negative definite and well-conditioned.[28]
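The following sketch shows one way to carry out such a check, assuming SciPy and NumPy; the normal model, the (μ, log σ) parameterization, and the finite-difference step size are illustrative choices. The log-likelihood is maximized with a quasi-Newton method, and a finite-difference Hessian at the solution is then examined for negative definiteness and conditioning.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
data = rng.normal(loc=1.0, scale=3.0, size=500)   # illustrative sample

def log_lik(params):
    """Normal log-likelihood, parameterized by (mu, log sigma) so that sigma stays positive."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return np.sum(-np.log(sigma) - 0.5 * np.log(2 * np.pi)
                  - 0.5 * ((data - mu) / sigma) ** 2)

# Quasi-Newton (BFGS) maximization of the log-likelihood (minimize its negative).
res = minimize(lambda p: -log_lik(p), x0=[0.0, 0.0], method="BFGS")

def numerical_hessian(f, x, eps=1e-4):
    """Central-difference Hessian of a scalar function f at the point x."""
    x = np.asarray(x, dtype=float)
    k = x.size
    hess = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i = np.zeros(k); e_i[i] = eps
            e_j = np.zeros(k); e_j[j] = eps
            hess[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                          - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return hess

hessian = numerical_hessian(log_lik, res.x)
eigenvalues = np.linalg.eigvalsh(hessian)
print("solution:", res.x)
print("Hessian eigenvalues:", eigenvalues)           # all negative => negative definite
print("condition number:", np.linalg.cond(hessian))  # moderate value => well-conditioned
```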
History
Early users of maximum likelihood were Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth.[29][30] However, its widespread use rose between 1912 and 1922 when Ronald Fisher recommended, widely popularized, and carefully analyzed maximum-likelihood estimation (with fruitless attempts at proofs).[31]
Maximum-likelihood estimation finally transcended heuristic justification in a proof published by Samuel S. Wilks in 1938, now called Wilks' theorem.[32] The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptotically χ²-distributed, which enables convenient determination of a confidence region around any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of the Fisher information matrix, which is provided by a theorem proven by Fisher.[33] Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.[34]
Reviews of the development of maximum likelihood estimation have been provided by a number of authors.[35][36][37][38][39][40][41][42]
See also
- Generalized method of moments, a class of estimation methods related to the likelihood equation in maximum likelihood estimation
- M-estimator, an approach used in robust statistics
- Maximum a posteriori (MAP) estimator, for a contrast in the way to calculate estimators when prior knowledge is postulated
- Maximum spacing estimation, a related method that is more robust in many situations
- Maximum entropy estimation
- Method of moments (statistics), another popular method for finding parameters of distributions
- Method of support, a variation of the maximum likelihood technique
- Minimum distance estimation
- Partial likelihood methods for panel data
- Quasi-maximum likelihood estimator, a maximum-likelihood-type estimator based on a misspecified model, but still consistent
- Restricted maximum likelihood, a variation using a likelihood function calculated from a transformed set of data
- Akaike information criterion, a criterion to compare statistical models, based on MLE
- Extremum estimator, a more general class of estimators to which MLE belongs
- Fisher information, information matrix, its relationship to covariance matrix of ML estimates
- Mean squared error, a measure of how 'good' an estimator of a distributional parameter is (be it the maximum likelihood estimator or some other estimator)
- RANSAC, a method to estimate parameters of a mathematical model given data that contains outliers
- Rao–Blackwell theorem, which yields a process for finding the best possible unbiased estimator (in the sense of having minimal mean squared error); the MLE is often a good starting place for the process
- Wilks’ theorem provides a means of estimating the size and shape of the region of roughly equally-probable estimates for the population's parameter values, using the information from a single sample, using a chi-squared distribution