The finite frequency theory of probability defines the probability of an outcome as the number of times the outcome occurs relative to the number of times it could have occurred:

probability = number of times the outcome occurs / number of times it could have occurred

The finite relative frequency interpretation has been refined and is now largely superseded by the long run relative frequency interpretation, which defines the probability of an outcome as the limiting frequency with which that outcome appears in a long series of similar events.

Problems:
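The finite definition above is just a ratio, and can be sketched directly in code (a minimal illustration; the function name and sample tosses are hypothetical):

```python
def finite_relative_frequency(outcomes, target):
    """Finite frequency theory: the probability of `target` is the number
    of times it occurs divided by the number of times it could have occurred
    (i.e. the number of recorded trials)."""
    if not outcomes:
        raise ValueError("probability undefined for an empty reference class")
    return outcomes.count(target) / len(outcomes)

# Ten tosses of a coin, recorded as strings.
tosses = ["H", "T", "H", "H", "T", "H", "T", "T", "H", "T"]
print(finite_relative_frequency(tosses, "H"))  # 5 heads / 10 tosses = 0.5
```

Note that the definition forces every probability to be a fraction i/n of the trial count, one of the restrictions the quotations below press against.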


'On the relative frequency interpretation, developed by Venn (The Logic of Chance, 1866) and Reichenbach (The Theory of Probability, 1935), probability attaches to sets of events within a "reference class." Where W is the reference class, and n is the number of events in W, and m is the number of events in (or of kind) X, within W, then the probability of X, relative to W, is m/n. For various conceptual and technical reasons, this kind of "actual finite relative frequency" interpretation has been refined into various infinite and hypothetical infinite relative frequency accounts, where probability is defined in terms of limits of series of relative frequencies in finite (nested) populations of increasing sizes, sometimes involving hypothetical infinite extensions of an actual population. The reasons for these developments involve, e.g.: the artificial restriction, for finite populations, of probabilities to values of the form i/n, where n is the size of the reference class; the possibility of "mere coincidence" in the actual world, where these may not reflect the true physical dispositions involved in the relevant events; and the fact that probability is often thought to attach to possibilities involving single events, while probabilities on the relative frequency account attach to sets of events (this is the "problem of the single case," also called the "problem of the reference class").'
Audi (1999)
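Audi's m/n definition, and the single-case (reference class) problem it raises, can be illustrated with a toy population (all names and data here are hypothetical): the same individual falls into several reference classes, and each class assigns a different relative frequency to the same outcome.

```python
def prob_relative_to(reference_class, is_X):
    """Audi's definition: P(X, relative to W) = m/n, where n is the number
    of events in the reference class W and m the number of kind X within W."""
    n = len(reference_class)
    m = sum(1 for event in reference_class if is_X(event))
    return m / n

# Hypothetical driver records: (age, smoker, had_accident)
drivers = [
    (25, True,  True), (25, False, False), (25, True,  False),
    (60, False, True), (60, True,  True),  (60, False, False),
]
had_accident = lambda d: d[2]

# One and the same driver (25, smoker) sits in both reference classes,
# which assign different probabilities to "has an accident":
young = [d for d in drivers if d[0] == 25]
smokers = [d for d in drivers if d[1]]
print(prob_relative_to(young, had_accident))    # 1/3
print(prob_relative_to(smokers, had_accident))  # 2/3
```

There is no fact in the m/n definition itself that picks out which class fixes *the* probability of the single case, which is exactly the problem Audi names.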
"Impressed by such examples, many philosophers follow a long run frequency theory of probability, according to which the probability of a thing G having a property F is simply the long run frequency of Fs among Gs, that is, the limit towards which the proportion of Fs tends. According to von Mises, this limits probability judgments to classes for which there is such a limit and which satisfy other constraints making them into collectives [...]. It makes it impossible to attach such a judgment to an individual event (such as my having a car accident), since any individual will fall into many different classes, in which the property has different limiting frequencies. But there is something extravagant about making probability into such a highly theoretical notion, both hard to know about and of no immediate relevance to short run confidence in events. More subtle frequency theories, such as that of Braithwaite, see the link between frequency and probability not as definitional, but in terms of the existence of rules telling us when a body of data entitles us to accept or reject a particular probability judgment."
Flew and Priest (2002)
'The frequency theory defines the probability of an outcome as the limiting frequency with which that outcome appears in a long series of similar events.'
Gillies (2000)
'The frequency theory defines probability in terms of the ratio of times something happens to times it might happen. If the proportion of smokers who die of cancer remains steady at 10 per cent then the probability of smokers dying of cancer is 10 per cent. Since most of the classes we are concerned with are open classes, the probability is defined as the limit, in the mathematical sense, to which the frequency tends in the long run. We often talk of the probability of single events, e.g. that Smith will die of cancer, and it is disputed how, if at all, the frequency theory can account for this. Also the notion of a limiting frequency raises a problem because in an infinite or open-ended series, such as tosses of a coin, any limiting frequency is compatible with any result in a finite run. If a penny falls heads a million times running the limiting frequency of heads could still be a half if it fell tails the next million, or indeed if it merely behaved normally the next million times. Therefore in applying the theory we seem to have to say things like ‘Probably the limiting frequency is this’ or ‘Probably present trends of cancer among smokers will continue’, where ‘probably’ is unexplained.'
Lacey (1996)
"Frequency interpretations of probability are also interpretations that see a relational notion as central: they offer accounts of the probability of something's being F, given it is G. The simplest version, the finite frequency theory, holds that the probability of F given G, P(F given G) = the number of Fs that are G divided by the number of Gs, that is, relative frequency of Fs among the Gs. It is a simple exercise in arithmetic to show that this interpretation satisfies the axioms of the calculus.
The finite frequency theory cannot handle cases where there are no Gs, and also contradicts our sense that it is possible for a finite frequency to fail to correspond to the real probability. For instance, a fair coin C could land heads on both of the two tosses that it is ever subjected to and yet still be fair. But then P(C lands heads given C is tossed) = 0.5 (because it is a fair coin), and yet the relative frequency of heads is one. Thus, the finite frequency interpretation has been largely superseded by the long run relative frequency interpretation: P(F given G) = the limit that the relative frequency of Fs among Gs would approach were there indefinitely many Gs. This interpretation does, however, raise a number of difficult issues to do with specifying the nature of the long run that it appeals to – the most famous of which is encapsulated in Keynes's remark that in the long run we are all dead."
Mautner (2000)
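Mautner's fair-coin case, where two tosses can yield a finite frequency of 1 even though the true probability is 0.5, is easy to reproduce by simulation (a sketch; the seed and trial counts are arbitrary choices, not from the source):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def heads_frequency(n_tosses):
    """Relative frequency of heads in n simulated tosses of a fair coin
    (true probability of heads: 0.5)."""
    return sum(random.random() < 0.5 for _ in range(n_tosses)) / n_tosses

# A tiny reference class can miss the true probability entirely
# (two tosses can only give 0.0, 0.5, or 1.0)...
print(heads_frequency(2))

# ...while a large one tends toward the limit the long-run view appeals to.
print(heads_frequency(100_000))
```

The gap between the two printed values is the gap the long run relative frequency interpretation is meant to close, at the cost of the 'long run' worries raised above.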