DILUTION OF IMPROBABILITY

To demonstrate how the improbability of a series of improbable events is diluted as elements are added to the set containing the series, it is useful to consider random number generators with integer outputs from 00 to 99, 000 to 999, and so on. The output range corresponds to the sample space of outcomes, and the value of a particular digit denotes the truth-value of a particular event.

Suppose there are two independent events, each with probability 0.1, or 1 in 10. If these were the only two events in the set, the sample space of all possibilities could be represented by the random generation of a two-digit number. "0" denotes an occurrence; "N" denotes any digit from 1 to 9, i.e. non-occurrence; "A" denotes any digit from 0 to 9. So occurrence of both events requires the outcome 00, which has probability 0.01, or 1 in 100, as expected.
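As a minimal sketch of this two-event case (in Python; the enumeration simply mirrors the two-digit thought experiment above):

# Enumerate the 100 equally likely two-digit outcomes 00-99 and count those
# in which both probability-0.1 events occur, i.e. both digits are '0'.
outcomes = [f"{i:02d}" for i in range(100)]
both = [s for s in outcomes if s[0] == "0" and s[1] == "0"]
print(len(both), "of", len(outcomes))   # 1 of 100, i.e. probability 0.01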

If the set contains three events, the possibilities for at least two of the probability-0.1 events occurring include 00N, 0N0, N00 and 000, which is 3x9+1=28 from a total of 1,000 possibilities. With a four-element set, the 10,000 combinations offer 00NN, 0N0N, 0NN0, N00N, N0N0, NN00, 000N, 00N0, 0N00, N000, and 0000, which is 6x9^2 + 4x9^1 + 1x9^0 = 523. A five-element set offers 00NNN, 0N0NN, 0NN0N, 0NNN0, N00NN, N0N0N, N0NN0, NN00N, NN0N0, NNN00, 000NN, 00N0N, 00NN0, 0N00N, 0N0N0, 0NN00, N000N, N00N0, N0N00, NN000, 0000N, 000N0, 00N00, 0N000, N0000, 00000, which totals 10x9^3 + 10x9^2 + 5x9^1 + 1x9^0 = 8,146 from 100,000.

A six-element set offers 00NNNN, 0N0NNN, 0NN0NN, 0NNN0N, 0NNNN0, N00NNN, N0N0NN, N0NN0N, N0NNN0, NN00NN, NN0N0N, NN0NN0, NNN00N, NNN0N0, NNNN00, 000NNN, 00N0NN, 00NN0N, 00NNN0, 0N00NN, 0N0N0N, 0N0NN0, 0NN00N, 0NN0N0, 0NNN00, N000NN, N00N0N, N00NN0, N0N00N, N0N0N0, N0NN00, NN000N, NN00N0, NN0N00, NNN000, 0000NN, 000N0N, 000NN0, 00N00N, 00N0N0, 00NN00, 0N000N, 0N00N0, 0N0N00, 0NN000, N0000N, N000N0, N00N00, N0N000, NN0000, 00000N, 0000N0, 000N00, 00N000, 0N0000, N00000, 000000 which totals 15*9^4 + 20*9^3 + 15*9^2 + 6*9^1 + 1*9^0 = 114,265 from 1,000,000.
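These enumerations can be verified by brute force. A short Python sketch (assuming, as in the examples above, base 10 and a requirement of at least two occurrences) counts the qualifying outcomes directly:

from itertools import product

# Count base-10 digit strings of length n containing at least two zeros,
# reproducing the totals quoted above: 28, 523, 8,146 and 114,265.
for n in range(3, 7):
    count = sum(1 for digits in product(range(10), repeat=n)
                if digits.count(0) >= 2)
    print(n, count)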

With 4 events from a set of nine, for example, there are 126 combinations of four zeros and five Ns, 126 of five zeros and four Ns, 84 of six zeros and three Ns, 36 of seven zeros and two Ns, 9 of eight zeros and a single N, and one combination of nine zeros. Allowing for the descending powers of nine, this totals 8,331,094 combinations from a sample space of one billion.

With 2 improbable events from a set of three, the probability is increased from 0.01 by a factor of 2.8; with 2 from 4 the multiplying factor is 5.23; with 2 from 5 it is 8.146; with 2 from 6 it is 11.4265. In the previous paragraph's example of 4 events from 9, the probability increases by a factor of 83.31094 as a result of the five additional elements.
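The same figures follow from the closed-form sum of descending powers of nine used above. A sketch in Python, where math.comb supplies the number of ways of placing the zeros:

from math import comb

def dilution_factor(B, n, r):
    # Probability of at least r occurrences among n events (each 1 in B),
    # divided by the unadjusted probability (1/B)^r of the r selected events.
    at_least_r = sum(comb(n, k) * (B - 1) ** (n - k) for k in range(r, n + 1))
    return at_least_r / B ** (n - r)

for n in (3, 4, 5, 6):
    print(n, dilution_factor(10, n, 2))   # 2.8, 5.23, 8.146, 11.4265
print(dilution_factor(10, 9, 4))          # 83.31094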

For our purposes, a useful approximation can be obtained from the formula for the number of combinations of n elements taken r at a time, where:

nCr = n! / (r! * (n - r)!)

n is the number of elements or events in the set

r is the number of selected improbable events within the set.

Multiply the unadjusted probability of the series of events by nCr, or divide its reciprocal, the unadjusted improbability, by nCr, to obtain an adjusted measure which allows for the other elements of the set. These elements, consisting solely of events which satisfy the positive-correlation-of-truth-value qualification, would include events not conceived of and others that failed to occur.
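In Python, for instance, math.comb provides nCr directly; the values below (two 1-in-10 events in a set of five) are purely illustrative:

from math import comb

P = 0.1 ** 2                      # unadjusted probability of the two selected events
n, r = 5, 2                       # five events in the set, two of them selected
print(P * comb(n, r))             # adjusted probability: 0.1
print((1 / P) / comb(n, r))       # adjusted improbability: 1 in 10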

The error introduced by this approximation exaggerates the dilution of improbability with increasing n. This biases in favour of the theory we are seeking to disprove - the mainstream media/government's version of events - by reducing its improbability. For example, allowing the combinations 00A, 0A0 and A00 ("A" being any outcome from 0 to 9) suggests 3x10=30 from 1,000, but this incorrectly counts 000 three times. The accurate but more complex method above (with the algorithm outlined below) shows the r=2, n=3 case comprising 00N, 0N0, N00 and 000, which is really 28 from 1,000 combinations. However, the error reduces as the number base B of our thought experiment random generator increases. If it were possible to limit our attention to a few improbable events with probabilities of less than 1 in 10,000, each digit of the random number would have over 10,000 possibilities. This is possible for a major event such as 9/11.
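A sketch of the size of this overcount, and of how it shrinks as the base B grows (the 1-in-10,000 base is an illustrative choice):

from math import comb

def exact_count(B, n, r):
    # Outcomes with at least r zeros among n base-B digits.
    return sum(comb(n, k) * (B - 1) ** (n - k) for k in range(r, n + 1))

for B in (10, 10_000):
    approx = comb(3, 2) * B        # 00A, 0A0, A00: counts 000 three times
    exact = exact_count(B, 3, 2)   # 00N, 0N0, N00 and 000
    print(B, approx, exact, (approx - exact) / exact)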

Provided the number base B remains high, the nCr approximation remains considerably more accurate than our estimate of the probability of each event or the size n of the set. This holds for larger values of r, the number of selected improbable events. For example, suppose we have four events assumed to be of equal probability, and the product of their probabilities is 1/(3.2*10^25). We set B equal to the fourth root of 3.2*10^25, and find that the approximation exaggerates the improbability dilution by only some 0.0000336% for every element in the set in excess of four (0.0000336% being about 80% of the reciprocal of the fourth root of 3.2*10^25).
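The quoted error can be reproduced with the same exact count; the sketch below assumes, as in the paragraph above, four equally probable events whose joint probability is 1 in 3.2*10^25, and one extra element in the set:

from math import comb

def dilution_factor(B, n, r):
    total = sum(comb(n, k) * (B - 1) ** (n - k) for k in range(r, n + 1))
    return total / B ** (n - r)

B = (3.2e25) ** 0.25      # fourth root of 3.2*10^25, about 2.38 million
r = 4
n = r + 1                 # one element in excess of the four selected events
error = (comb(n, r) - dilution_factor(B, n, r)) / dilution_factor(B, n, r)
print(error)              # about 3.36e-7, i.e. 0.0000336%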

The event improbabilities associated with London 7/7 are of the order of hundreds. We may use the nCr approximation as a guide, bearing in mind that in this case it is relatively inaccurate, with a much greater exaggeration of the improbability dilution.

The more precise calculation is obtained from a series with descending powers of (B-1); there is no need to expand these powers. Input the variables B, n and r, and total the number of combinations by summing one term for each possible number of occurrences. Start with (B-1)^(n-r) multiplied by nCr for the actual values of n and r; then, at each subsequent iteration, the occurrence count is incremented by one until it equals n. The accumulated total of all iterations yields the total number of combinations. This total is then multiplied by B^(r-n), with r retaining its original value, which compensates for the larger sample space and gives the actual factor for division of improbability or multiplication of probability. A runnable version of the algorithm, in Python, is given below.


from math import comb   # comb(n, k) = n! / (k! * (n - k)!), replacing the factorial subroutine

B = int(input("B (number base, the reciprocal of each event's probability): "))
n = int(input("n (total number of events in the set): "))
r = int(input("r (number of selected improbable events): "))

grossimprob = B ** r          # unadjusted improbability of the r selected events
combinations = 0
for k in range(r, n + 1):
    # outcomes in which exactly k of the n events occur: nCk * (B - 1)^(n - k)
    combinations = combinations + comb(n, k) * (B - 1) ** (n - k)

# Rescale from the B^n sample space back to B^r, i.e. multiply by B^(r - n).
truedilutionfactor = combinations / B ** (n - r)

print("True dilution factor =", truedilutionfactor)
print("True probability = 1 in", grossimprob / truedilutionfactor)

We shall make use of another approximation which allows us to apply either of the above formulae to the complete set of events, eliminating the need to split events of differing probability into separate sections and apply the formulae to each section in turn. Let the product of the individual probabilities of the selected improbable events be P, and the number of selected improbable events be r. Then each improbable event is assumed to have probability equal to the rth root of P, and the number base B is the reciprocal of that root.
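A sketch of the substitution (the four probabilities are illustrative and correspond to the example in the next paragraph):

# Equal-probabilities (EP) approximation: replace r events of differing
# probability by r events each with probability equal to the rth root of P.
probs = [1/100, 1/100, 1/10_000, 1/10_000]   # illustrative values
P = 1.0
for p in probs:
    P *= p                                   # product of the individual probabilities
r = len(probs)
B = P ** (-1 / r)                            # number base: reciprocal of the rth root of P
print(B)                                     # about 1000: each event treated as 1 in 1,000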

This approximation also biases in favour of the officially approved story. Consider an example of two 1 in 100 events and two 1 in 10,000 events in a set of ten. If we do the calculation for four 1 in 1,000 events in a set of ten, the nCr approximation indicates an improbability dilution of 210, since with n equal to 10 and r equal to 4, nCr equals 210. The unadjusted probability of 1 in 10^12 is increased to 1 in 4.7619*10^9. Using the exact algorithm, it is actually 1 in 4.7848*10^9, so the nCr approximation exaggerates the improbability dilution. Now compare with the actual case of two 1 in 100 events in a set of five along with two 1 in 10,000 events in another set of five. For the two 1 in 100 events the improbability dilution is 9.801496 by the exact method, and for the two 1 in 10,000 events it is 9.99800015. So the additional six elements have raised the probability from 1 in 10^12 by a factor of nearly 100, with the corrected probability at 1 in 1.020456495*10^10. The nCr approximation indicates a factor of 10 for each set of five, i.e. 100 in all, and a corrected probability of 1 in 10^10. So the equal probabilities (EP) approximation raised the corrected probability by a factor of more than two, the nCr approximation raised it by a little over 2%, and applying both exaggerated the dilution of improbability by a little more than applying the EP approximation alone.
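These figures can be reproduced with the exact algorithm given above; a sketch in Python, assuming the example's probabilities:

from math import comb

def dilution_factor(B, n, r):
    total = sum(comb(n, k) * (B - 1) ** (n - k) for k in range(r, n + 1))
    return total / B ** (n - r)

# EP approximation: four 1-in-1,000 events in a set of ten.
print(comb(10, 4))                           # 210
print(1e12 / comb(10, 4))                    # about 4.7619e9
print(1e12 / dilution_factor(1000, 10, 4))   # about 4.7848e9

# Actual case: two 1-in-100 events in one set of five and two 1-in-10,000
# events in another set of five.
split = dilution_factor(100, 5, 2) * dilution_factor(10_000, 5, 2)
print(split)                                 # about 97.9954
print(1e12 / split)                          # about 1.0205e10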

To summarise why the EP approximation exaggerates the dilution of improbability: in the above paragraph, 210 is greater than 10 squared. For given values of n and r, doubling both results in nCr increasing to rather more than its square.

The critic might consider that if we are going to split into sections for different probabilities, then perhaps the higher probability events (being more commonplace) should take up more than half of the total set; perhaps this would favour the official theory and show that the EP approximation is not kind to the official theory after all. It is a sound approach to test. Let's suppose we have two 1 in 100 events from a set of six and two 1 in 10,000 events from a set of four. Firstly, for the two 1 in 100 events from a set of six, the exact algorithm shows that the actual dilution factor is 14.60447605. Then, for the two 1 in 10,000 events from a set of four, the exact method shows the dilution factor is 5.99920003. The product of the two is only 87.61517316, compared to 97.99535848 or greater before. An unequal split lowers the dilution and the probability, and raises the improbability. Hence it does not help the official theory; the equal split of the EP approximation does instead.
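The unequal split can be checked in the same way (a sketch, reusing the same exact dilution factor):

from math import comb

def dilution_factor(B, n, r):
    total = sum(comb(n, k) * (B - 1) ** (n - k) for k in range(r, n + 1))
    return total / B ** (n - r)

equal = dilution_factor(100, 5, 2) * dilution_factor(10_000, 5, 2)     # about 97.9954
unequal = dilution_factor(100, 6, 2) * dilution_factor(10_000, 4, 2)   # about 87.6152
print(equal, unequal)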

So for an analysis of 9/11, we imagine a random number generator with the number base B being the reciprocal of the rth root of P. A '0' is required to denote the occurrence of an improbable event. Since each digit has over 10,000 variations, 'A' (any outcome) is very nearly equal to 'N' (any outcome but '0'), and the dilution of improbability is obtained fairly accurately from the approximate formula below:

The corrected probability for the series of improbable events, after allowing for the extra elements in the set, is given by:

Pdiluted = (P * n!) / (r! * (n - r)!)

where:

P is the product of the individual probabilities of unlikely events

n is the number of elements or events in the set

r is the number of selected improbable events within the set.

Or, more succinctly:

Pdiluted = P * nCr, with the Google calculator carrying the nCr function as "n choose r".
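For example, in Python (the values of P, n and r are illustrative, matching the earlier worked example):

from math import comb

P = 1e-12      # product of the individual probabilities of the unlikely events
n, r = 10, 4   # ten elements in the set, four selected improbable events
Pdiluted = P * comb(n, r)
print(Pdiluted, "i.e. 1 in", 1 / Pdiluted)   # about 2.1e-10, i.e. 1 in about 4.76e9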

For London 7/7, we use the nCr approximation as a guide, bearing in mind that this approach biases considerably in favour of the Government-approved version of events.

To conclude, ideally the analysis would be limited to a few very low probability events rather than a large quantity of fairly low probability events. The latter requires the more complex algorithm to compute the improbability dilution to a reasonable precision, requires the identification of additional anomalous events, and in any case would establish guilt to a lower degree of certainty. Given our assumption that the total number n of elements in the set is more than twice the number r of selected improbable events, it follows that for a given product of probabilities P, any increase in r will increase the dilution of improbability. Also, increasing r to include higher probability events with a lower correlation with the test theory's truth-value implies that we should assume a greater value for n, which would further dilute the improbability.
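As a quick illustration of the point about increasing r, with an assumed n of 20:

from math import comb

# While r remains below n/2, each additional selected event increases nCr,
# and hence the dilution of improbability for a given product P.
n = 20
print([comb(n, r) for r in range(2, 10)])
# [190, 1140, 4845, 15504, 38760, 77520, 125970, 167960]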

In the case of 9/11, Israel's guilt can be proven to at least the same degree of certainty as a case involving DNA profiling and multiple corroborative evidence, i.e. the probability of innocence is of the order of a billion to one against after allowing for improbability dilution. The corrected, diluted probability of Israel being innocent of London 7/7 is tens of thousands to one against.