The *classical* or *theoretical* definition of probability assumes that there are a finite number of outcomes in a situation and that all the outcomes are equally likely.

**Classical
Definition of Probability**

$$P(A) = \frac{\text{number of outcomes in event } A}{\text{total number of possible outcomes}}$$

Though you probably have not seen this definition before, you probably have an inherent grasp of the concept. In other words, you could guess the probabilities without knowing the definition.

**Cards and Dice** The examples that follow require some knowledge of cards and dice. Here are the basic facts needed to compute probabilities concerning cards and dice.

A standard deck of cards has four suits: hearts, clubs, spades, and diamonds. Each suit has thirteen cards: ace, 2, 3, 4, 5, 6, 7, 8, 9, 10, jack, queen, and king. Thus the entire deck has 52 cards in total.

When you are asked about the probability of choosing a certain card from a deck of cards, you assume that the cards have been well shuffled and that each card in the deck is visible, though face down, so you do not know the suit or value of any card.

A pair of *dice* consists of two cubes with dots on each side. One of the cubes is called a *die*, and each die has six sides. Each side of a die has a number of dots (1, 2, 3, 4, 5, or 6), and each number of dots appears only once.

**Example 1** The probability of choosing a heart from a deck of cards is given by

$$P(\text{heart}) = \frac{13}{52} = \frac{1}{4}$$

**Example 2** The probability of choosing a three from a deck of cards is

$$P(\text{three}) = \frac{4}{52} = \frac{1}{13}$$

**Example 3** The probability of a two coming up after rolling a die is

$$P(\text{two}) = \frac{1}{6}$$
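The three classical computations above amount to simple counting. The sketch below uses Python's `fractions.Fraction` so the answers come out as exact fractions; the favorable and total counts are taken straight from the examples.

```python
from fractions import Fraction

def classical_probability(favorable, total):
    """Classical probability: favorable outcomes over equally likely total outcomes."""
    return Fraction(favorable, total)

# Example 1: 13 hearts out of 52 cards
p_heart = classical_probability(13, 52)

# Example 2: four threes (one per suit) out of 52 cards
p_three = classical_probability(4, 52)

# Example 3: one face showing a two out of six faces
p_two = classical_probability(1, 6)

print(p_heart, p_three, p_two)  # 1/4 1/13 1/6
```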

The classical definition works well in determining probabilities for games of chance like poker or roulette, because the stated assumptions readily apply in these cases. Unfortunately, if you want to find the probability of something like rain tomorrow, or of a licensed driver in Louisiana being involved in an auto accident this year, the classical definition does not apply. Fortunately, there is another definition of probability that applies in these cases.

**Empirical
Definition of Probability**

The probability of event *A* is the number approached by

$$\frac{\text{number of times } A \text{ has occurred}}{\text{total number of recorded outcomes}}$$

as the total number of recorded outcomes becomes "very large."

The idea that the fraction in
the previous definition will approach a certain number as the total number of
recorded outcomes becomes very large is called the *Law of Large Numbers*. Because of this law, when the Classical
Definition applies to an event *A*,
the probabilities found by either definition should be the same. In other words, if you keep rolling a die,
the ratio of the total number of twos to the total number of rolls should
approach one-sixth. Similarly, if you draw a card, record its number, return
the card, shuffle the deck, and repeat the process; as the number of
repetitions increases, the total number of threes over the total number of
repetitions should approach 1/13 ≈ 0.0769.
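The die-rolling experiment just described can be sketched in a few lines of Python. The seed is fixed so the run is reproducible; the exact printed fractions depend on that seed, but each should land near 1/6 ≈ 0.1667 as the number of rolls grows.

```python
import random

rng = random.Random(1)  # fixed seed so the run is reproducible

twos = 0
ratios = {}
for roll in range(1, 1_000_001):
    if rng.randint(1, 6) == 2:
        twos += 1
    # record the running fraction of twos at a few checkpoints
    if roll in (100, 10_000, 1_000_000):
        ratios[roll] = twos / roll
        print(f"{roll:>9} rolls: fraction of twos = {ratios[roll]:.4f}")
```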

In working with the empirical definition, most of the time you have to settle for an estimate of the probability involved. Such an estimate is called an *empirical estimate*.

**Example 4** To estimate the probability of a licensed driver in Louisiana being involved in an auto accident this year, you could use the ratio

$$\frac{\text{number of licensed Louisiana drivers involved in an accident last year}}{\text{total number of licensed Louisiana drivers last year}}$$

To do better than that, you could use the number of accidents over the last five years and the total number of licensed drivers during those five years.

**Example 5**
Estimating the probability of rain tomorrow would be a little more difficult. You
could note today's temperature, barometric pressure, prevailing wind direction,
and whether or not there are rain clouds that could be blown into your area by
tomorrow. Then you could find all days on record in the past with similar
temperatures, pressures, and wind directions, and clouds in the right location. Your rainfall estimate would then be the ratio

$$\frac{\text{number of similar days on which it rained the next day}}{\text{total number of similar days}}$$

To make your estimate better, you might want to add in humidity, wind speed, or season of the year. Or, if there seemed to be no relation between humidity levels and rainfall, you might want to add in the days that did not meet your humidity requirements and thus increase the total number of days.

**Example 6** If you want to estimate the probability that a dam will burst, a bridge will collapse, or a skyscraper will topple, there is usually not much past data available. The next best thing is to run a computer simulation. Simulation results can be compiled much faster, at far less cost, and with less loss of life than actual events. The estimated probability of, say, a bridge collapsing would be given by the fraction

$$\frac{\text{number of simulations in which the bridge collapsed}}{\text{total number of simulations}}$$

The more true to life the simulation is, the better the estimate will be.
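A minimal sketch of such a simulation follows; the load and capacity distributions are invented stand-ins for a real structural model, so the numbers only illustrate the counting, not any actual bridge.

```python
import random

rng = random.Random(42)  # fixed seed for reproducibility

def bridge_collapses(rng):
    """One simplified trial: the bridge fails if the simulated load exceeds
    the simulated capacity. Both distributions are invented for illustration."""
    load = rng.gauss(100, 15)      # hypothetical total load on the bridge
    capacity = rng.gauss(150, 10)  # hypothetical structural capacity
    return load > capacity

trials = 100_000
collapses = sum(bridge_collapses(rng) for _ in range(trials))
estimate = collapses / trials
print(f"estimated P(collapse) = {collapses}/{trials} = {estimate:.5f}")
```

With these made-up distributions the true probability is a little under 0.3%, so the estimate settles near that value as the trial count grows.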

**Basic Probability Rules** For either definition, the probability of an event *A* is always a number between zero and one, inclusive; i.e.,

$$0 \le P(A) \le 1$$

Sometimes probability values are written using percentages, in which case the rule just given is written as follows

$$0\% \le P(A) \le 100\%$$

If the event *A* is not possible, then *P*(*A*) = 0, or *P*(*A*) = 0%. If event *A* is certain to occur, then *P*(*A*) = 1, or *P*(*A*) = 100%.

The sum of the probabilities for all possible outcomes of an experiment is 1, or 100%. This is written mathematically as follows, using the capital Greek letter sigma (Σ) to denote summation.

$$\sum P(\text{outcome}) = 1$$
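Both rules can be checked directly for a fair die, whose six outcomes each have probability 1/6:

```python
from fractions import Fraction

# Probabilities for the six outcomes of rolling a fair die
outcome_probs = {face: Fraction(1, 6) for face in range(1, 7)}

# Rule 1: each probability lies between 0 and 1, inclusive
assert all(0 <= p <= 1 for p in outcome_probs.values())

# Rule 2: the probabilities of all possible outcomes sum to 1
total = sum(outcome_probs.values())
print(f"sum of outcome probabilities = {total}")  # 1
```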

**Probability Scale**\* The best
way to find out what the probability of an event means is to compute the
probability of a number of events you are familiar with and consider how the
probabilities you compute correspond to how frequently the events occur. Until
you have computed a large number of probabilities and developed your own sense
of what probabilities mean, you can use the following probability scale as a
rough starting point. When you gain more experience with probabilities, you may
want to change some terminology or move the boundaries of the different
regions.

\*This is a revised and expanded version of the probability scale presented in Mario Triola, *Elementary Statistics Using the Graphing Calculator: For the TI-83/84 Plus*, Pearson Education, Inc., 2005, p. 135.