Joint distributions
Continuous bivariate distributions. In the case where both X and Y are continuous random
variables, the PDF of the joint distribution is defined by
f(x, y) dx dy = P(x < X ≤ x + dx, y < Y ≤ y + dy), (7.4)
so f(x, y) dx dy is the probability that X lies in the range [x, x + dx] and Y lies in the range [y, y + dy].
It is clear that the two-dimensional function f(x, y) must be everywhere non-negative and that
normalization requires
∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.
It follows further that
P(a_1 < X ≤ a_2, b_1 < Y ≤ b_2) = ∫_{a_1}^{a_2} ∫_{b_1}^{b_2} f(x, y) dy dx. (7.5)
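To make (7.5) concrete, here is a minimal numerical sketch, assuming the joint PDF f(x, y) = e^(−x−y) for x, y ≥ 0 (independent unit exponentials; this example and the integration limits are our choices, not from the text):

```python
# Numerical check of the normalization condition and of eq. (7.5),
# for the assumed joint PDF f(x, y) = exp(-x - y), x, y >= 0.
import numpy as np
from scipy.integrate import dblquad

def f(y, x):
    # dblquad expects the inner variable (y) as the first argument
    return np.exp(-x - y) if x >= 0 and y >= 0 else 0.0

# Normalization: the integral over the whole plane should equal 1.
total, _ = dblquad(f, 0, np.inf, 0, np.inf)
print(total)  # ~1.0

# Eq. (7.5) with (a1, a2) = (0, 1) and (b1, b2) = (0, 2):
p, _ = dblquad(f, 0, 1, 0, 2)   # x in (0, 1], y in (0, 2]
print(p)  # exact value (1 - e^-1)(1 - e^-2) ≈ 0.5466
```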
We can also define the cumulative probability function by
F(x, y) = P(X ≤ x, Y ≤ y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(u, v) dv du,
from which we see that (as for the discrete case),
P(a_1 < X ≤ a_2, b_1 < Y ≤ b_2) = F(a_2, b_2) − F(a_1, b_2) − F(a_2, b_1) + F(a_1, b_1).
Finally we note that the definition of independence (7.3) for discrete bivariate distributions also
applies to continuous bivariate distributions.
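As a quick sketch of this identity, the code below evaluates the rectangle probability by inclusion–exclusion on F, for the same assumed exponential example, whose CDF has the closed form F(x, y) = (1 − e^(−x))(1 − e^(−y)) for x, y ≥ 0:

```python
# Inclusion-exclusion on the joint CDF versus the direct product formula,
# for the assumed example F(x, y) = (1 - e^-x)(1 - e^-y), x, y >= 0.
import numpy as np

def F(x, y):
    return (1 - np.exp(-x)) * (1 - np.exp(-y)) if x > 0 and y > 0 else 0.0

a1, a2, b1, b2 = 0.5, 1.5, 0.2, 2.0
p = F(a2, b2) - F(a1, b2) - F(a2, b1) + F(a1, b1)

# For this factorized CDF the rectangle probability is also
# (e^-a1 - e^-a2)(e^-b1 - e^-b2); the two values should agree.
direct = (np.exp(-a1) - np.exp(-a2)) * (np.exp(-b1) - np.exp(-b2))
print(p, direct)
```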
Example 7.1. A flat table is ruled with parallel straight lines a distance D
apart, and a thin needle of length l < D is tossed onto the table at random. What
is the probability that the needle will cross a line?
Solution. Let θ be the angle that the needle makes with the lines, and let x be the distance
from the centre of the needle to the nearest line. Since the needle is tossed 'at random'
onto the table, the angle θ is uniformly distributed in the interval [0, π], and the distance x is
uniformly distributed in the interval [0, D/2]. Assuming that θ and x are independent, their
joint distribution is just the product of their individual distributions, and is given by
f(θ, x) = (1/π) · (1/(D/2)) = 2/(πD).
The needle will cross a line if the distance x of its centre from that line is less than (l/2) sin θ.
Thus the required probability is
∫_0^π ∫_0^{(l/2) sin θ} (2/(πD)) dx dθ = (l/(πD)) ∫_0^π sin θ dθ = 2l/(πD).
This gives an experimental (but cumbersome) method of determining π.
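A short Monte Carlo sketch of this method follows; the line spacing D, needle length l and sample size are arbitrary choices, not values from the text:

```python
# Buffon's needle: estimate pi from the observed crossing frequency,
# which should approach 2l / (pi D) for large n.
import numpy as np

rng = np.random.default_rng(0)
D, l, n = 2.0, 1.0, 1_000_000   # line spacing, needle length (l < D), tosses

theta = rng.uniform(0, np.pi, n)    # angle between needle and lines
x = rng.uniform(0, D / 2, n)        # distance of centre to nearest line

crossings = np.count_nonzero(x < (l / 2) * np.sin(theta))
p_hat = crossings / n               # estimates 2l / (pi D)
pi_hat = 2 * l / (p_hat * D)
print(pi_hat)  # close to 3.14159; the error shrinks like 1/sqrt(n)
```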
Marginal and conditional distributions. Given a bivariate distribution f(x, y), we may only
be interested in the probability function for X irrespective of the value of Y (or vice versa). This
marginal distribution of X is obtained by summing or integrating, as appropriate, the joint
probability distribution over all allowed values of Y. Thus, the marginal distribution of X (for
example) is given by
f_X(x) = { ∑_j f(x, y_j)           for a discrete distribution,
         { ∫_{−∞}^{∞} f(x, y) dy   for a continuous distribution.   (7.6)
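As a sketch of the continuous case of (7.6), the code below recovers a marginal by numerical integration; the joint PDF f(x, y) = e^(−x−y) for x, y ≥ 0 is again an assumed example, whose exact marginal is f_X(x) = e^(−x):

```python
# Marginal distribution f_X(x) = integral of f(x, y) over all allowed y,
# for the assumed joint PDF f(x, y) = exp(-x - y), x, y >= 0.
import numpy as np
from scipy.integrate import quad

def f(x, y):
    return np.exp(-x - y) if x >= 0 and y >= 0 else 0.0

def f_X(x):
    # integrate the joint PDF over y; the allowed range here is [0, inf)
    val, _ = quad(lambda y: f(x, y), 0, np.inf)
    return val

for x in (0.5, 1.0, 2.0):
    print(f_X(x), np.exp(-x))   # numerical marginal vs exact e^-x
```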