### Notes (Part 18):

Here are the details of an example that was briefly sketched in lecture:

Suppose $n$ letters are stuffed into $n$ envelopes at random. What is the probability that no letter ends up in its corresponding envelope?

Let $A_i$ be the event that letter $i$ ends up in envelope $i$. We wish to compute $1 - P[\cup_i A_i]$.

By inclusion-exclusion, $P[\cup_i A_i] = \sum_{k=1}^{n} (-1)^{k+1} \sum_{\{i_1, \ldots , i_k\}} P[A_{i_1} \cap \cdots \cap A_{i_k}]$, where the inner sum ranges over all size-$k$ subsets of indices.

But $A_{i_1} \cap \cdots \cap A_{i_k}$ is the event that these particular $k$ letters end up in their envelopes. The number of ways this can happen is $(n-k)!$, and so the probability of this event is $\frac{(n-k)!}{n!}$. Since there are $n \choose k$ ways of picking the indices $i_1, \ldots , i_k$, the total contribution of these terms is ${\frac{(n-k)!}{n!}} {n \choose k} = {\frac{1}{k!}}$.
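This count is easy to check by brute force for small $n$: the sketch below (a quick sanity check, not part of the original notes) fixes the first $k$ positions, counts the permutations that keep those $k$ letters in their envelopes, and confirms that the total size-$k$ contribution $\binom{n}{k}\frac{(n-k)!}{n!}$ collapses to $\frac{1}{k!}$.

```python
from itertools import permutations
from math import comb, factorial

n = 6
perms = list(permutations(range(n)))

for k in range(1, n + 1):
    # Number of permutations fixing a particular set of k positions
    # (say positions 0..k-1): the other n-k letters go anywhere.
    fixed = sum(1 for p in perms if all(p[i] == i for i in range(k)))
    assert fixed == factorial(n - k)

    # Total contribution of all size-k terms: C(n,k) * (n-k)!/n! = 1/k!
    contribution = comb(n, k) * factorial(n - k) / factorial(n)
    assert abs(contribution - 1 / factorial(k)) < 1e-12
```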

Plugging this back into the formula above, we get that $1 - P[\cup_i A_i] = 1 - \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k!} = \sum_{k=0}^{n} \frac{(-1)^k}{k!} \approx \frac{1}{e}$.
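The whole computation can be verified numerically: a brute-force count of derangements (a small illustrative script, not from the lecture) agrees exactly with the truncated inclusion-exclusion sum, which converges rapidly to $1/e$.

```python
from itertools import permutations
from math import e, factorial

def derangement_prob(n):
    # Exact probability that no letter lands in its own envelope,
    # counted by brute force over all n! assignments.
    good = sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))
    return good / factorial(n)

def ie_sum(n):
    # Truncated inclusion-exclusion sum: sum_{k=0}^{n} (-1)^k / k!
    return sum((-1) ** k / factorial(k) for k in range(n + 1))

# The two agree exactly, and both are already within 1/(n+1)! of 1/e.
for n in range(1, 9):
    assert abs(derangement_prob(n) - ie_sum(n)) < 1e-12

print(derangement_prob(8), 1 / e)
```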
