Category: statistics · Difficulty: Basic · Tags: statistics, probability, expected-value

数学期望 (shùxué qīwàng)

expected value / mathematical expectation
4 minute read
Updated 2025-01-24

Core Concept

The expected value (or mathematical expectation) of a random variable is the weighted average of all possible values, where the weights are the probabilities.

Discrete Random Variable

For a discrete random variable $X$ taking values $x_1, x_2, \ldots, x_n$ with probabilities $p_1, p_2, \ldots, p_n$:

$$E(X) = \sum_{i=1}^{n} x_i \cdot p_i = x_1 p_1 + x_2 p_2 + \cdots + x_n p_n$$
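As a sanity check, here is a minimal Python sketch that applies this definition directly; the values and probabilities are illustrative (they happen to match Example 1 below).

```python
# Apply E(X) = sum of x_i * p_i for a small discrete distribution.
values = [1, 2, 3]
probs = [0.2, 0.5, 0.3]

# The probabilities must form a valid distribution.
assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"

expected_value = sum(x * p for x, p in zip(values, probs))
print(expected_value)  # ≈ 2.1
```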

Notation

  • $E(X)$ - Expected value of $X$
  • $\mu$ (mu) - Often used to denote the expected value
  • $\overline{X}$ - Sample mean (an estimate of $E(X)$)

Interpretation

The expected value represents:

  • The long-run average of many independent trials
  • The center of mass of the probability distribution
  • The fair value in gambling/finance contexts

Important: The expected value need not be a value the random variable can actually take. For example, a fair six-sided die has expected value 3.5, which can never appear on a single roll.
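To illustrate the long-run-average interpretation, here is a short simulation sketch (assuming NumPy is available) that rolls a fair die many times; the sample mean approaches 3.5 even though 3.5 can never be rolled.

```python
import numpy as np

# Long-run average: simulate many rolls of a fair six-sided die.
# E(X) = (1 + 2 + ... + 6) / 6 = 3.5, which is not a possible roll.
rng = np.random.default_rng(seed=0)
rolls = rng.integers(1, 7, size=1_000_000)  # uniform on {1, ..., 6}
print(rolls.mean())  # close to 3.5
```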

Properties of Expected Value

1. Linearity

$$E(aX + b) = aE(X) + b$$

where $a$ and $b$ are constants.

2. Sum of Random Variables

$$E(X + Y) = E(X) + E(Y)$$

This holds even if $X$ and $Y$ are NOT independent.

3. Product of Independent Variables

If $X$ and $Y$ are independent:

$$E(XY) = E(X) \cdot E(Y)$$

4. Expected Value of a Constant

$$E(c) = c$$
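These properties can be checked numerically. The sketch below is a simulation under assumed, arbitrarily chosen distributions; it compares empirical averages against what linearity, additivity, independence, and the constant rule predict.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 1_000_000

# Two independent samples with known means: E(X) = 2, E(Y) = 3.
x = rng.normal(loc=2.0, scale=1.0, size=n)
y = rng.exponential(scale=3.0, size=n)

print(np.mean(3 * x + 5), 3 * 2 + 5)    # linearity:    ~11
print(np.mean(x + y), 2 + 3)            # additivity:   ~5
print(np.mean(x * y), 2 * 3)            # independence: ~6
print(np.mean(np.full(n, 7.0)), 7)      # constant:     7
```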

Common Distributions

| Distribution | Expected Value |
| --- | --- |
| Bernoulli($p$) | $p$ |
| Binomial($n, p$) | $np$ |
| Uniform($\{1, 2, \ldots, n\}$) | $\dfrac{n+1}{2}$ |
| Geometric($p$) | $\dfrac{1}{p}$ |
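The table's formulas can be confirmed by simulation. The sketch below uses NumPy generators with arbitrary illustrative parameters; note that NumPy's geometric generator counts the trials up to and including the first success, which matches the $1/p$ convention used here.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n, p, size = 10, 0.3, 1_000_000

print(rng.binomial(1, p, size).mean(), p)                # Bernoulli(p):   p
print(rng.binomial(n, p, size).mean(), n * p)            # Binomial(n, p): np
print(rng.integers(1, n + 1, size).mean(), (n + 1) / 2)  # Uniform{1..n}:  (n+1)/2
print(rng.geometric(p, size).mean(), 1 / p)              # Geometric(p):   1/p
```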

CSCA Practice Problems

💡 Note: The following practice problems are designed based on the CSCA exam syllabus.

Example 1: Basic (Difficulty ★★☆☆☆)

A random variable XX has the following distribution:

| $X$ | 1 | 2 | 3 |
| --- | --- | --- | --- |
| $P$ | 0.2 | 0.5 | 0.3 |

Find $E(X)$.

Solution: $E(X) = 1 \times 0.2 + 2 \times 0.5 + 3 \times 0.3 = 0.2 + 1.0 + 0.9 = 2.1$

Answer: $E(X) = 2.1$


Example 2: Intermediate (Difficulty ★★★☆☆)

If $E(X) = 3$, find $E(2X + 5)$.

Solution:

Using linearity: $E(2X + 5) = 2E(X) + 5 = 2(3) + 5 = 11$

Answer: $11$


Example 3: Intermediate (Difficulty ★★★☆☆)

A fair coin is tossed 100 times. Let $X$ be the number of heads. Find $E(X)$.

Solution:

$X$ follows a Binomial distribution with $n = 100$, $p = 0.5$.

$$E(X) = np = 100 \times 0.5 = 50$$

Answer: $E(X) = 50$
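A quick simulation sketch (assuming NumPy) agrees with this answer: averaging the head counts over many repetitions of 100 tosses lands near 50.

```python
import numpy as np

# Example 3 check: X ~ Binomial(100, 0.5), so E(X) = np = 50.
rng = np.random.default_rng(seed=3)
heads = rng.binomial(n=100, p=0.5, size=200_000)
print(heads.mean())  # close to 50
```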


Example 4: Advanced (Difficulty ★★★★☆)

A box contains 3 red and 2 white balls. Two balls are drawn without replacement. Let $X$ be the number of red balls drawn. Find $E(X)$.

Solution:

Find the distribution of $X$:

$P(X = 0) = \dfrac{C_2^2}{C_5^2} = \dfrac{1}{10}$ (both white)

$P(X = 1) = \dfrac{C_3^1 \cdot C_2^1}{C_5^2} = \dfrac{6}{10}$ (one red, one white)

$P(X = 2) = \dfrac{C_3^2}{C_5^2} = \dfrac{3}{10}$ (both red)

$$E(X) = 0 \times \frac{1}{10} + 1 \times \frac{6}{10} + 2 \times \frac{3}{10} = 0 + 0.6 + 0.6 = 1.2$$

Answer: $E(X) = 1.2$
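Drawing without replacement is easy to simulate directly; this sketch (assuming NumPy) samples two balls from the box repeatedly and averages the red counts.

```python
import numpy as np

# Example 4 check: 3 red ("R") and 2 white ("W"), draw 2 without replacement.
rng = np.random.default_rng(seed=4)
box = np.array(["R", "R", "R", "W", "W"])

reds = [
    np.count_nonzero(rng.choice(box, size=2, replace=False) == "R")
    for _ in range(100_000)
]
print(np.mean(reds))  # close to 1.2
```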


Example 5: Advanced (Difficulty ★★★★★)

If $E(X) = 2$ and $E(X^2) = 8$, find $E((X-1)^2)$.

Solution:

Expand: $E((X-1)^2) = E(X^2 - 2X + 1) = E(X^2) - 2E(X) + 1 = 8 - 2(2) + 1 = 8 - 4 + 1 = 5$

Answer: $5$
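To double-check the algebra, the sketch below uses one concrete distribution chosen purely as an assumption to match the given moments: $X$ equal to 0 or 4 with probability 0.5 each, which has $E(X) = 2$ and $E(X^2) = 8$.

```python
# Assumed illustrative distribution with E(X) = 2 and E(X^2) = 8:
# X takes the values 0 and 4, each with probability 0.5.
values = [0, 4]
probs = [0.5, 0.5]

e_x = sum(x * p for x, p in zip(values, probs))        # 2.0
e_x2 = sum(x**2 * p for x, p in zip(values, probs))    # 8.0
e_shift_sq = sum((x - 1) ** 2 * p for x, p in zip(values, probs))

print(e_shift_sq)              # 5.0
print(e_x2 - 2 * e_x + 1)      # 5.0, matching the expansion
```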

Real-World Applications

1. Fair Games

A game is "fair" if $E(\text{profit}) = 0$.

Example: You pay ¥2 to flip a coin. Heads: win ¥4. Tails: win nothing.

$$E(\text{profit}) = 0.5 \times (4 - 2) + 0.5 \times (0 - 2) = 1 - 1 = 0$$

This is a fair game.
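The same conclusion follows from a simulation sketch (assuming NumPy): the average profit over many plays hovers around zero.

```python
import numpy as np

# Simulate the coin game: pay ¥2, win ¥4 on heads, nothing on tails.
rng = np.random.default_rng(seed=5)
heads = rng.integers(0, 2, size=1_000_000)  # 1 = heads, 0 = tails
profit = 4 * heads - 2
print(profit.mean())  # close to 0: a fair game
```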

2. Insurance

Insurance companies use expected value to set premiums.

3. Investment Analysis

Expected return helps compare investment options.

Common Mistakes

❌ Mistake 1: Confusing E(X) with Most Likely Value

Wrong: $E(X)$ is the value that occurs most often ✗

Correct: $E(X)$ is the weighted average; the mode is the most frequent value ✓

❌ Mistake 2: Forgetting Probabilities Must Sum to 1

Before calculating, verify: $\sum p_i = 1$

❌ Mistake 3: Wrong Linearity Application

Wrong: $E(X^2) = (E(X))^2$ ✗

Correct: In general, $E(X^2) \neq (E(X))^2$. The difference is the variance! ✓

Relationship with Variance

$$\text{Var}(X) = E(X^2) - (E(X))^2$$

Or equivalently: $E(X^2) = \text{Var}(X) + (E(X))^2$
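As a quick check, the sketch below verifies this identity on the distribution from Example 1 (values 1, 2, 3 with probabilities 0.2, 0.5, 0.3).

```python
# Verify Var(X) = E(X^2) - (E(X))^2 on the Example 1 distribution.
values = [1, 2, 3]
probs = [0.2, 0.5, 0.3]

e_x = sum(x * p for x, p in zip(values, probs))        # 2.1
e_x2 = sum(x**2 * p for x, p in zip(values, probs))    # 4.9
var_direct = sum((x - e_x) ** 2 * p for x, p in zip(values, probs))

print(var_direct, e_x2 - e_x**2)  # both ≈ 0.49
```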

Study Tips

  1. Remember the formula: $E(X) = \sum x_i p_i$
  2. Master linearity: $E(aX + b) = aE(X) + b$
  3. Know common distributions: the Binomial expected value is $np$
  4. Don't confuse with variance: $E(X^2) \neq (E(X))^2$

💡 Exam Tip: When given a probability distribution table, first verify probabilities sum to 1, then apply the definition directly!