We prove the first nontrivial (superlinear) lower bound in the noisy broadcast model of distributed computation. In this model, there are n + 1 processors P_0, P_1, ..., P_n. Each P_i, for i ≥ 1, initially has a private bit x_i, and the goal is for P_0 to learn f(x_1, ..., x_n) for some specified function f. At each time step, a designated processor broadcasts some function of its private bit and the bits it has heard so far. Each broadcast is received by the other processors, but each reception may be corrupted by noise. In this model, Gallager gave a noise-resistant protocol that allows P_0 to learn the entire input in O(n log log n) broadcasts. We prove that Gallager's protocol is optimal up to a constant factor. Our lower bound follows from a lower bound in a new model, the generalized noisy decision tree model, which may be of independent interest.
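To make the model concrete, the following sketch simulates the trivial repetition baseline in the noisy broadcast model: each processor broadcasts its bit several times over a binary symmetric channel, and P_0 decodes by majority vote. This is a hypothetical illustration of the model only, not Gallager's O(n log log n) protocol; the noise rate `eps` and repetition count `reps` are assumed parameters.

```python
import random

def noisy_broadcast_repetition(bits, eps=0.1, reps=9, rng=None):
    """Naive repetition protocol in the noisy broadcast model.

    Each processor P_i broadcasts its private bit `reps` times; each
    reception by P_0 is independently flipped with probability `eps`
    (a binary symmetric channel). P_0 recovers each bit by majority
    vote. Taking reps = Theta(log n) makes P_0 learn all n bits with
    high probability, for O(n log n) broadcasts total -- the baseline
    that Gallager's protocol improves to O(n log log n).
    """
    rng = rng or random.Random(0)
    decoded = []
    for b in bits:
        # Each of the `reps` receptions is corrupted independently.
        received = [b ^ (rng.random() < eps) for _ in range(reps)]
        decoded.append(1 if 2 * sum(received) > reps else 0)
    return decoded

# Example: with a modest repetition count, P_0 usually recovers the input.
inputs = [1, 0, 1, 1, 0, 0, 1, 0]
print(noisy_broadcast_repetition(inputs, eps=0.1, reps=9))
```

With `eps = 0` the channel is noiseless and a single repetition already suffices; the interest of the model is entirely in the noisy case.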