[[RD the Bayesian way|Chapter 2.2.4]] describes the default `brms` priors as [[Weakly informative prior distributions|weakly informative]]. What were they actually? We can see the details with a single `get_prior()` call:
```r
library(brms)

get_prior(future_spending ~ status_platinum + x_c + status_platinum:x_c,
          data = df_local)
```
```
                        prior     class                     coef
                       (flat)         b
                       (flat)         b      status_platinumTRUE
                       (flat)         b  status_platinumTRUE:x_c
                       (flat)         b                      x_c
 student_t(3, 9375.3, 1393.1) Intercept
      student_t(3, 0, 1393.1)     sigma
```
The intercept and the residual standard deviation get [[Weakly informative prior distributions|weakly informative]] `student_t` priors centered on the data's location and scale. But every regression coefficient, including our focal coefficient for Platinum status, gets a **flat improper prior**, so its posterior is driven entirely by the likelihood. When we report a posterior mean of \$1,574 with HDI [\$1,506, \$1,643] in [[RD the Bayesian way|the chapter]], that is the posterior under a flat prior, and it agrees with the [[Statistical modeling of RD|frequentist estimate]] of \$1,567 precisely because both are likelihood-driven.
So if we put a *real* prior on the focal coefficient, the posterior should move, right? Let's try three alternatives:
- **Tight informative** around the original business hypothesis of [[Design Pattern II - Regression Discontinuity (RD)|\$2,000]]: $\mathcal{N}(2000, 200)$
- **Skeptical** around zero: $\mathcal{N}(0, 500)$
- **Very flat** (proper but uninformative): $\mathcal{N}(0, 10000)$
```r
prior_tight <- prior(normal(2000, 200), class = "b", coef = "status_platinumTRUE")
prior_skep <- prior(normal(0, 500), class = "b", coef = "status_platinumTRUE")
prior_flat <- prior(normal(0, 10000), class = "b", coef = "status_platinumTRUE")
```
We'll refit `model_rd` four times — once with the default flat prior, once with each of the three above — and tabulate the posterior of the treatment effect at the cutoff. These priors refer to the unweighted, uniform-bandwidth model used in the chapter; a triangular-weighted variant would be a slightly different Bayesian model.
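A sketch of those refits, assuming the chapter's formula and data (the exact fitting call behind `model_rd` may differ; `seed` and `refresh` here are illustrative, and `posterior_summary()`'s `variable` argument assumes a recent `brms`):

```r
# Refit under each prior spec; NULL falls back to the brms defaults.
# Assumes df_local and the prior_* objects defined above are in scope.
specs <- list(default   = NULL,
              flat      = prior_flat,
              skeptical = prior_skep,
              tight     = prior_tight)
fits <- lapply(specs, function(p) {
  brm(future_spending ~ status_platinum + x_c + status_platinum:x_c,
      data = df_local, prior = p, seed = 2026, refresh = 0)
})
# Posterior mean and 95% interval for the focal coefficient in each fit
sapply(fits, function(f)
  posterior_summary(f, variable = "b_status_platinumTRUE")[1, ])
```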
```
spec                               mean  hdi_lo  hdi_hi  p_gt_0
Default (brms weakly informative)  1574    1504    1641       1
Flat: normal(0, 10000)             1574    1501    1640       1
Skeptical: normal(0, 500)          1565    1492    1635       1
Tight: normal(2000, 200)           1586    1518    1657       1
```
The four posteriors are nearly identical. The default and the proper-flat prior $\mathcal{N}(0, 10000)$ are indistinguishable, which is a sanity check: at this scale the proper flat is functionally equivalent to the improper flat. The skeptical prior pulls the posterior mean down by \$9 toward zero; the tight prior pulls it up by \$12 toward \$2,000. Neither shift is large.
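Those \$9 and \$12 shifts can be back-of-enveloped with the conjugate normal-normal formula, treating the flat-prior posterior as a summary of the likelihood. A sketch with assumed inputs (the standard error of roughly 35 is read off the HDI width above, not taken from the fit object):

```r
# Normal-normal shrinkage sketch. Assumed: under a flat prior the focal
# coefficient's posterior is ~ N(1574, 35^2), with se ~ 35 from the HDI width.
lik_mean <- 1574
lik_se   <- 35
shrink <- function(prior_mean, prior_sd) {
  w <- (1 / prior_sd^2) / (1 / prior_sd^2 + 1 / lik_se^2)  # prior's weight
  w * prior_mean + (1 - w) * lik_mean
}
shrink(2000, 200)   # roughly 1587, close to the tabulated 1586
shrink(0, 500)      # roughly 1566, close to the tabulated 1565
shrink(0, 10000)    # essentially 1574: the proper flat changes nothing
```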
![[oh-my-priors-rd.png]]
The four densities are all but stacked on top of each other. This is partly because the likelihood is sharp enough to overwhelm any of these priors with the [[MSE-optimal bandwidth|MSE-optimal bandwidth]] window holding 13,535 observations. Even the tight prior, a normal with $\sigma = 200$, which would dominate a likelihood from, say, 30 observations, is invisible against a likelihood with thousands of observations on each side of the cutoff.
> [!NOTE]
> The fact that priors barely matter here is not a property of Bayesian RD generally; it is a property of *this* dataset and bandwidth. Shrink to a [[Oh my! Bandwidth sensitivity in the RD model|local-randomization-style window]] of \$200, and you would have hundreds of observations rather than thousands; tighten further to dozens, and the tight prior would noticeably pull the posterior toward \$2,000. Bayesian RD with a small effective sample is a place where priors matter more, and where the prior sensitivity exercise above becomes essential.
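To put rough numbers on that regime change: if the standard error scales like $1/\sqrt{n}$, the same conjugate approximation shows how the prior's share of the posterior grows as the window shrinks. A sketch with assumed inputs (se of 35 at 13,535 observations, as above):

```r
# Assumed: se ~ 35 at n = 13535 (the full MSE-optimal window); rescale the
# standard error as 1/sqrt(n) to mimic narrower bandwidths around the cutoff.
prior_weight <- function(n, prior_sd, se_full = 35, n_full = 13535) {
  se_n <- se_full * sqrt(n_full / n)
  (1 / prior_sd^2) / (1 / prior_sd^2 + 1 / se_n^2)  # prior's posterior weight
}
prior_weight(13535, 200)  # full window: the tight prior carries ~3% of the weight
prior_weight(200, 200)    # a few hundred observations: roughly two-thirds
prior_weight(30, 200)     # dozens of observations: over 90%, the prior dominates
```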
So, defaults are fine *when the data are rich enough to drown them out*, which is exactly when a prior would have done the least anyway. The genuinely load-bearing case for Bayesian RD — small-bandwidth, sparse-cutoff settings where uncertainty quantification and probability statements matter most — is the regime where the default flat prior on the coefficient is least defensible.
All in all, "we used the default weakly informative priors" is the kind of statement that should always be unpacked because the defaults may be more or less informative than the label suggests.
> [!info]- Last updated: May 14, 2026