Simulating the Monty Hall problem on GPU with HIP
A one-day HIP build that brute-forces Monty Hall (N doors) to verify switch vs stay odds at scale.
Published: 08-08-2025
Tags: HIP, CUDA, GPU, Simulation, Probability
Why
I couldn’t sleep and kept thinking about the Monty Hall problem. The next day I wrote a HIP simulation to brute-force it on the GPU and watch the probabilities converge (for 3 doors, switch ≈ 66.7%, stay ≈ 33.3%). I then generalized it to N doors and let the GPU chew through hundreds of millions of trials in seconds.
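The per-trial logic is simple enough to sketch as a host-side C++ analogue (this is illustrative, not the actual HIP kernel): with 3 doors the host always opens a goat door, so switching wins exactly when the first pick was wrong.

```cpp
#include <cassert>
#include <random>

// One 3-door Monty Hall trial: returns true if the given strategy wins.
// The host's reveal never needs to be simulated explicitly for 3 doors:
// switching wins iff the initial pick missed the prize.
bool trial(std::mt19937 &rng, bool switching) {
    std::uniform_int_distribution<int> door(0, 2);
    int prize = door(rng);
    int pick  = door(rng);
    return switching ? (pick != prize) : (pick == prize);
}

// Empirical win rate over n independent trials.
double win_rate(bool switching, int n = 1000000) {
    std::mt19937 rng(42);
    int wins = 0;
    for (int i = 0; i < n; ++i) wins += trial(rng, switching);
    return double(wins) / n;
}
```

A million trials is already enough to see the rates settle near 2/3 and 1/3; the GPU version just runs this loop body across many threads.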
What I built in a day
- HIP + hipRAND simulation that supports any door count (3–128) and both strategies (switch/stay).
- GPU-parallel kernel with contiguous memory access and configurable grid sizes for massive iteration counts.
- CLI flags for strategy, iterations, and doors; prints device info, kernel launch params, and win/loss rates.
- Quick Python plotting script to show convergence over time.
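The N-door generalization can be sketched the same way, assuming the host opens exactly one goat door and a switching player re-picks uniformly among the remaining closed doors (the post doesn't spell out the host's behavior, so this is one plausible reading; under it, switching wins with probability (N−1)/(N·(N−2)), which recovers 2/3 at N = 3):

```cpp
#include <cassert>
#include <random>

// One N-door trial (host-side sketch, not the actual kernel).
// Host opens a single door that is neither the pick nor the prize;
// a switching player then re-picks among the other closed doors.
bool trial_n(std::mt19937 &rng, int doors, bool switching) {
    std::uniform_int_distribution<int> d(0, doors - 1);
    int prize = d(rng), pick = d(rng);
    if (!switching) return pick == prize;
    int open;
    do { open = d(rng); } while (open == pick || open == prize);
    int next;  // uniformly random closed door other than pick and open
    do { next = d(rng); } while (next == pick || next == open);
    return next == prize;
}

double win_rate_n(int doors, bool switching, int n = 1000000) {
    std::mt19937 rng(42);
    int wins = 0;
    for (int i = 0; i < n; ++i) wins += trial_n(rng, doors, switching);
    return double(wins) / n;
}
```

Note how the switching advantage shrinks as N grows under this rule: at N = 10 switching wins ≈ 11.25% vs 10% for staying, which is why watching convergence at scale gets more interesting for large door counts.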
Interesting bits
- GPU acceleration: 100M iterations in ~0.7 s on an RX 6900 XT; >1000× faster than a single-threaded Python loop at similar scales.
- Generalized doors: Defaults to 3, but runs up to 128 doors with the same kernel.
- Portability: HIP targets AMD and NVIDIA GPUs; hipRAND drives the randomness.
- Robustness: Guards for invalid door counts, grid sizing, and memory limits.
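The post doesn't show the grid-sizing guards, but the general idea can be sketched as a small hypothetical helper (the names `plan`/`Launch` and the 65 535-block cap are my assumptions, not taken from the project): cap the grid, then give each thread enough trials to cover the requested total.

```cpp
#include <cassert>

// Hypothetical launch-parameter helper: spread `trials` across a capped
// grid, computing how many trials each thread must run to cover them all.
struct Launch { unsigned blocks, threads, trialsPerThread; };

Launch plan(unsigned long long trials, unsigned threads = 256,
            unsigned maxBlocks = 65535) {
    // Ceil-divide trials over threads-per-block to get the desired grid.
    unsigned long long want = (trials + threads - 1) / threads;
    unsigned blocks = want > maxBlocks ? maxBlocks : (unsigned)want;
    unsigned long long totalThreads = (unsigned long long)blocks * threads;
    // Each thread loops this many times; total coverage >= trials.
    unsigned per = (unsigned)((trials + totalThreads - 1) / totalThreads);
    return {blocks, threads, per};
}
```

With 100M trials and 256 threads per block, the grid caps out at 65 535 blocks (~16.8M threads), so each thread runs 6 trials; per-thread win counters are then reduced on the host.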
Takeaways
- The classic counterintuitive result holds: switching wins about twice as often as staying.
- GPUs make it trivial to brute-force probability problems at absurd scales, letting you see convergence in seconds.
- HIP’s CUDA-like model and hipRAND make quick, portable experiments straightforward.