I couldn’t sleep and kept thinking about the Monty Hall problem. The next day I wrote a HIP simulation to brute-force it on the GPU and watch the probabilities converge (switch ≈ 66.7%, stay ≈ 33.3% for 3 doors). I generalized to N doors and let the GPU chew through hundreds of millions of trials in seconds.
HIP + hipRAND simulation supporting door counts from 3 to 128 and both strategies (switch/stay).
GPU-parallel kernel with contiguous memory access and configurable grid sizes for massive iteration counts.
CLI flags for strategy, iterations, and doors; prints device info, kernel launch params, and win/loss rates.
Quick Python plotting script to show convergence over time.
Win rates converging to ~66.7% (switch) and ~33.3% (stay) for 3 doors.
GPU acceleration: 100M iterations in ~0.7 s on an RX 6900 XT; >1000× faster than a single-threaded Python loop at similar scales.
Generalized doors: Defaults to 3, but runs up to 128 doors with the same kernel.
Portability: HIP targets AMD and NVIDIA GPUs; hipRAND drives the randomness.
Robustness: Guards for invalid door counts, grid sizing, and memory limits.
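The measured rates match the closed form. Assuming the host opens all but one of the remaining losing doors (the classic rules, generalized), staying wins only when the first pick was right:

```latex
P(\text{stay}) = \frac{1}{N}, \qquad
P(\text{switch}) = 1 - \frac{1}{N} = \frac{N-1}{N}
```

For N = 3 this gives 2/3 ≈ 66.7% and 1/3 ≈ 33.3%; at N = 128 switching wins 127/128 of the time. If the host instead opened only a single door, the split would be different, so the assumed host behavior matters.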
The classic counterintuitive result holds: with 3 doors, switching wins about twice as often as staying.
GPUs make it trivial to brute-force probability problems at absurd scales, letting you see convergence in seconds.
HIP’s CUDA-like model and hipRAND make quick, portable experiments straightforward.
Source