Your tests aren't flaky... they're broken. Fix them!

23 July 2025

We’ve all encountered that one test that usually passes, but sometimes, for no obvious reason, flashes red. Same code, different result. Rerun and it’s magically green again.

That isn’t “your tests being weird again.” It’s your code, or your test, waving a giant red flag begging for help. Your first impulse might be to rerun the test and forget about it; after all, tests sometimes fail, right?

That’s a bad impulse; crush it ruthlessly. Flaky tests are dangerous.

Why are flaky tests dangerous?
🚩 Mask real bugs: race conditions, dirty state, time‑zone glitches, you name it.
🚩 Erode trust: devs hammer rerun by reflex, and real regressions slip through.
🚩 Cost money and time: every retry blocks merges and burns CI minutes.
🚩 Compound risk: ignore a flake in staging, and it may bite in production.

There are a lot of causes for flakiness, but here are the usual suspects:
⚡ Race conditions / async timing
🧹 Leaked state between tests
⏰ Fixed time or locale dependence
🎲 Buggy test data generation, e.g., randomly generated test data with no guaranteed uniqueness
🌐 Network or third‑party calls
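The random-test-data suspect is sneakier than it looks. A tiny sketch (the `random_username` helper is hypothetical, standing in for any fixture factory) shows how data with no uniqueness guarantee turns into a birthday-paradox lottery:

```python
import random
import string

def random_username(length: int = 8) -> str:
    # Hypothetical fixture helper: random name, NO uniqueness guarantee.
    return "".join(random.choices(string.ascii_lowercase, k=length))

def has_collision(n_names: int, length: int) -> bool:
    # Did any two "independent" test users end up with the same name?
    names = [random_username(length) for _ in range(n_names)]
    return len(set(names)) < n_names
```

With `length=2` there are only 26² = 676 possible names, so drawing 1,000 of them guarantees a collision, while two draws almost never collide. A test asserting two random users are distinct passes thousands of times, then flashes red exactly once, and everyone shrugs.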

Some advice
Address immediately: fix on the spot, or raise a high‑priority ticket.
Stress test: after fixing, run the test several times to ensure consistency.
Reduce nondeterminism: seed random number generators, freeze time, and mock external calls where practical.
Fix the root cause: flakes often expose real concurrency or state bugs.

A test that sometimes fails is a problem, not an annoyance. Fix it.

djangsters GmbH
