
Our first article explored Rosling’s Dramatic Instincts—the biases that lead CIOs and Project Managers to overreact to crises, vendor hype, and boardroom pressure.
This article explores the very human tendency to take a ‘good enough’ approach to problem-solving.
“Heuristics” is the formal name, but they are better known as ‘rules of thumb’.
Are you aware of what these rules of thumb are?
Fortunately, the late Hans Rosling had an idea (or 10) …
Wikipedia | Obituary
1) Locate the majority — “Is there really a gap?”
(i.e. avoid exaggerating a “gap” narrative; check whether the “us vs them / worst-off” framing is misleading)
As an example, the UK Government Digital Service (GDS) mandates that teams start with user needs and design for the bulk of users rather than for edge cases.
Before funding or build, teams validate where the majority of users actually are and what they need (so the “gap” isn’t assumed).
See the example here: GOV.UK Service Standard, “Understand users and their needs.”
2) Expect negative news — “Would improvement get attention?”
(i.e. the negativity instinct: we notice declines more than improvements)
Example: Incident metrics reporting
A security team might get loud attention when there’s a breach, but quieter successes (e.g. zero incidents for six months) rarely get reported. A mature CIO may invert that instinct: require dashboards that surface positive trends as well as negative ones. For example, “number of security posture improvements completed,” or “mean time to patch reduced over past year.” This helps balance the narrative so that risk mitigation is not only crisis-driven.
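A dashboard that gives good news equal billing can be as simple as reporting the direction of every metric, not just the alarming ones. A minimal Python sketch, with invented metric names and values:

```python
# Sketch of a balanced trend report; metric names and values are invented.
metrics = {
    "mean_time_to_patch_days": [14, 12, 9, 7],    # lower is better
    "open_critical_vulns":     [40, 35, 37, 28],  # lower is better
    "security_incidents":      [1, 0, 0, 0],      # lower is better
}

for name, series in metrics.items():
    delta = series[-1] - series[0]
    trend = "improved" if delta < 0 else "worsened" if delta > 0 else "flat"
    print(f"{name:26s} {series[0]:>3} -> {series[-1]:>3}  ({trend})")
```

The point is not the tooling but the discipline: improvements get printed on the same page as the problems.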
Analogue in project delivery
Project managers often fight for attention when things go wrong; improvements (e.g. defect rates falling, cycle times improving) often go unnoticed. A leader might institute periodic “positive-hype reports” that explicitly highlight improvements—this counters the bias that only bad news matters.
There are other examples; jobs figures, for instance. Layoffs command news attention, but a workforce recovering to the level it held just 12 months earlier goes unreported.
3) Imagine bending lines — “Why would this line not bend?”
(i.e. reject assumption of linear trends; anticipate inflection points)
Case: Telecom adoption curves / mobile data usage
Telcos long assumed that data usage would keep doubling annually, extrapolating the trend indefinitely. But saturation, pricing shifts, and regulatory caps tended to make adoption curves ‘bend’. Some telco CIOs built that non-linear ceiling into their forecasts to avoid over-investing in capacity that would sit underutilised after the inflection.
Project context / analogue
You might see a linear schedule extension and assume the same slope continues. But a project sponsor might push additional funding or priority, bending the trend. A factful PM will model possible inflection points (e.g. vendor arriving early, or resource scaling) and include scenario curves, rather than a straight-line extension.
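To make the “bend” concrete, here is a minimal Python sketch contrasting a straight-line extrapolation with a logistic scenario curve. The usage history is illustrative and the saturation ceiling of 50 is an assumption a planner would stress-test:

```python
import numpy as np

# Hypothetical monthly data-usage history (PB) -- illustrative only.
usage = np.array([10, 12, 15, 18, 22, 26, 31, 35, 39, 42, 44, 45])

# Naive straight-line extrapolation from the last observed slope.
slope = usage[-1] - usage[-2]
linear_forecast = usage[-1] + slope * np.arange(1, 13)

# Logistic (S-curve) scenario: assume demand saturates at a ceiling.
# These parameters are planning assumptions, not fitted truth.
ceiling, midpoint, rate = 50.0, 6.0, 0.5

def logistic(t):
    return ceiling / (1 + np.exp(-rate * (t - midpoint)))

bend_forecast = logistic(np.arange(12, 24))

print("Month 24, straight line:", linear_forecast[-1])                # keeps climbing
print("Month 24, with a bend: ", round(float(bend_forecast[-1]), 1))  # flattens near 50
```

Neither curve is a prediction; the value is in putting both in front of decision-makers before committing to capacity spend.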
A great case study is Netflix’s pivot from DVDs to streaming: leadership anticipated the curve (a non-linear demand shift) and built the architecture for the inflection.
They didn’t extrapolate a straight line of DVD growth; they planned for the bend and moved early.
Source: Oxford Executive Institute case write-up on Netflix’s transition from DVD rental to streaming.
4) Calculate the risk — “Is it really dangerous?”
(i.e. don’t be dominated by fear; quantify absolute, not just relative, risk)
Example: Data breach vs system downtime
An executive board might dramatize a small data leak with sensational headlines. But a CIO quantifies: how many records? What is the regulatory fine exposure? What is the probability of escalation vs the cost of overreaction? Many responses (e.g. shifting everything to air-gapped systems) are overkill once you model risk vs cost.
Case from literature
In project risk management literature, risk triage is standard: only a small set of risks with high probability & high impact merit mitigation. Low-probability, low-impact risks are accepted or monitored. This is exactly a “calculate, don’t dramatize” mindset.
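The triage arithmetic is simple enough to sketch. All probabilities, impacts, and costs below are illustrative assumptions:

```python
# Risk triage sketch: mitigate only where expected loss exceeds mitigation cost.
risks = [
    # (name, annual probability, impact if it occurs ($), cost of mitigation ($))
    ("Small data leak",         0.10,   500_000,  2_000_000),
    ("Ransomware on legacy OS", 0.30, 8_000_000,  1_500_000),
    ("Insider exfiltration",    0.02, 1_000_000, 10_000_000),  # mitigation = full air-gap
]

for name, prob, impact, mitigation in risks:
    expected_loss = prob * impact
    verdict = "mitigate" if expected_loss > mitigation else "accept / monitor"
    print(f"{name:26s} expected loss ${expected_loss:>11,.0f} "
          f"vs mitigation ${mitigation:>11,.0f} -> {verdict}")
```

On these made-up numbers, the dramatic-sounding leak is accepted and monitored, while the unglamorous legacy-OS risk is the one worth funding.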
A relevant cybersecurity example: after WannaCry, the NHS and the UK National Audit Office documented a formal risk-based response (patching, segmentation, preparedness) rather than panic buying.
NHS leaders quantified impact and probability and hardened systems where the risk was highest, instead of reacting with knee-jerk spending.
Sources: NAO Investigation (2017); NHS “Lessons learned” CIO review (2018).
5) Check the proportions — “Is it big in comparison?”
(i.e. raw numbers can mislead; always relate to denominators or context)
Example: Vendor cost escalation
A vendor raises costs by $2M. That sounds large, but relative to a $500M portfolio it is 0.4%. A CIO might choose to accept it and reallocate elsewhere rather than treat it as a deal-breaking disruption. Without checking proportions, one might treat $2M as catastrophic.
Analogue in project metrics
You might hear “we have 500 defects,” which sounds huge. But if the system has 1,000,000 lines of code, that’s a defect density of 0.5 per thousand LOC—if competitor benchmarks show 0.7, you’re doing better. Project leads need to check proportions to see whether the “big number” is really alarming or par for the course.
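The denominator check in that paragraph is one line of arithmetic, sketched here with the article’s own numbers and the assumed 0.7 benchmark:

```python
defects = 500
lines_of_code = 1_000_000
benchmark_per_kloc = 0.7  # assumed competitor benchmark

density = defects / (lines_of_code / 1_000)  # defects per thousand LOC
print(f"Defect density: {density} per KLOC")  # 0.5
print("Better than benchmark" if density < benchmark_per_kloc
      else "At or worse than benchmark")
```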
Another example: the Dropbox leadership compared the unit economics of hyperscale cloud storage against owning the infrastructure itself, and chose a reverse-migration off AWS (“Magic Pocket”).
They looked past a scary absolute number (hundreds of PB) to proportional costs and long-run economics, and made the right call.
Sources: Wired feature on Dropbox’s AWS exit; InformationWeek on the 500PB move; analysis five years on.
6) Question your categories — “How are they different?”
(i.e. avoid rigid “buckets,” especially overgeneralising)
Example: “Legacy vendor” vs “Innovation partner”
Executives often label older suppliers as “legacy” or “obsolete.” But in many cases those same vendors have evolved or offer complementary niche services. During a large bank’s digital transformation, the CIO reorganised vendors not by “legacy vs modern” but along capability axes (e.g. core ops, API, cloud migration), and repurposed some “legacy” suppliers into integration roles.
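A sketch of the re-bucketing, with hypothetical vendor names and capabilities, shows how a “legacy” supplier reappears in useful roles once you index by capability:

```python
# Index vendors by capability instead of a "legacy vs modern" label.
vendors = {
    "MainframeCo": {"core_ops", "integration"},  # the "legacy" supplier
    "CloudStart":  {"cloud_migration", "api"},
    "OldERP":      {"core_ops", "api"},
}

by_capability: dict[str, list[str]] = {}
for vendor, capabilities in vendors.items():
    for capability in capabilities:
        by_capability.setdefault(capability, []).append(vendor)

for capability, names in sorted(by_capability.items()):
    print(f"{capability:16s} -> {', '.join(sorted(names))}")
```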
Project analogy
You might categorise stakeholders too crudely as “resistant / supportive / neutral.” But individual stakeholders evolve. Slicing them across concern dimensions (budget, compliance, performance) instead gives a more nuanced view and lets you tailor engagement.
Another example: ING’s executive team in the Netherlands re-categorised work from traditional departments into tribes, squads, and chapters during its agile transformation. They challenged the old “IT vs Business” department categories and reorganised the business by value stream.
Sources: Harvard Business School case; Harvard Business Review article.
7) Notice slow changes — “Isn’t it always changing slowly?”
(i.e. change is often incremental, not dramatic)
Example: Culture shift interventions
When transforming IT culture (e.g. DevOps adoption), the changes show up slowly—smaller lead times, fewer failures, small process tweaks—not giant overnight shifts. A CIO who expects overnight revolution will be disappointed. The realistic leader tracks incremental KPIs (e.g. deployment frequency, rollback rate) and accepts that visible change is cumulative.
Analogue: Portfolio performance
In a multi-year programme, benefits often accumulate gradually. For instance, reducing overhead by 2% each year compounds to a reduction of nearly 10% over 5 years, often more than waiting for a “big bang” reorg would deliver. Portfolio leaders who capture and report the slow upward drift keep stakeholder confidence and momentum.
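The compounding is easy to verify:

```python
# Compounding a 2% annual overhead reduction over 5 years.
overhead = 100.0          # index the starting overhead at 100
for year in range(1, 6):
    overhead *= 0.98      # 2% reduction each year
    print(f"Year {year}: overhead index {overhead:.1f}")

print(f"Cumulative reduction: {100 - overhead:.1f}%")  # ~9.6%, not just 2%
```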
Another example: Capital One’s tech leadership spent years moving 100% to the cloud, exiting its data centres in 2020, then continued improving methodically.
The benefits accumulated only gradually, but the leadership communicated the multi-year arc instead of promising overnight revolution.
Source: Capital One engineering blog, “Lessons from our cloud migration journey.”
8) Use multiple tools — “What other solutions exist?”
(i.e. don’t rely on a single lens or method; mix tools, perspectives)
CIO / Executive example:
Rather than depend only on financial metrics, mature executives combine balanced scorecards, scenario planning, systems modeling, and stakeholder sentiment dashboards. If one tool shows “all good,” another might reveal fragility. How many businesses depend on only one dashboard?
PM / Project example
In a large infrastructure project, the PM uses Earned Value Management (EVM), risk heat maps, Monte Carlo simulations, and stakeholder sentiment analysis together. If EVM says “on track” but stakeholder feedback says “slipping,” the PM catches misalignment early.
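A toy Monte Carlo run, with hypothetical three-point task estimates, shows the kind of signal a single-number schedule hides:

```python
import random

# Task durations in days as (optimistic, most likely, pessimistic) estimates.
tasks = [(10, 15, 30), (5, 8, 20), (12, 14, 25)]
deadline = 45
trials = 10_000

hits = 0
for _ in range(trials):
    total = sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    if total <= deadline:
        hits += 1

print(f"P(meeting the {deadline}-day deadline) ~ {hits / trials:.0%}")
```

A plan that looks “on track” as a point estimate may clear the deadline in fewer than half of the simulated futures; that is the misalignment worth catching early.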
Another example: Google SRE institutionalised postmortems that draw on multiple analysis techniques (5-whys, fault trees, timelines, etc.) to understand incidents. Rather than relying on a single lens, they combine tools to build a full picture.
Sources: Google SRE Book chapter “Postmortem Culture”; SRE conference deck.
9) Resist pointing the finger — “What system made this possible?”
(i.e. avoid the blame instinct; instead look for systemic causes)
Case: Healthcare IT outages
If a hospital’s EMR goes down repeatedly, the blame might fall on the vendor. A factful CIO might instead audit change control, network architecture, capacity buffers, and fault tolerance. The real cause might be brittle dependencies, not just vendor error.
Project example
When a project misses deadlines, it’s easy to blame a specific team or person. But a systems-minded PM will look at interdependencies, bottlenecks, resource allocation, feedback loops, and governance as underlying causes. This is a core tenet of systems thinking in projects. Literature supports this: project management scholars call for systems thinking to uncover root cause structures rather than linear blame.
Source: Project Management Institute
Another example is Google’s blameless postmortem policy which explicitly forbids blaming individuals and focuses on systemic contributors and design/ops fixes.
This is the antidote to the blame instinct—repairing the system, not the person.
Source: Google SRE Book chapter “Postmortem Culture.”
10) Take small steps — “Can we make decisions as we go?”
(i.e. don’t overcommit before testing; use incremental experiments)
Example: Agile pilot before full rollout
Before migrating the entire IT estate to a new cloud platform, a CIO might run a pilot on 5% of services, validate assumptions, learn, and then scale. That is literally “small steps.”
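One common way to carve out a stable pilot cohort is deterministic hashing, sketched below against a hypothetical service estate:

```python
import hashlib

def in_pilot(service_name: str, percent: int = 5) -> bool:
    """Deterministically assign a service to the pilot cohort.

    Hashing gives a stable split: the same ~5% of services land in
    the pilot on every run, so results are comparable over time.
    """
    digest = hashlib.sha256(service_name.encode()).hexdigest()
    return int(digest, 16) % 100 < percent

services = [f"service-{i:03d}" for i in range(200)]  # hypothetical estate
pilot = [s for s in services if in_pilot(s)]
print(f"{len(pilot)} of {len(services)} services in the pilot")
```

Validate on that cohort, then raise `percent` step by step as the evidence accumulates.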
Project context
Project leaders often adopt phased delivery: deliver a minimal viable product (MVP), validate with users, and iterate. This reduces risk and allows course corrections. This incremental approach embodies Rosling’s rule of thumb about urgency vs planning.
In the UK the Government Digital Service (GDS) mandates phased delivery with continual checks before scaling. These are small, reversible steps with evidence between phases, exactly as per Rosling’s “small steps” rule.
Sources: GDS Agile delivery guidance; GDS Way playbook.
SUMMARY
The greatest gift of Factfulness is perspective. It reminds us that progress and problems coexist — that the world, and our projects, can be both bad and better at the same time. For CIOs and project managers, that balance matters. When we pause before reacting, measure before judging, and seek systems before blaming individuals, we move from drama to data, from fear to proportion.
In the noise of transformation and technology, Factfulness becomes a quiet form of leadership — calm, curious, and deeply human.
Hans can have the last word, a poignant reminder of why we should practice Factfulness.
“Factfulness is about the stress-reducing habit of only carrying opinions for which you have strong supporting facts.”

Further Reading
- Factfulness book
- How I Learned to Understand the World
- Factfulness Part One
- GAPMINDER – where Factfulness is used to inform

Join the conversation — how do you Square the Triangle?