IQ Score Distribution
Understand the bell curve of IQ scores, what different ranges mean, and how the normal distribution shapes our interpretation of intelligence.
The Bell Curve Explained
IQ scores follow a mathematical pattern known as the normal distribution, or bell curve — so called because its shape resembles the silhouette of a bell. The vast majority of people cluster near the center of the distribution, with progressively fewer individuals at the extremes. This pattern is not arbitrary; it emerges naturally whenever a trait is influenced by many independent factors, as intelligence is. The central limit theorem in statistics tells us that when multiple independent variables contribute to an outcome, the resulting distribution tends toward normality.
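To see this tendency in action, here is a minimal Python sketch; the factor count and weights are arbitrary illustrations, not a model of real genetic or environmental influences. Each simulated score is a sum of many small independent contributions, and the printed histogram piles up in the middle:

```python
import random

# Each simulated "person" is the sum of many small, independent influences.
def simulated_trait(n_factors: int = 50) -> float:
    return sum(random.uniform(-1.0, 1.0) for _ in range(n_factors))

scores = [simulated_trait() for _ in range(100_000)]

# Crude text histogram: counts cluster near the center and thin out at the tails.
lo, hi, n_bins = -15.0, 15.0, 15
width = (hi - lo) / n_bins
counts = [0] * n_bins
for s in scores:
    if lo <= s < hi:
        counts[int((s - lo) / width)] += 1
for i, c in enumerate(counts):
    print(f"{lo + i * width:6.1f} | {'#' * (c // 400)}")
```

No bell shape is programmed in anywhere; it emerges purely from adding independent factors, which is the central limit theorem at work.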
In the standard IQ distribution, the mean is set at 100 and the standard deviation at 15. These parameters define the shape and spread of the curve. The mean represents the average score — the peak of the bell — and the standard deviation measures how far scores typically deviate from that average. A larger standard deviation would produce a wider, flatter curve, indicating more variability in the population; a smaller one would produce a narrower, taller curve.
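In symbols, the curve just described is the normal probability density with mean μ = 100 and standard deviation σ = 15:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right),
\qquad \mu = 100,\ \sigma = 15
```

The peak sits at x = μ, and σ in the exponent's denominator controls the spread: a larger σ stretches the same total probability over a wider interval, which is exactly the flatter curve described above.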
What the Numbers Mean
Understanding where your score falls on the bell curve is key to interpreting its significance. Here is a breakdown of the major IQ ranges and what they represent; a short code sketch after the list shows how each percentage follows from the curve itself:
Below 70 (approximately 2.3% of the population): Scores in this range may indicate an intellectual disability, though a single test score is never sufficient for diagnosis. Clinicians consider adaptive functioning, educational history, and daily living skills alongside test results.
70–84 (approximately 13.6% of the population): This range is sometimes described as below average or borderline. Individuals in this range may experience difficulties with complex academic tasks but can typically function independently in most areas of daily life.
85–115 (approximately 68.2% of the population): This is the average range, encompassing roughly two-thirds of the population. People in this range have the cognitive capacity to manage everyday intellectual demands, pursue higher education, and succeed in most occupations.
115–129 (approximately 13.6% of the population): Scores in this range indicate above-average intelligence. Individuals with IQs in this range often excel academically and tend to be well-represented in professional and managerial occupations.
130–144 (approximately 2.1% of the population): This range is generally classified as gifted. People scoring in this range demonstrate superior cognitive ability and may qualify for membership in high-IQ societies such as Mensa.
145 and above (approximately 0.1% of the population): This is the highly gifted or profoundly gifted range. Individuals at this level of ability are exceedingly rare — roughly 1 in 1,000 — and often display exceptional intellectual capabilities from an early age.
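As promised above, each of these band percentages falls directly out of the cumulative distribution function (CDF) of the normal curve. Here is a minimal Python sketch (standard library only; the band edges are the ones listed above):

```python
from math import erf, sqrt

MEAN, SD = 100, 15

def cdf(iq: float) -> float:
    """Fraction of the population scoring below `iq` under a normal N(100, 15)."""
    z = (iq - MEAN) / SD
    return 0.5 * (1 + erf(z / sqrt(2)))

bands = [("below 70", None, 70), ("70–84", 70, 85), ("85–115", 85, 115),
         ("115–129", 115, 130), ("130–144", 130, 145), ("145+", 145, None)]
for name, lo, hi in bands:
    share = (cdf(hi) if hi is not None else 1.0) - (cdf(lo) if lo is not None else 0.0)
    print(f"{name:>9}: {share:.1%}")
```

Running this prints 2.3%, 13.6%, 68.3%, 13.6%, 2.1%, and 0.1%, matching the figures above (small differences come from rounding).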
The Standard Deviation and Percentiles
The standard deviation is the key to translating IQ scores into percentiles — the percentage of the population that scores below a given value. In a normal distribution with mean 100 and standard deviation 15:
An IQ of 85 (one standard deviation below the mean) corresponds to the 16th percentile, meaning you scored higher than approximately 16% of the population.
An IQ of 100 corresponds to the 50th percentile, by definition.
An IQ of 115 (one standard deviation above the mean) corresponds to the 84th percentile.
An IQ of 130 (two standard deviations above the mean) corresponds to approximately the 98th percentile, the threshold for Mensa membership.
An IQ of 145 (three standard deviations above the mean) corresponds to roughly the 99.9th percentile.
These translations rely on the properties of the normal distribution and the z-score formula: z = (IQ - 100) / 15. The z-score tells you how many standard deviations your score is from the mean, and standard statistical tables convert z-scores into precise percentiles.
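As a sketch of that conversion in code (plain Python; the helper name iq_percentile is just for illustration), applying the z-score formula and the normal CDF reproduces the percentiles listed above:

```python
from math import erf, sqrt

def iq_percentile(iq: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Percentile rank of `iq` under a normal distribution with the given mean and sd."""
    z = (iq - mean) / sd                       # standard deviations from the mean
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF, scaled to percent

for iq in (85, 100, 115, 130, 145):
    print(f"IQ {iq}: z = {(iq - 100) / 15:+.0f}, percentile ≈ {iq_percentile(iq):.1f}")
```

The erf-based expression is simply the closed form of the standard normal CDF, so no statistical table is needed.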
Why the Distribution Matters
The bell curve distribution has profound implications for how we understand intelligence in society. Because the distribution is symmetric and unimodal, most people are concentrated near the middle, so the same point difference means very different things depending on where it falls. The difference between an IQ of 100 and 105 is minor in terms of rarity: both are solidly average. The difference between 140 and 145, by contrast, is a jump from roughly the 99.6th to the 99.9th percentile, making the higher score nearly three times as rare.
This statistical reality also means that extremely high or low scores are inherently less precise. At the tails of the distribution, there are fewer individuals in the norming sample, making it harder to establish reliable score estimates. A score of 160, for example, is so rare — roughly 1 in 30,000 — that its confidence interval is considerably wider than that of a score near the mean.
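The same machinery can express a score as a 1-in-N rarity, which makes the thinning of the tails vivid. This is a sketch under the idealized normal model; real norming samples are far too small to verify frequencies this extreme:

```python
from math import erf, sqrt

def rarity(iq: float) -> float:
    """Approximate 1-in-N frequency of scoring at or above `iq` under N(100, 15)."""
    z = (iq - 100) / 15
    upper_tail = 0.5 * (1 - erf(z / sqrt(2)))  # P(score >= iq)
    return 1 / upper_tail

for iq in (130, 145, 160):
    print(f"IQ {iq}: about 1 in {rarity(iq):,.0f}")
```

The idealized curve puts IQ 145 at about 1 in 740; the "roughly 1 in 1,000" figure quoted earlier rounds the 0.135% tail to 0.1%. These thin tails are exactly where norming samples run out of people, so such estimates are extrapolations rather than measurements.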
The Flynn Effect and Shifting Distributions
One of the most intriguing findings in intelligence research is the Flynn effect: the observation that raw IQ test scores increased steadily over the twentieth century, at a rate of approximately 3 points per decade. This means that a person at the 50th percentile of today's population would likely score above average on a test normed in 1980, not because people became smarter in any absolute sense, but because the population average has shifted upward since those norms were set.
The causes of the Flynn effect remain debated. Proposed explanations include improvements in nutrition, the expansion of education, increased environmental complexity, reduced exposure to infectious disease, and changes in test-taking strategies. Interestingly, some recent studies suggest the Flynn effect may be reversing in certain developed countries, with scores beginning to decline — a phenomenon that warrants close monitoring.
The practical consequence of the Flynn effect is that IQ test norms must be periodically updated. A test normed decades ago would overestimate contemporary IQ scores because the baseline of raw performance has shifted upward. This renorming process ensures that the distribution remains centered at 100, preserving the meaning of the score.
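As a toy illustration only, assuming a constant 3-point-per-decade drift (a simplification; real renorming collects a fresh standardization sample rather than applying a formula), restating a score against newer norms looks like this:

```python
def adjust_for_flynn(score: float, norm_year: int, test_year: int,
                     drift_per_decade: float = 3.0) -> float:
    """Restate `score` against norms from `test_year` instead of `norm_year`.

    Toy model: assumes the population mean rose linearly by
    `drift_per_decade` IQ points per decade, so older norms inflate scores.
    """
    decades_elapsed = (test_year - norm_year) / 10
    return score - drift_per_decade * decades_elapsed

# A score of 100 against 1980 norms sits about 12 points lower against
# 2020 norms, because the population baseline rose in the meantime.
print(adjust_for_flynn(100, norm_year=1980, test_year=2020))  # 88.0
```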
Skewness and Real-World Distributions
While the theoretical IQ distribution is a perfect bell curve, real-world data sometimes shows mild departures from normality. Some researchers have argued that the actual distribution is slightly positively skewed, meaning the right tail (very high scores) is somewhat longer than the left tail. This could occur if some of the genetic and environmental factors that influence intelligence combine multiplicatively rather than purely additively: purely additive independent factors produce a symmetric bell, whereas multiplicative effects stretch the upper tail.
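A quick simulation makes that distinction concrete (the factor ranges here are illustrative only): summing independent positive influences yields a symmetric distribution, while multiplying the very same influences yields a right-skewed one:

```python
import math
import random

random.seed(1)

def factors(n: int = 30) -> list[float]:
    """Independent positive influences, each slightly above or below 1 (illustrative)."""
    return [random.uniform(0.9, 1.1) for _ in range(n)]

def skewness(xs: list[float]) -> float:
    """Sample skewness: near 0 for a symmetric distribution, > 0 for a longer right tail."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

additive = [sum(factors()) for _ in range(50_000)]
multiplicative = [math.prod(factors()) for _ in range(50_000)]

print(f"additive skew:       {skewness(additive):+.2f}")        # close to 0
print(f"multiplicative skew: {skewness(multiplicative):+.2f}")  # clearly positive
```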
However, for practical purposes, the normal distribution provides an excellent approximation. The departures from perfect normality are small enough that they do not materially affect the interpretation of most IQ scores, particularly those within two standard deviations of the mean, which encompass over 95% of the population.
Interpreting Your Own Score
If you have taken or are considering an IQ test, understanding the bell curve helps you interpret your result in context. A score is not an absolute judgment of your worth or potential; it is a statistical statement about where your cognitive performance falls relative to a representative population. If you want to see where you land on the distribution, take the IQ test and receive a detailed breakdown of your cognitive profile.
Frequently asked questions
What IQ score is considered average?
The average IQ score is 100 by definition. The range of 85 to 115 encompasses approximately 68% of the population and is considered the normal or average range.
What percentage of people have an IQ above 130?
Approximately 2% of the population scores above 130 on a standard IQ test. This corresponds to roughly two standard deviations above the mean and is the threshold for most high-IQ societies.
Can IQ scores change over time?
While general intelligence is relatively stable across the lifespan, IQ scores can fluctuate due to factors like test anxiety, health conditions, practice effects, and changes in the norming standards. The Flynn effect also means that raw performance changes across generations, which is why tests are periodically renormed.
Why is the bell curve used for IQ scores?
The bell curve (normal distribution) is used because intelligence, like many traits influenced by multiple independent factors, naturally tends to follow a normal distribution in the population. It also provides a convenient mathematical framework for converting raw scores into standardized, interpretable values.
What is the highest possible IQ score?
There is no theoretical upper limit to an IQ score, but extremely high scores become increasingly unreliable due to the scarcity of norming data at the extreme tails. Scores above 160 are exceedingly rare — roughly 1 in 30,000 — and their precision is limited by the statistical properties of the distribution.
Ready to test your cognitive abilities?
Take the IQ test