Aug 31 / Iakovos Koukas

History of Intelligence Tests

Sir Francis Galton made the first attempt to create a standardized test of intelligence, in 1884. He theorized a correlation between intelligence and observable physical traits such as reaction time and head size. Alfred Binet and Théodore Simon created the first modern IQ test in 1905, the Binet-Simon test, which focused on knowledge questions and verbal abilities. It was designed to identify schoolchildren with intellectual disabilities.

A score on the Binet-Simon scale expressed the child's mental age, reflecting the fact that older children are, on average, more cognitively advanced than younger ones. Binet identified the age at which children could typically solve each item of the test and categorized the items accordingly. This let him estimate a child's position relative to their peers: if a child could solve items typically solved only by children four years older, that child was four years ahead in mental development.

Rather than subtracting chronological age from the mental age estimated from test performance, William Stern proposed dividing the one by the other. Thus the Intelligence Quotient, or IQ, was born, defined as mental age divided by chronological age (by later convention, multiplied by 100 to remove the decimal point).
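The ratio-IQ arithmetic above can be sketched in a few lines of Python (an illustrative sketch only, not a clinical measure; the function name is my own):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's quotient, multiplied by 100 per the later convention."""
    return mental_age / chronological_age * 100

# A 10-year-old performing at the level of a typical 12-year-old:
print(ratio_iq(12, 10))  # 120.0
```

Note how the same mental-age lead produces a larger quotient for a younger child, which is exactly why the ratio breaks down once development levels off.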

American psychologist Lewis Terman at Stanford University revised the Binet-Simon scale, norming it on a much larger American sample; the result was the Stanford-Binet Intelligence Scales (1916). He kept calculating IQ as (mental age / chronological age) × 100. This ratio method, however, only works well for children: once cognitive development levels off in adulthood, mental age stops growing while chronological age does not, so the quotient becomes meaningless.

David Wechsler developed the first version of his test, the Wechsler-Bellevue Intelligence Scale, in 1939; it later evolved into the Wechsler Adult Intelligence Scale (WAIS). He solved the problem of calculating adult IQ by comparing an individual's performance to the distribution of scores within their age group, which is approximately normal. In his method, someone who scored exactly at the mean of their age group received an IQ of 100, so the average adult's IQ would be 100, just like the average child's. He then used the normal (Gaussian) distribution, which is symmetric about the mean, to assign IQ scores according to the proportion of peers one outscored. For instance, someone whose score was two standard deviations above the mean would outperform about 98% of their peers and thus receive an IQ of 130 (with a standard deviation of 15 points). The WAIS has been revised several times to incorporate new research, and it remains the most widely used professional IQ test today.
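The deviation-IQ idea can be illustrated with Python's standard library (a minimal sketch assuming the modern convention of mean 100 and standard deviation 15; the function names are my own, not part of any test publisher's materials):

```python
from statistics import NormalDist

# Deviation IQ assumes scores within an age group are roughly normal.
iq_dist = NormalDist(mu=100, sigma=15)

def fraction_outscored(iq: float) -> float:
    """Proportion of age-group peers falling below a given IQ score."""
    return iq_dist.cdf(iq)

def z_to_iq(z: float) -> float:
    """Map a standardized score within the age group onto the IQ scale."""
    return 100 + 15 * z

print(round(fraction_outscored(130), 3))  # 0.977, i.e. about 98% of peers
print(z_to_iq(2.0))                       # 130.0, two SDs above the mean
```

This reproduces the figure in the text: two standard deviations above the mean corresponds to roughly the 98th percentile, hence an IQ of 130.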