Intelligence is one of the most complex of human characteristics. It has been described by some as a general capacity for comprehension and reasoning; others view it in a more specific way, relating it to individual abilities such as learning, memory, insight, judgment and creativity. Indeed, some researchers suggest that as many as 120 distinct abilities underlie intellectual functioning! Obviously it is difficult to consider all of these, so which abilities are the most important? What do we mean when we say someone is ‘intelligent’?
The American psychologist Louis L. Thurstone (1887-1955) in 1938 proposed seven primary abilities as the basic elements of intelligence: memory, reasoning, verbal comprehension, verbal fluency, ability to work with numbers, ability to visualize space-form relationships, and perceptual speed (the ability to grasp visual details quickly and recognize similarities and differences). But what of a person who is a brilliant musician, who can play a variety of instruments and compose masterpieces, but who performs poorly in maths and grammar and fails at school? Is that person intelligent?
The American psychologist David Wechsler (1896-1981) has defined intelligence as ‘the aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment’. This may not be the most satisfactory of definitions, but it does allow for the fact that so-called intelligent people can – and do – differ widely in their knowledge and skills. This is clearly true from even basic observations of other people. Is it correct, then, or even possible, to compare one person with another on the basis of intelligence?
Despite the difficulties in measuring intelligence and making comparisons there have, through the centuries, been many ingenious methods devised to do just this. In classical Greece, and in ancient and modern folklore, the ability to answer riddles correctly was considered to be a comparative measure of superior intellect. Today, a variety of intelligence tests are more reliably used for the same purpose. Whether such tests measure exactly what they claim to measure has been, and still is, the subject of much controversy. The first useful and most widely accepted intelligence test was developed in 1905 by a French psychologist, Alfred Binet (1857-1911). This was in response to a request by the French government to devise a test that would detect those children too slow intellectually to benefit from the regular school curriculum. Binet’s test contained more than 30 sub-tests investigating various abilities such as memory, general knowledge, word definition and comprehension. These sub-tests were given to ‘normal’ children whose scores were grouped according to age. Because certain levels of mental performance are reached at different ages, the testers could compare a child’s chronological age with his or her mental age as indicated by the scores in the tests. For example, if a ten-year-old child performed only as well as an ‘average’ eight-year-old then this child would be considered intellectually delayed by two years.
This test, known as the Binet-Simon test, has undergone many revisions, alongside the refinement of techniques for the ‘valid’ testing of intelligence in general. The modern version is known as the Stanford-Binet test, since the more recent changes have been made by Professor Lewis Terman (1877-1956) at Stanford University in the United States. However, there are many other tests of general intelligence, developed for use with both children and adults. Most are based on the Binet principle of several sub-tests for various abilities which can then be combined to give an overall score. Among the most popular and widely used of today’s tests are the Wechsler Adult Intelligence Scale (WAIS), the Wechsler Intelligence Scale for Children (WISC) and, within the last decade, the British Ability Scales (BAS) for children between six and 16 years old. The significance of the BAS test is that it has been specifically developed within the British culture and therefore may be more valid for use in Western Europe than the other tests mentioned, which were originally designed for use in the United States. This illustrates the importance of ‘culture fairness’. If an intelligence test is not matched to the individual’s cultural background, a poor score may result not because of inability but simply because of non-familiarity with the test material, which may be culturally biased.
Besides the abilities tested, the ways of measuring and scoring the tests have also changed over the years. Soon after the emergence of the Binet-Simon test, the German philosopher and psychologist William Stern (1871-1938) pointed out that being two years ‘behind’ at the age of four was very different from being two years behind at the age of ten. He suggested the idea of mental quotient (MQ), which is mental age divided by chronological age. Eventually this concept was developed further into the now widely used intelligence quotient (IQ), which is simply the MQ multiplied by 100. For example, a child ten years old whose mental age is assessed as that of an average ten-year-old has an IQ of 100 (10/10 x 100). A child of ten who is two years delayed in mental age as measured by the test has an IQ of 80 (8/10 x 100). In fact, there is a range of IQ scores between 75 and 125 (WISC and WAIS) which is considered within normal limits.
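The ratio-IQ arithmetic above can be sketched as a small function (the function name is ours, for illustration only):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's mental quotient (mental age / chronological age) scaled by 100."""
    return mental_age / chronological_age * 100

# The two worked examples from the text:
print(ratio_iq(10, 10))  # an average ten-year-old: 100.0
print(ratio_iq(8, 10))   # a ten-year-old two years delayed: 80.0
```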
For adults, an intelligence quotient based upon mental age does not hold, of course. It is meaningless to speak of, for instance, a sixteen-year-old with the mental age of a forty-year-old. The deviation IQ for adults was introduced by David Wechsler. The deviation IQ is not the quotient of a division, but compares an individual’s score with the average of his contemporaries. In either case, an intelligence test taken twice by the same person within a short space of time has to show more or less the same results; otherwise it is not a reliable test.
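Wechsler’s deviation IQ expresses how far an individual’s raw score lies from the average of his age peers. A minimal sketch, assuming the conventional scaling to a mean of 100 and a standard deviation of 15 (the peer scores here are invented for illustration):

```python
from statistics import mean, stdev

def deviation_iq(raw_score: float, peer_scores: list[float]) -> float:
    """Compare a raw test score with the scores of age contemporaries,
    rescaled so the peer average maps to 100 and one standard deviation to 15."""
    m = mean(peer_scores)
    s = stdev(peer_scores)
    return 100 + 15 * (raw_score - m) / s

# Illustrative peer group with mean 50 and standard deviation 10:
peers = [40, 50, 60]
print(deviation_iq(50, peers))  # scoring at the peer average: 100.0
print(deviation_iq(60, peers))  # one standard deviation above: 115.0
```

Unlike the ratio IQ, this score stays meaningful at any age, because the comparison group is always people of the same age.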
An intelligence test derives its usefulness not only from its provision of a measure on which to compare people, although this is quite often done. It is also possible to test one person repeatedly and thus judge his intellectual development. With a child experiencing learning problems, for instance, it is important to identify the particular areas of weakness so that they can be given extra attention. We should now perhaps ask what exactly an intelligence test measures and to what standards it has to conform. If one eventually wants to say a test does measure intelligence, the standard taken might be that an IQ test should predict school success. This is correct in so far as there should be, in general, a relation between the results of an intelligence test and those at school. But factors such as perseverance and emotional stability also play a role. Moreover, there are social abilities, such as charm, which may show themselves in popularity, and perhaps in a more positive attitude from the teachers. Repeated testing of the IQ may also be of use for people who have brain damage, in order to judge to what extent the intellectual capacities recover.
Intelligence and age
The development of intelligence is influenced by a number of internal and external factors. The Swiss zoologist and psychologist Jean W.F. Piaget (1896-1980) stated that intelligence develops in stages. Until the age of about two years, we speak of the sensorimotor stage, during which the child learns to distinguish between itself and its environment. During the pre-operational stage, which begins at about two years old, the capacity for abstract thinking begins to develop, but the child still considers itself to be the centre of the world. From the age of four the child begins to use numbers and develop conservation concepts. The use of logical laws and rules, and eventually the ability to make hypotheses, develops during the operational stages between the ages of seven and 12 years. An important factor concerning the development of intelligence is the maturation of the brain and the nervous system. A stimulating environment also has a favourable influence. During this period, there may be large fluctuations of IQ. After the age of six, the capacity for intellectual development increases until about the age of 30. During this time there is a gradual increase in the number of synapses between brain cells. To illustrate the significance, it is useful to draw an analogy between the brain and a computer, in so far as the greater the possible number of connections it can make, the more powerful is its capacity and performance.
After this period of growth the IQ usually remains stable for ten years, after which there may be a perceptible decline. The degree to which this occurs – and especially which abilities decrease the most – depends on several factors; for instance, on someone’s health or profession. People who remain healthy and continue doing work that stimulates the intellect will show little decline until about the age of seventy. The first signs of decline are usually related to the memory, especially that of recent events, and, for instance, the speed of calculating.
Hereditary or acquired?
There is now little dispute regarding the process and order of intellectual development. Controversy is still strong, however, over the question of the extent to which genes or environmental factors determine the degree of development, although most scientists agree that at least some aspects of intelligence are inherited.
The most informative studies have been made of identical twins (siblings with the same heredity) who had, because of their circumstances, become separated at birth and reared in different environments – one considered favourable, the other unfavourable. Given two children with the same genes, the child with the better nutrition, the more intellectually stimulating and emotionally secure home, and the more appropriate rewards for academic accomplishment will obtain the higher IQ score. Thus we can arrive at the conclusion that, although genes form the basis for intellectual capacity, it is the environment which allows it to show to its full advantage. Unfortunately, there exists a minority for whom the environment does not have that favourable influence and in whom the intellectual skills are therefore below average.
The less gifted and mentally disabled people also often have physical and emotional problems, which hinder the normal development of intelligence. They may suffer, for instance, from Down’s syndrome or spina bifida. The effect of such a handicap on the intellectual development may differ considerably, from a marginal to a serious hindrance. That does not necessarily mean that children with such disabilities are incapable of learning, but that the learning process progresses more slowly, demanding special skills from their caretakers.
At the other extreme there are intellectually ‘gifted’ people. Professor Terman followed a group of 1,500 gifted children through to adulthood. The study showed that such children were physically taller, bigger and healthier than average. They walked and talked earlier, they were more socially skilled and were more adaptable than normal. Such ‘superiority’ remained throughout adult life, although it was not necessarily passed to their children. Some were also considerably more ‘successful’ in life. However, there was little difference in their respective IQs to account for the variation in general achievement, so the conclusion is either that non-intellectual qualities (not measured by IQ tests) are also important for life success, or that IQ tests do not measure all kinds of intellectual abilities.
Intelligence versus inventiveness
One of the most notable shortcomings of an IQ test is that it tells us very little about an individual’s creativity or inventiveness. A few special tests have been developed with which it should be possible to measure creativity. However, it appears to be very difficult to be specific about the relationship between intelligence and creativity. The average IQ score of children who are considered to be ‘creative’ is lower than that of those who are not. Creativity, therefore, is best seen as a different form of intelligence – a form that cannot be measured by means of IQ tests.
In the end what do we understand by the concept of intelligence? It seems to be composed of a myriad of varied and interacting abilities, dependent on both the genes of the individual and the social and physical environment in which that person has lived. It is certainly a complex subject that cannot, as yet, be wholly categorized or understood. Perhaps the old philosophical conundrum is true after all: one cannot measure something, if that thing is being used to do the measuring.
The ability to learn is perhaps what we generally mean by intelligence. Earthworms and rats certainly have this ability, and some experimenters have even claimed that single-celled amoebae have a rudimentary memory system in which learned events are stored. Exactly where instinctual behaviour ends and learning begins has long been a puzzle. In the seventeenth century the English philosopher, John Locke, wrote: ‘let us suppose the mind to be, as we say, white paper void of all characters, without any ideas.’ He believed that human beings, unlike animals, rely little on instinct or inborn patterns of behaviour, and more on adaptability through their capacity to learn. Learning, he said, is the most fundamental of human activities.
The key to survival
At birth the behaviour of a human baby is very limited, but as the baby grows a staggering complexity of response to the world develops. By the age of seven most children have learned to understand speech and to talk; to read and write a little; the social skills necessary for successful interaction with family and friends; the basis of self-care; and to manipulate simple mechanical objects and, in the case of a bicycle for example, not-so-simple ones. Almost all the basic skills and knowledge required for everyday life are learned in the early years. How does this come about?
Before the time of Locke, people believed human beings were born possessing all such knowledge innately. The Greek philosopher Plato (427-347 BC) coined the notorious ‘learning paradox’: you can learn only what you do not know, but if you do not know it how can you seek to learn it? Today, however, scientists believe that human knowledge is largely acquired through experience. Learning is a relatively permanent change in behaviour that occurs as a result of experience.
Attempts to explain the learning process further have resulted in controversy because of the still-unresolved mystery of ‘mind’ and ‘body’ – the complex interrelation between thought, personality and emotion on the one hand, and brain and nervous system on the other. This mystery has produced a myriad questions which are only partially answered. Are the smile and laughter of an infant instinctive or learned?
How do fledgling birds know where to migrate to when abandoned by their parents? How can the two halves of a severed earthworm successfully negotiate a maze that the whole worm was previously taught to negotiate? Where are learning and memory stored?
In the attempt to explain the phenomenon of learning, two main theories have emerged: the ‘behavioural’ and the ‘cognitive’. The Behaviourists developed a theory of learning by observing how learned behaviour varied with environmental conditions – for example, what stimulus conditions and patterns of reward and punishment lead to the fastest learning with the fewest errors. Behaviourists distinguished between two forms of learning: classical conditioning and operant conditioning. Classical conditioning, which is the most basic form of learning, was studied extensively by the Russian physiologist Ivan P. Pavlov. In his experiments he demonstrated that dogs could be trained to associate the sound of a bell with the taste of food, so that the bell alone produced a salivatory response. In operant conditioning, of which the American B.F. Skinner was the major exponent, an organism learns that some response it makes leads to a particular consequence.
For example, if you feel hot (stimulus) you may just by chance open a window (response), which has the desired effect of cooling you down. If you are able to respond in the same way on future occasions to being hot, you can be said to have learned an effective solution to the problem. The speed and effectiveness of the learning depend on the strength and frequency of the stimulus and the success and availability of the response.
Taking account of perception
Cognitive psychologists, however, argued that only simple forms of learning can be explained in this way. For more complex forms of learning one must take into account perception and understanding. According to this approach, ‘learning’ a poem involves not only being able to repeat back all the words in the right order; it also requires an understanding of the poet’s meaning or message. The celebrated experiments of Wolfgang Köhler (1887-1967) on chimpanzees showed that higher mammals learn through the process of realization – the ‘ah-ha’ reflex we all recognize after successfully and consciously solving a problem. So in addition to stimulus and response, we must acknowledge additional aspects of the learning process: trial and error and, on a higher level of consciousness still, ‘insight learning’ – the resolution of problems through perceiving the relationships between objects and actions.
Learning and memory
From a combined behaviourist-and-cognitive viewpoint, we can say that learning consists of: a change in behaviour based on connecting one action or object with another, to solve a problem or fulfil a need of the individual; memorizing this connection, skill or solution; and the ability to retrieve the connection, skill or solution from memory and execute it without conscious thought or remembrance.
Learning, then, is the act of feeding information into the memory and being able to retrieve it, but the exact nature of this ‘memory’ is another mystery. The capacity of the brain is vast; the problem of ‘remembering’ is a problem of information retrieval, not of storage. Scientists now believe that memory may occur at a sub-microscopic level within the individual brain cell, with each act of ‘memorizing’ taking place as a minute rearrangement of the molecular structure.
Human needs and self-fulfilment
Learning is not always an easy process; it can be painful and demanding. So why do we learn at all? What experiences are most important? Why do some people respond to learning experiences faster and more competently than others? What is the relationship of learning to personality, motivation, emotional and physical development, and social behaviour? Although some learning is self-motivated – for example by the desire to learn to play a musical instrument, or learn a foreign language – much is unconscious, or received from parents or teachers. The attempt to measure and compare the rate at which individuals learn, for example using graphs known as ‘learning curves’, has shown that learning cannot be divorced from such intangible and non-measurable human behaviours as love and enjoyment.
Humanist psychologists such as Abraham H. Maslow (1908-1970) have argued that we must view the goal of learning not simply as the acquisition of skills and techniques, but as the development of the individual in terms of a harmonious inner balance of self-restraint and self-expression, happiness, wisdom, responsibility and self-knowledge. This, the highest form of human existence, is achievable only after the individual has learned to secure physiological needs (food and drink), safety needs (shelter, clothing, protection), social needs (affection), and the need for esteem or respect, in an ascending hierarchy. However, only by recognizing the interdependence of these needs can we assist the learning process. Punishment, fear and misery will usually result in slowness, reluctance or inability to learn, whereas reward, enjoyment and happiness are likely to increase the speed and efficiency of learning.