
Schemas vs. Models

[Photo: Lieve, Muis, Bubbels, Daisy]

What’s in a name? In Shakespeare’s play Romeo and Juliet, Juliet says from her balcony, “A rose by any other name would smell as sweet”, meaning that a person’s essential nature remains unchanged regardless of their name. When it comes to psychology, the answer is: sometimes the name means a lot.

In the last couple of weeks I’ve come across a number of articles and blogs that seem to treat cognitive schemas and mental models as (virtually) synonymous. Though they are integrally related, they are nevertheless different, and while we use both constantly, we use them for different things. The following is my attempt to help people distinguish between the two. It is partially based on a new chapter that Pedro De Bruyckere and I are writing on misconceptions in learning. I hope it clears things up (and sorry that it’s so long!).

A cognitive schema is a mental framework that helps us process, categorise, and interpret information quickly for everyday functioning. It is a generalised knowledge structure based on our past experiences (in the world, in school, etc.) and is used to understand new situations; it influences perception, memory, and problem-solving by allowing us to make sense of the world efficiently. I discussed this in an earlier blog about schemas, chunking, and working memory, which contains a great example of chihuahuas in a schema. It is the cognitive tool we use to simulate reality in our minds.

A mental model, on the other hand, is an internal representation that helps us understand, explain, and predict how the world works. It is a more complex, dynamic representation of how something works in the world and is used for reasoning, decision-making, and predicting outcomes. Mental models are formed through experience, learning, and reflection.

The ‘traditional’ example of a schema is the schema many of us have for ‘restaurant visits’. You expect to be seated by an employee, see a menu, order food, be served, eat, and pay the bill.


In most restaurants that you encounter, this schema helps you navigate the situation without thinking. However, parts of this schema can go awry in a different setting. In the Netherlands, for example, taxes and service are included in the menu price and a tip is given only as something extra for good service; what you see is what you pay. After having lived in the Netherlands for years, I went back to the United States with this schema firmly entrenched in my head, and the first time I went out to eat in a restaurant I was rudely awakened: the menu prices there do not include sales taxes (state, city), and it’s unheard of not to give a tip, which can run up to 25%. What I saw on the menu was definitely not what I paid on the final bill (some 140% of the menu price).

Schemas are nested or layered. Our schema of a dog as a house pet includes expectations such as that dogs live with humans, are generally friendly, bark, wag their tails, can have fleas, and enjoy play. They need daily care—food, water, walks, and attention—and form close bonds with their owners (some say caretakers). This schema allows us to instantly interpret what we see when a dog chases a stick or curls up on a couch.

We don’t perceive these actions as isolated facts but as coherent parts of the familiar dog-as-pet framework. However, our mental framework for dogs is not limited to pets. We also understand dogs within the broader category of mammals. This schema includes knowledge such as: dogs have fur, are warm-blooded (endothermic), give birth to live young, have mammary glands to nurse their offspring, have a backbone (are vertebrates), have three middle ear bones, have lungs for breathing… The mammal schema links dogs to other animals like cats, whales, bats, mice, or humans. It also helps us reason about biological similarities with other mammals, such as how dogs grow, breathe, have puppies, or need sleep.

And at an even more general level, we recognise dogs as animals. This schema captures shared features of all animals: they are multicellular, move, consume food (they are heterotrophic, that is, they cannot make their own food), respond to stimuli, and reproduce. It helps us distinguish living beings from non-living things and informs our reasoning in contexts such as ecology, ethics, or biology.
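
For readers who think in code, here is a toy Python sketch of that nesting (the class and attribute names are my own invention, not a formal model): each layer inherits the expectations of the layer above it and adds its own.

```python
# A toy illustration of nested schemas: each layer inherits the expectations
# of the layer above it and adds its own.

class Animal:
    multicellular = True
    heterotrophic = True        # cannot make its own food
    responds_to_stimuli = True
    reproduces = True

class Mammal(Animal):
    has_fur = True
    warm_blooded = True
    vertebrate = True
    nurses_young = True         # mammary glands

class DogAsPet(Mammal):
    lives_with_humans = True
    generally_friendly = True
    barks = True
    wags_tail = True
    needs_daily_care = True     # food, water, walks, attention

# Activating the "dog as pet" schema brings all the layered expectations with it:
rex = DogAsPet()
print(rex.barks, rex.has_fur, rex.heterotrophic)   # True True True
```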

A helpful, and also traditional example of a mental model is how we understand and use a thermostat. Most people have a simple mental model of how a thermostat works: you set the temperature, and the heating (or cooling) system turns on or off to maintain that temperature. This model helps us anticipate what will happen when we adjust the dial or press a button. If the room is cold and you increase the setting, you expect the heater to start. Even without knowing the technical details (e.g., feedback loops or hysteresis), the mental model supports practical reasoning and prediction.

Mental models are also nested or layered. At the most general level, it might be “The thermostat keeps my house at a comfortable temperature”. Turning the dial up makes your house warmer, turning it down cooler. But beneath this surface-level model are several more specific mental models that are nested within it. One such nested model might be: “The thermostat measures the room temperature and compares it to the set target temperature”. This explains why the system turns on or off seemingly on its own. A further nested model might explain the heating system itself: “When the thermostat signals that the temperature is too low, it activates the boiler or furnace, which heats air or water and distributes it through the house”. And so forth.
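
To make that nested comparator model concrete, here is a minimal sketch of a simple on/off (bang-bang) thermostat; the function name and temperature values are invented for illustration, and real thermostats add hysteresis so they do not switch constantly.

```python
# A minimal sketch of the nested thermostat model: measure the room temperature,
# compare it with the setpoint, and signal the boiler on or off.

def thermostat_step(room_temp: float, setpoint: float) -> bool:
    """One control step: the boiler should be on when the room is below the setpoint."""
    return room_temp < setpoint

setpoint = 21.0
for measured in [18.0, 19.5, 21.2, 20.8]:
    boiler_on = thermostat_step(measured, setpoint)
    print(f"{measured:.1f} degC measured -> boiler {'ON' if boiler_on else 'OFF'}")
```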

Just as with schemas, mental models are often incomplete and fallible. They’re simplifications which are often useful, but not always accurate. People frequently act on flawed models without even realising it. Take the common belief that setting a thermostat or oven to a higher temperature will make the room or food heat up faster. Many assume that if they’re cold and want the room to warm quickly, setting the thermostat to 30°C (86°F) will speed things up, even if they only want it to be 21°C (70°F). But thermostats don’t work like accelerators; they simply signal the system to turn on or off based on the setpoint. Once the system is on, the heating rate stays the same regardless of the target temperature. The belief reflects a faulty mental model in which the thermostat is imagined to control intensity, not just the threshold.
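
A small simulated comparison makes the flaw in the “accelerator” model visible. In this sketch (the heating rate and temperatures are invented values), a 30°C setpoint reaches 21°C in exactly as many steps as a 21°C setpoint; the setpoint only decides when the heater switches off.

```python
# Toy simulation of the "higher setpoint heats faster" misconception.
# The heater adds a fixed 0.5 degC per step whenever it is on (an invented rate);
# the setpoint only sets the threshold at which it switches off.

HEAT_RATE = 0.5   # degC gained per step while the heater is on

def steps_to_reach(comfort_temp: float, setpoint: float, start: float = 15.0) -> int:
    """Count the steps until the room first reaches comfort_temp."""
    temp, steps = start, 0
    while temp < comfort_temp:          # assumes setpoint >= comfort_temp
        if temp < setpoint:             # bang-bang thermostat: heater on below setpoint
            temp += HEAT_RATE
        steps += 1
    return steps

print(steps_to_reach(21.0, setpoint=21.0))   # 12 steps
print(steps_to_reach(21.0, setpoint=30.0))   # also 12 steps, no faster
```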

Just like schemas, this nesting can extend quite far. A mental model of a thermostat might fit within a broader model of a heating system, which in turn is part of a larger model of energy use in a household. In a technical or engineering context, experts may activate deeper, more detailed models involving sensors, relays, or proportional–integral–derivative (PID) control systems. In daily life, however, a basic cause-and-effect understanding is often sufficient.

Closer to student learning, these schemas and models can lead to systematic misinterpretation and problems. A model-based misconception is that “motion requires force”. Many people form a naïve mental model (folk physics; biologically primary learning based on play and trial-and-error) in which motion requires a continuous force, like pushing a shopping cart or throwing/hitting a ball. This leads to the misconception that when the force stops, the object should stop moving. This contradicts Newton’s first law (inertia), which states that objects in motion stay in motion unless acted upon by an external force (like friction). When a thrown ball continues to fly, this naïve model leads students to think that the ball continues to move because something keeps pushing it (a force), even after it’s released. They do not realise that the only forces working on the ball at that moment are air resistance (friction with the air) slowing it down and gravity pulling it down.
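
To see what the correct model implies, here is a toy numerical sketch (all values invented) of a ball just after release: the only forces applied in the loop are gravity and air resistance, there is no continuing push, and yet the ball keeps moving forward.

```python
# Toy simulation (invented values) of a ball just after it leaves the hand.
# The loop applies only gravity and air resistance; nothing keeps pushing the
# ball forward, yet its horizontal motion persists (Newton's first law).

DT = 0.05          # time step in seconds
G = 9.81           # gravitational acceleration, m/s^2
DRAG = 0.1         # simple linear air-resistance coefficient (made up)

x, y = 0.0, 2.0    # position in metres (released 2 m above the ground)
vx, vy = 10.0, 5.0 # velocity at release, m/s

while y > 0.0:                      # until the ball reaches the ground
    ax = -DRAG * vx                 # air resistance only slows existing motion
    ay = -G - DRAG * vy             # gravity plus air resistance; no forward push
    vx += ax * DT
    vy += ay * DT
    x += vx * DT
    y += vy * DT

print(f"The ball lands near x = {x:.1f} m, still moving forward at {vx:.1f} m/s")
```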

A schema-based misconception arises when a child develops a schema that all birds can fly, based on repeated exposure to flying birds in books and daily life. This schema can lead to the misconception that animals like penguins or ostriches are not “real” birds because they do not fly; they do not fit the expected features of the schema.
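
Put as a caricature in code (the names are invented), the child’s schema behaves like a rule that checks only whether the animal flies, so flightless birds get rejected.

```python
# A caricature of the overgeneralised bird schema: the only feature it checks
# is whether the animal flies, so flightless birds get rejected.

def fits_bird_schema(animal: dict) -> bool:
    return animal["flies"]          # "all birds fly" is the whole schema

for animal in [
    {"name": "robin",   "flies": True},
    {"name": "penguin", "flies": False},   # a real bird that cannot fly
    {"name": "ostrich", "flies": False},   # likewise
]:
    verdict = "a 'real' bird" if fits_bird_schema(animal) else "not a 'real' bird"
    print(f"The schema says a {animal['name']} is {verdict}")
```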

Such errors highlight how both schemas and mental models are influenced by surface appearances and analogies from unrelated systems (e.g., mistaking a thermostat for a gas pedal, assuming that because most birds we encounter can fly, unless we live in Antarctica, all birds can fly, or concluding that a bat is a bird because it can fly). These misunderstandings often go unnoticed until challenged, and can persist even in otherwise competent adults. Without corrective feedback or instruction, people may continue to rely on these intuitive but misleading representations.

In short, a mental model-based misconception arises from faulty reasoning about how a system works, while the schema-based misconception arises from overgeneralising patterns in categorisation. Instruction alone—no matter how clear—can fail if the explanation is immediately interpreted through that faulty schema or model.

Both schemas and mental models develop and refine with experience. A child’s schema of a dog may begin with dogs being small, having high-pitched barks, and having long hair, as is the case with all four of my long-haired chihuahuas. This schema continues to grow as the child encounters other dogs being walked in the park that are much bigger, have a deep-throated bark, and have short or almost no hair, like a greyhound. The same is true for a child’s model of how a car moves. It might begin as “the driver pushes a pedal and the car goes”. Over time, this model evolves to include gears, fuel, engines, and eventually electric motors and regenerative braking. Through observation, explanation, and feedback, the model becomes more precise and more powerful. Education plays a key role in this: teaching doesn’t just transfer facts; it reshapes and extends students’ mental models of how the world works.

Here, Piaget’s concepts of assimilation and accommodation are relevant. When we learn that some thermostats use machine learning to anticipate heating needs, we may assimilate this into an existing model, such as the one we have for an oven. But if this new information clashes with how we think thermostats work, we need to accommodate, revising our model to include it. In this way, mental models and cognitive schemas are flexible, dynamic, and deeply tied to learning.


P.S. While looking for figures to use in this blog I typed “Cognitive schema vs. Mental model” into Google. Google’s new AI assistant first came up with this:
