In the wise words of Marshall Goldsmith, "For everything you start, you need to stop something." When we embark on a new journey, we need to throw out some old ballast. One of the biggest burdens we carry around is our own mental model, which shapes our perception of reality and therefore our thoughts, behaviours and actions. When was the last time you did some housecleaning on your own mental model?
How our mental model affects us
Everyone builds a mental model based on at least three assumptions:
- Reality exists
- We form a model of reality through interaction and observation
- Models with predictive capability are better than those without
From that starting point, we build everything we consider "real". Probably the most noteworthy assumption is #2: it implies that during every single second of consciousness, we are shaping our model of reality.
Our mental model of reality has assumed its current shape from the second we were born until today. Each aspect and angle of this model is based on observation, interaction and deduction.
Our choice of action is then determined by the outcome we predict based on our own model.
The problem with our mental model
"All models are wrong, some are more useful than others" - another quote I picked up somewhere.
We do not know reality. We can't. We can only form a "pretty likely model of reality" - and that model only exists in our mind! The shape of our mental model is determined by the interactions we have had and the observations we have made. Since we are neither omniscient nor omnipotent, we have missed some important interactions, failed to make some important observations - or misinterpreted some of the observations we did make.
This means our mental model of reality usually suffers from three key drawbacks:
- Incompleteness
- Inconsistency
- Incongruence
Incompleteness means that there are events beyond our comprehension.
For example: I don't understand why there are black swans in Australia. I have never bothered to learn how this came to be, so I couldn't explain why swans can be white or black, but not green.
Inconsistency means that if we carefully scrutinized everything we know, we would realize that multiple things we individually assume to be "true" can't all be "true" together.
For example: I consider Tim to be a nice person, and I am aware that Tim is not nice to Alice - so which is it? Is Tim nice - or not?
Incongruence means that different people's models of reality may either fail to overlap (I know what you don't) or mismatch (I think this is true, you think it's false).
For example: UKIP supporters think it's good to leave the EU, while EU proponents think that's a terrible idea. Each party drew its conclusion based on a number of assumptions and facts that may be unknown, weighted differently or dismissed by the other party.
Mental model housekeeping
To do some proper housekeeping, we need to be aware of the following:
1. Our mental models are just that - models.
2. We benefit from having a more accurate model.
3. Incongruent models can be aligned through open interaction with other people.
Now, let us discuss some methods for doing this housekeeping:
Aligning concepts
We hold many inconsistent concepts, just like the one above. Once we become aware of where these inconsistencies lie, we can uncover the reasons why we hold these concepts.
Next, we formulate a hypothesis for the conflict, then design an experiment to disprove at least one of the concepts.
It could be that we fail to disprove any of them - in which case we probably haven't dug deeply enough and need a better hypothesis.
It could be that we manage to disprove all of them - in which case we may need to forget everything that led us to either conclusion.
If we disprove all but one of them, the best way forward is to discard the ideas that no longer hold true. Even in this case, what we believe now could still be wrong: we just won't know until we have more information.
How do I align concepts - in practice?
It's quite simple. When I discover that I have conflicting ideas, I mentally rephrase "Tim hates me" and "Tim is a friendly person" into "I assume Tim hates me". Then I ask myself, "Why would Tim hate me?" - and I may go to Tim and be quite upfront: "I feel we don't get along very well." Tim might meet that with an unexpectedly friendly "What can I do so you feel more comfortable?" - and my first assumption is already invalidated. My model is more consistent now.
Pruning loose ends
We are bound by so many concepts that arise seemingly without reason.
For example, Tim said something bad to me yesterday - and now I have the concept "Tim doesn't like me". My concept is not founded on a sufficient amount of evidence.
This concept now binds my interactions with Tim, even though it is merely a loose end in my mental model. The more loose ends I carry around, the less freedom I have in my interactions with my environment.
Through introspection, we might drill into the "why" of picking up this loose end and tying it to our model. But in trying to justify it, we often complicate our model further by adding numerous assumptions without any foundational evidence.
We need to become aware of what our "loose ends" are - and consciously discard such concepts.
This helps us form a more consistent model of reality.
This approach is based on Occam's Razor, the suggestion that "the model relying on the fewest assumptions is often the best".
How do I prune loose ends - in practice?
Tim might actually have said to me, "Dude, you messed that one up." I can now integrate that sentence into my model right away, filling the gaps with unspoken assumptions, one of which may be "Tim doesn't like me". I can also choose to simply say "Yup" and, regardless of whether I agree with Tim or not, refrain from attributing these words to my understanding of Tim's relationship with me.
In retrospect, I may become aware that I hold the concept "Tim hates me" and ask myself, "How much evidence supports this concept?" Unless the evidence is already overwhelming, the easiest thing may be to simply go to Tim and say, "Want to have a chat?", and see whether that chat generates evidence to the contrary.
Probably the hardest way of pruning loose ends is to drop the concept as it pops up. Since our concepts are hardwired in our brain, pruning like this becomes a difficult exercise in psychological intervention: becoming aware of the dubious concept, then redirecting our thoughts in a different direction whenever the concept manifests. This method does not resolve the underlying inconsistency and is therefore unhelpful.
Resolving dissonance
My concepts often don't match your concepts, because neither my experience nor my reasoning process is the same as yours.
The "easy way" to resolve dissonance is war - just get rid the person who doesn't agree with you. Unfortunately, that doesn't mean that your model of reality got any better.
When what we strive for is the best possible model, we need to refine our model based on others' ideas and reasoning.
First, we need to expose ourselves to others' thoughts.
Then, we need to discover where our thoughts mismatch those of others.
Next, we try to uncover which assumptions lead to the mismatch.
Together, we can then form a hypothesis of which assumptions are more likely.
Then, we can begin aligning concepts together, coming up with a shared model that is more congruent.
Resolving dissonance requires two additional key assumptions:
1. It could be that my model is wrong.
2. I can find out enough about other models to integrate a portion into my own model.
How do I resolve dissonance - in practice?
Nothing is easier - and nothing is harder - than this. Just talk. Without bias.
Have an open conversation without a predetermined outcome.
Punching holes
We typically assume that what we know and observe is true. Then, we build new assumptions on that. Very rarely do we spend time trying to disprove what we know.
The Scientific Method is based on the idea that we can't prove anything to be true, but we can prove something to be not true. We accept something as "probably a good explanation" by exclusion, i.e. when every experiment designed to disprove it has failed. So our goal should be to come up with an experiment to prove ourselves wrong.
We can improve our mental model by using this approach to try and punch holes in it.
If we succeed - our model is bad and we can discard the assumptions we just invalidated.
If we don't succeed - it still doesn't mean our model is "right", it only means that it's the best we have for the time being.
How do I punch holes - in practice?
When my model assumes "
Tim is unfriendly", the most effective way to punch holes is creating situations where I am exposed to Tim in settings which minimize the likelihood for him to be unfriendly.
Summary
Frequently clearing out our mental model is very helpful in improving our understanding of the world around us - and our interactions with others.
The exercise of cleaning always requires the following:
1. Being consciously aware of our assumptions.
2. Doing something about them.
3. Never being content with our current understanding.
Simply starting is the best way.