I had a chance to read a psychology study by Justin Kruger and David Dunning, published in 1999 and entitled “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”. The paper won an Ig Nobel Prize in 2000. Here is what the abstract says:
People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability.
I think this is a great exploratory study. It is fascinating to see how people tend to think they are “above average”. Still, I wonder whether culture has an effect on the findings. I would like to see this study replicated in other parts of the world, especially in Asia, where traditional humility tends to restrain “self-confidence”. The results may also depend on tasks, goals, experience, and so on.
Thinking about information and library science, several aspects of this study could be applied to, or at least spark discussion in, the ILS area.
- One of the questions raised in the paper is: when (and how) do people realize they are unskilled? I think this is a good starting point for tracking down the causes of many problems, especially in the information literacy area, and we might benefit from this type of study. Many people may come to this realization when they interact with peers, objects, or systems; it could be either a passive or an active process.
- Conversely, on further thought, I think about people who are overly humble or have low self-efficacy. The paper also shows that those “in the top quartile tend to underestimate their ability and test performance relative to their peers” (p. 1131), which can be explained by a “false consensus effect” (when you did well, you assume others did well too). Many people walk into the library to ask librarians simply to make sure that what they have done is appropriate. So a question such as “when (and how) do people realize they are information-skilled?” also needs some attention. This type of research could contribute to information literacy, and also to the human-information interaction area; for example, help-system design, system guidance, etc.
- One of the most interesting parts (for me) is the “above-average effect”, where one believes that s/he is above average. The paper provides nice examples: business managers tend to think they are better than the typical manager, and football players think they have more “football sense” than their teammates. Is this the case for librarians? Do they think they can perform searches better than their peers? The section “Incompetence and the Failure of Feedback” offers an excellent explanation of why failure goes unrecognized, which, I think, could be used to investigate search failure:
- people seldom receive negative feedback,
- “some tasks and settings preclude people from receiving self-correcting information”,
- even when people do get negative feedback, they still need to accurately understand why the failure occurred (I personally like the observation that sometimes you need luck), and
- “incompetent individuals may be unable to take full advantage of one particular kind of feedback: social comparison.”
- The challenge in replicating this study in ILS is measurement, since in many cases “success” cannot be scientifically defined. The success of information retrieval/discovery is quite subjective, based on personal satisfaction with having enough relevant information within a reasonable period of time. The answer is not always dichotomous: yes or no, right or wrong. Without a measurement, it is harder to compare perceived and actual performance.
- One of the things I learned the most from this paper is the concern about the reliability of self-evaluation studies. I had never realized that I need to be careful when reading papers that rely on self-evaluation as their single method.
It seems like I have a nice framework here. But wait! Am I overly optimistic and holding miscalibrated views? I guess so. :)
[via Improbable Research]