

Students self-assess their knowledge, ability and thinking within many of the active learning modalities. For example, in writing their Muddiest Point, students are actively engaged in metacognition. They are considering which aspects of learning were unclear or confusing and explaining the perceived root of confusion.

An active learning modality that enables course-long self-assessment is called a Knowledge Survey. Our LAMP team has used Knowledge Surveys extensively. In the video below and in the narrative that follows, you will learn how to design and implement these surveys.

Creating a Knowledge Survey for your Course

A Narrative by Rachel

Knowledge surveys are a rich way to monitor your class’s metacognitive growth over the course of a semester. These surveys are generally administered in the first week of the semester and then again in the final week. I took an interest in knowledge surveys about eight years ago when I attended a conference for undergraduate microbiology educators. One of the posters cited Ed Nuhfer’s (2003) work on Knowledge Surveys. The Knowledge Survey reported on in this work was 200 items in length. This length is possible with knowledge surveys that do not ask students to actually answer the question but instead to rate their confidence (self-assessed competence) on a 3-point scale:

  • A (or 2). I can fully address this item now for graded test purposes.

  • B (or 1). I have partial knowledge that permits me to address at least 50% of this item.

  • C (or 0). I am not yet able to address this item adequately for graded test purposes.
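Because the letter ratings map directly onto a 0–2 numeric scale, a student's overall self-assessed competence can be summarized as a simple percentage of the maximum possible score. Below is a minimal sketch of that scoring; the function and variable names are illustrative, not taken from any published instrument:

```python
# Map each confidence rating to its numeric value on the 3-point scale.
RATING_VALUES = {"A": 2, "B": 1, "C": 0}

def self_assessed_competence(ratings):
    """Return mean self-assessed competence as a percentage of the maximum.

    `ratings` is a sequence of "A"/"B"/"C" responses, one per survey item.
    """
    scores = [RATING_VALUES[r] for r in ratings]
    return 100 * sum(scores) / (2 * len(scores))

# A student who rates themselves A, A, B, C on a 4-item survey:
print(self_assessed_competence(["A", "A", "B", "C"]))  # 62.5
```

Administering the survey in week one and again in the final week yields two such percentages per student, making growth easy to see.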

When using an in-depth (i.e. 200-item) knowledge survey, one can ask questions that represent every learning outcome for a semester-long class. In fact, this style of knowledge survey is basically a way of converting your learning outcomes into an interactive ‘game’ for students. It allows students to have a complete preview of what they can learn during the semester and how you expect them to show their knowledge. At the end of the semester, taking the knowledge survey again allows them to appreciate how much they have learned. The questions on the knowledge survey can assess concepts, skills or processes. Questions can be categorized by Bloom level.

As you begin to consider creating a knowledge survey like this, you are likely thinking about how it will promote your own reflective practice. It requires us, as instructors, to carefully outline all of our outcomes and then convert them to questions that students will be able to interpret well enough to assess their own competence. Indeed, one of the great benefits of a knowledge survey is in promoting your own organization and course preparation (Nuhfer 2003).

When I began implementing knowledge surveys within the microbiology program, one of the first questions that my colleagues asked was, “But does confidence (self-assessed competence) really correlate with knowledge?” This question spurred a study in which we asked students both to answer each question and to indicate their confidence (self-assessed competence) in their answer. This resulted in my first publication on knowledge surveys and was the reason that I met Ed! Now, working with Ed, Kali Nicholas and many other phenomenal colleagues, we have repeatedly shown that most individuals are adequate self-assessors and that, collectively, people across demographics are good at self-assessment! It has been an honor to be a part of this knowledge survey team because together we have called into question the famous “Dunning–Kruger Effect,” which labels people as unskilled and unaware. In our most recent publication in the journal Numeracy, we use paired measures of competence and self-assessed competence to help us better understand privilege. You may also enjoy the article by Ed in Improving with Metacognition, in which he discusses using self-assessment to assess higher order thinking.

In our later iterations of knowledge surveys, we used surveys that were much shorter than 200 questions. In some cases, we have tied the knowledge survey concept to concept inventories. The KSSLCI is a 25-question concept inventory that also measures self-assessed competence. We have also found that asking students to answer all questions first and then predict their overall performance is an effective strategy. This is called a global postdicted self-assessment and contrasts with the granular self-assessments in which students assess their competence after answering each item. No matter whether self-assessments are granular or global, we see positive and significant relationships between self-assessed competence and actual competence (Favazzo, Willford, and Watson 2014; Nuhfer et al. 2016a; 2017; Watson et al. 2019).
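The relationships reported in these studies are, at their simplest, correlations between paired scores: each student contributes a self-assessed competence measure and a demonstrated competence measure. A minimal sketch of that computation follows, using made-up scores and a hand-rolled Pearson r; this is not code from any of the cited studies:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired measures for five students:
# self-assessed competence (%) vs. demonstrated competence (%).
self_assessed = [55, 70, 80, 65, 90]
demonstrated = [50, 72, 85, 60, 88]
print(round(pearson_r(self_assessed, demonstrated), 2))  # 0.97
```

A positive r indicates that students who rate themselves as more competent also tend to score higher, which is the pattern the cited papers report at the group level.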

The most recent knowledge survey that I created for my Biological Chemistry course was 35 questions. I decided to administer the knowledge survey in class, as this allowed me to indicate to my students how much I valued it. I also decided to ask only for their confidence ratings and not for full answers to the questions. This permitted them to complete the KS during class. Now begin creating a knowledge survey for your own course. First, determine the style of KS that you would like to employ. The example below will be helpful in this process.

Begin with a learning outcome

In developing a knowledge survey, your first step is to compile your granular learning outcomes. Below is an example learning outcome for a Microbiology class:

Given a pre-party vodcast and a facilitated model building session, students will be able to draw a Gram-positive bacterial cell wall in which the peptidoglycan, teichoic acids and cytoplasmic membrane are accurately depicted and labeled.

Notice that a good learning outcome expresses conditions (what resources the students will be given or generally have). In the above example, the conditions are, “Given a pre-party vodcast and a facilitated model building session”. It also has a verb (what the students will be able to do). In this example, the verb is “draw”. Finally, it has standards (to what level will the students be able to do this?). In this example, the standards are, “in which the peptidoglycan, teichoic acids and cytoplasmic membrane are accurately depicted and labeled”.

Convert the learning outcome into a knowledge survey question

For the outcome above, a knowledge survey question might state:

I can draw a Gram-positive bacterial cell wall and label the peptidoglycan, teichoic acids and cytoplasmic membrane.
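The conversion above follows a mechanical pattern: drop the conditions, keep the verb and standards, and recast the outcome as a first-person “I can…” statement. A small sketch of that pattern is below; the `LearningOutcome` structure and its field names are illustrative, not part of any published instrument:

```python
from dataclasses import dataclass

@dataclass
class LearningOutcome:
    conditions: str  # resources the students will be given
    verb: str        # what students will be able to do
    task: str        # the object of the verb
    standards: str   # to what level they will be able to do it

    def to_survey_item(self):
        """Recast the outcome as a first-person knowledge survey statement.

        The conditions are deliberately omitted: the survey item asks only
        what the student can do, not what resources they were given.
        """
        return f"I can {self.verb} {self.task} {self.standards}."

outcome = LearningOutcome(
    conditions="a pre-party vodcast and a facilitated model building session",
    verb="draw",
    task="a Gram-positive bacterial cell wall",
    standards=("and label the peptidoglycan, teichoic acids "
               "and cytoplasmic membrane"),
)
print(outcome.to_survey_item())
```

Collecting one such statement per granular outcome produces the full item list for the survey.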

Determine the Bloom level

Each knowledge survey question can be categorized by Bloom level, signaled by the sound of the query or the nature of its verb. The examples below pair each level’s characteristic verbs with a sample question:

  • Knowledge: define, list, state, answer who? or what?
    Sample question: List the twenty standard amino acids.

  • Comprehension: recognize, classify, translate, interpret, paraphrase, explain, predict or give an example
    Sample question: Use your own words to explain the following passage: Mycorrhizal mutualism enables nitrogen-fixation.

  • Application: solve, demonstrate, write, draw, calculate or interpret
    Sample question: If a yeast cell in an aerobic culture completely catabolized 4.5 × 10^9 molecules of glucose, determine the maximum number of ATP molecules that could be synthesized via both substrate-level and oxidative phosphorylation.

  • Analysis: compare, contrast, differentiate
    Sample question: Compare HIV and SARS-CoV-2 with respect to route of entry, capsid type, genome, route of transmission and exit strategy.

  • Synthesis: design, construct, develop, build
    Sample question: Design and draw a plasmid that incorporates the lac operon and would allow a researcher to visibly determine whether the genes under control of the operon are being expressed.

  • Evaluation: justify, support, defend
    Sample question: If you were a physician hoping to treat a case of "walking pneumonia" caused by Mycoplasma pneumoniae, which antibiotic would you prescribe? Defend your choice based on bacterial cell wall structure and antibiotic target site.
In this LAMP Coffee & Curriculum Presentation, McKensie Phillips (2019-2020 LAMP Fellow and 2020-2021 LAMP Educator's Learning Community member) describes her research on incorporating knowledge surveys more frequently throughout the semester. McKensie's findings indicate that students grow in their self-assessed competence between post-unit and post-semester knowledge surveys!

References and Resources

Favazzo, Lacey, John D. Willford, and Rachel M. Watson. 2014. “Correlating Student Knowledge and Confidence Using a Graded Knowledge Survey to Assess Student Learning in a General Microbiology Classroom.” Journal of Microbiology & Biology Education 15 (2): 251-258. This paper offers an example of using knowledge surveys at the programmatic level.

Dunning, David. 2011. “The Dunning–Kruger Effect: On Being Ignorant of One’s Own Ignorance.” Advances in Experimental Social Psychology 44: 247-296. The Dunning–Kruger Effect was posited in 1999. The graphs and numerical analyses were shown to be flawed by Nuhfer and others in 2016 and 2017 (see papers below).

Nuhfer, Edward. 2010. Knowledge Surveys. This links to a learning object site that provides a series of brief video tutorials created by Nuhfer for California State Universities' Merlot site. The project was funded as part of a U.S. DOE grant and provides a good introduction for constructing and interpreting knowledge surveys.

Nuhfer, Edward, and Delores Knipp. 2003. “The Knowledge Survey: A Tool for All Reasons.” To Improve the Academy 21: 59-78. This reference ties the employment of knowledge surveys to the promotion of student learning, which was the main inspiration to create the instrument. It gives students a way to be mindfully reflective about the content and their own internal mastery of it, and a way to align affective feelings with cognitive competence. The degree to which affective self-assessments and cognitive test scores correlated and could be used for assessment was not considered relevant at the time.

Nuhfer, Edward, Christopher Cogan, Steven Fleisher, Eric Gaze, and Karl Wirth. 2016 (Nuhfer et al. 2016a). “Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency.” Numeracy 9 (1): Article 4: 1-24. First paper peer-reviewed by mathematicians to question the numeracy in use by psychologists for two decades.

Nuhfer, Edward, Steven Fleisher, Christopher Cogan, Karl Wirth, and Eric Gaze. 2017. “How Random Noise and a Graphical Convention Subverted Behavioral Scientists’ Explanations of Self-Assessment Data: Numeracy Underlies Better Alternatives.” Numeracy 10 (1): Article 4: 1-31. Addresses the consequences of two decades of using flawed mathematics to characterize human behavior.

Overbaugh, Richard C., and Lynn Schultz. n.d. Bloom’s Taxonomy. [Online.] Accessed January 2018.

Watson, Rachel M., Edward Nuhfer, Kali Nicholas Moon, Steven Fleisher, Paul Walter, Karl Wirth, Christopher Cogan, Ami Wangeline, and Eric Gaze. 2019. “Paired Measures of Competence and Confidence Illuminate Impacts of Privilege on College Students.” Numeracy 12 (2): Article 2.
