Last month, I posted Testing Rules, which focused on tests that are conducted as part of software and system development. Coincidentally, I recently found myself engaged in a discussion about whether or not testing has real value in a learning environment. I will not repeat the idiocy I heard, but I will say that the education 'professionals' doing the talking were pretty much clueless about instructional testing, and were mostly regurgitating their juvenile versions of the liberal-feel-good idea that self-esteem is more important than competence—that how people feel about themselves is ultimately more important than their ability to perform a task or demonstrate knowledge. That's utter nonsense, of course[1], but none of those people are actually teaching right now, so the combined negative effects of their collective ignorance are minimal. On the other hand, they are all graduates of typical U.S. education-degree programs, which means that a whole bunch of people just like them are working in schools and colleges all over the country.[2] It's painful to contemplate.

     As for the effects of testing, in 35+ years of taking, developing, or administering almost every kind of skill or knowledge test, I've never known anyone to be harmed by a test, nor to suffer overmuch at failing one. In fact, quite the opposite has been true: Passing a test almost always demonstrates competence, which always bolsters self-esteem and confidence, while self-esteem (by itself) cannot ever make a person more competent. Conversely, incompetence engenders fear and loathing both in and of the inept person, and no amount of feel-good affirmation will change that.

     The difficulty, of course, is knowing what kind of test to use and to what end. It's fairly clear to me, but I have been at this a while, so once again I've decided to post an updated version of something I wrote in 2002 as a learning aid of sorts[3]. Like Testing Rules, this is an aggregation of testing-related rules and definitions, though it focuses on instructional testing instead of software and system tests. For many people, this stuff will not be interesting or useful, but if you are an instructional designer, a teacher-trainer, an HR specialist, or a supervisor, you need to be able to prove people can do what they have been taught or claim to be able to do, and aside from credible third-party testimony, a test of some kind is usually the only way to do that.
 


Instructional Testing
 

     Most educational activities conclude with or include an assessment to determine whether and to what extent the goals of the activity have been achieved. Most assessments take the form of a test (vice observation or analysis) in which the student is required to demonstrate his or her skill, knowledge, intelligence, or ability. Typically, tests comprise a series of questions, problems, or physical stimulations designed to elicit certain responses; the degree to which a person responds correctly is the measure of success or failure. The type, timing, and format of the test given depend on the educational venue, the type of material being taught, and the educational objectives.

Rules of [Instructional] Testing

There are really only two 'rules' that apply primarily to instructional testing. They are:

1. Instruction is typically developed in a linear way, i.e., overall course objectives are divided into instructional objectives organized into phases, modules, and lessons. Test items are developed after the instructional objectives are formalized, and before the lesson plans. This ensures that lessons focus on what's important and that time and resources are put to their best use.

2. Whatever is not going to be tested should not be taught. This doesn't mean every single thing taught has to be tested, nor that instruction must be boring. It means that instructional time is limited, so everything taught must support the learning objectives. Instructors who digress without purpose or try to teach everything they know about a subject do students a disservice. Likewise, instruction that is not linked to specific objectives is often unfocused and confusing. Essentially, if something isn't worth testing, the time should be spent on things that are.


Test Terms

•  Criterion-Referenced Test. A test that measures the extent to which examinees perform in relation to specific established standards; the performance of other examinees is of no consequence. See Norm-Referenced Test.

•  Norm-Referenced Test. A test that classifies students in relation to the performance of other students. A norm-referenced test is "designed to highlight achievement differences between and among students to produce a dependable rank order of students across a continuum of achievement from high achievers to low achievers." (Stiggins, 1994)

•  Objective Test. A test which is scored based on absolute standards, with no latitude given for interpretation, intention, test item criticality, etc.

•  Subjective Test. A test which is scored based on relative standards, with latitude given to the scorer to account for interpretation, intention, test item criticality, etc.

•  Test Compromise. The unauthorized disclosure of a test or of test items, with the likelihood that future examinees will benefit from the disclosure, thereby distorting the results of the test.

•  Test Fidelity. The degree to which a test resembles the actual task being evaluated. The greater the resemblance, the higher the test fidelity. Performance tests using real equipment in actual operating conditions have the highest fidelity. Some people now refer to this as authenticity.

•  Test Constraints. Limitations on time, money, personnel, facilities, or other resources available to use for testing.

•  Test Item. A question, problem, or physical stimulation designed to determine the truth or degree of an individual’s knowledge, skill, ability, or attitude.

•  Test Item Analysis. The process of evaluating a test item to determine its relative difficulty value, its correlation with some criterion of measurement, and the extent to which it correctly categorizes examinees.

•  Test Item Identifier. A unique combination of numbers, letters, or special characters that designate a specific test item.

•  Test Reliability. The degree to which a test or test item gives consistent results.

•  Test Security. The process of ensuring that tests are not compromised.

•  Test Validity. The extent to which a test measures what it was designed to measure.

•  Testing Out. The process of allowing a student to take a course test (phase, lesson, etc.) to determine if he or she must attend that course or class. A student who passes this test is given full credit for completion of the lessons covered by the test. This is not a pretest.
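To make a few of these terms concrete—item analysis and test reliability in particular—here is a small sketch in Python. The response data are invented, and the measures shown (the difficulty index, the upper-lower discrimination index, and the KR-20 internal-consistency coefficient) are standard classical-test-theory calculations, not anything from the original paper; take it as an illustration of the kind of arithmetic a test item analysis involves, not as a complete treatment.

```python
# Classical item analysis and KR-20 reliability for a small objective test.
# The response matrix below is invented sample data:
# rows = examinees, columns = items; 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1],
]

n_items = len(responses[0])
n_people = len(responses)
totals = [sum(row) for row in responses]

# Item difficulty: the proportion of examinees answering each item correctly.
difficulty = [sum(row[i] for row in responses) / n_people for i in range(n_items)]

# Discrimination (upper-lower index): how much better the top-scoring half
# of examinees did on each item than the bottom-scoring half.
order = sorted(range(n_people), key=lambda k: totals[k])
half = n_people // 2
low, high = order[:half], order[-half:]
discrimination = [
    (sum(responses[k][i] for k in high) - sum(responses[k][i] for k in low)) / half
    for i in range(n_items)
]

# KR-20: internal-consistency reliability for dichotomously scored items
# (population variance of total scores is used here).
mean_total = sum(totals) / n_people
var_total = sum((t - mean_total) ** 2 for t in totals) / n_people
sum_pq = sum(p * (1 - p) for p in difficulty)
kr20 = (n_items / (n_items - 1)) * (1 - sum_pq / var_total)

print("difficulty:    ", [round(p, 2) for p in difficulty])
print("discrimination:", [round(d, 2) for d in discrimination])
print("KR-20:         ", round(kr20, 2))
```

In this sample, item 4 stands out: almost no one got it right (difficulty near 0.17), which is exactly the sort of thing an item analysis is meant to flag for review.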


Test Types by Purpose

•  Achievement Test. A test that measures the extent to which an individual has mastered the specific knowledge or skills taught during a preceding period of instruction.

•  Aptitude Test. A test that measures a person's inherent ability to perform a particular type of task in the given environment. The goal of an aptitude test is to determine in advance whether or not a person will be able to successfully complete a training course or succeed in a career field.

•  Check-on-Learning. A check-on-learning is not a test, per se, but a short unscored quiz given at the end of each major teaching point to assess student progress.

•  Comparative Test. A test used to determine how an individual or a group performs in comparison to other individuals or groups. Comparative testing is usually conducted to measure a test's validity or reliability; thus, for example, several iterations of an aptitude test might be compared.

•  Diagnostic Test. A test that measures performance against a criterion, used specifically to identify areas of weakness or strength in an individual's knowledge or skills.

•  End-of-Course Test. A test administered prior to graduation to determine if students can perform all tasks they were taught during the course, or can demonstrate a satisfactory mastery of the material covered. Graduation is typically linked to a passing score on this test.

•  Entry Skills Test. A test that determines if a student possesses knowledge or skills needed as a prerequisite to new instruction.

•  Field Test. A field test is a 'tryout' of a lesson or course on a representative sample of the target student population. A field test is conducted to gather data about the effectiveness of instruction for the purpose of refining and validating the course.

•  Knowledge Test. A test that measures an individual's degree of knowledge about a specific topic or subject. A knowledge test is similar to an achievement test, but is used to determine base-levels of understanding without regard to how or when the knowledge was obtained.

•  Performance Test. A test that measures a student's psychomotor and cognitive capabilities in terms of actual performance. Although similar to a proficiency test, a performance test is usually understood to require a skills demonstration in conditions substantially similar to actual operational conditions. Performance tests are typically objectively scored, criterion-referenced tests. See Proficiency Test.

•  Phase Test. A test given at the end of each major phase of a course to determine if a student may advance to the next phase or level of the course. A phase test is usually employed during very long courses as an interim, or mid-point, 'end-of-course' test. For example, in a 47-week course, a phase test might be given every 6 weeks.

•  Post-Test. A 'data-collection' test administered upon completion of a course or a unit of instruction to determine the extent to which students can perform the tasks they were taught, or can demonstrate satisfactory mastery of the material covered. The major difference between this and an end-of-course test is that graduation is NOT linked to a passing score on this test. Typically, post-test results are compared with pretest results to assess the extent of student improvement and to identify areas of weakness or strength in the course. See Pretest.

•  Pretest. A 'data-collection' test administered prior to the start of a course or a unit of instruction to determine the extent to which students can perform the tasks they will be taught, or can demonstrate mastery of the material that will be covered. The major difference between this and an entry skills test or an aptitude test is that acceptance into the course is NOT linked to a passing score on this test. Typically, pretest results are compared with post-test results to identify areas of weakness and strength in the course. See Post-test.

•  Proficiency Test. A test that measures a student's psychomotor and cognitive capabilities in terms of a job. Although similar to a performance test, a proficiency test is understood to be a comprehensive procedure used to determine a student's ability to do a job. Proficiency tests are typically subjectively scored, criterion-referenced tests. See Performance Test.

•  Progress Test. A scored test administered during a course to assess student progress. In mid-length courses, a progress test is sometimes used as a phase test to determine if a student should advance to the next phase or level. Scores are recorded for comparison and analysis. See Phase Test and Check-on-Learning.

•  Qualification Test. Any test used to certify that an individual is able to perform a specific set of tasks in accordance with an established minimum standard.
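The pretest and post-test entries above both describe comparing results to find areas of weakness or strength in a course. A minimal sketch of that comparison, with invented objective names and scores (the normalized-gain measure is a common convention, not something from the original paper):

```python
# Comparing pretest and post-test results by objective to locate course
# strengths and weaknesses. Objective names and scores are invented data;
# scores are the fraction of items answered correctly.
pretest  = {"Objective 1": 0.45, "Objective 2": 0.60, "Objective 3": 0.30}
posttest = {"Objective 1": 0.90, "Objective 2": 0.70, "Objective 3": 0.55}

for obj in pretest:
    gain = posttest[obj] - pretest[obj]
    # Normalized gain: improvement as a share of the improvement possible,
    # so objectives with high pretest scores aren't unfairly penalized.
    norm_gain = gain / (1.0 - pretest[obj])
    print(f"{obj}: gain {gain:+.2f}, normalized gain {norm_gain:.2f}")
```

An objective with a low normalized gain (Objective 2 here) is a candidate for revised instruction, even though its raw post-test score looks respectable.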


Test Types by Format

•  Heuristic Test. A test that presents problem-solving simulations which emulate the actual operating environment. Also called discovery tests, these tests present the student with stimulus information that is inadequate, incomplete, ambiguous, or irrelevant to the simulated environment; the student is required to synthesize information and apply knowledge to solve the problems presented.

•  Multiple-Choice Test. A test in which the examinee is required to select the most correct answer(s) from among several choices for each test item.

•  Nonverbal Test. A test that requires little or no speaking, reading, or understanding of language by the examinee, with regard to either directions or responses. Directions and responses may be given pictorially or in pantomime. Also called a 'non-language' test.

•  Oral Test. Any test involving the use of spoken words to give directions, make observations, ask questions, or respond. Also called a 'verbal' test.

•  Power Test. A test in which items are usually arranged in order of increasing difficulty and in which examinees are given all the time they need to complete as many items as they possibly can. A power test measures extent of knowledge and skill mastery.

•  Speed Test. A test in which the time limit is intentionally set so that almost no one can finish all the items or tasks in the test. A speed test measures memory and manipulation skills.

•  True-False Test. A test in which the examinee is required to select the correct answer from two opposite responses, e.g., True/False, Yes/No, or On/Off.

•  Written Test. A test in which an individual writes answers to test item questions.

 

Notes:

1. Regardless of what anyone believes or cares about, the United States is absolutely engaged in ruthless cutthroat competition with all other nations for unfettered access to resources and wealth. Many Americans may not realize it or even believe it when they are told, but the leaders and people of other nations know it, especially in developing countries like China, Korea, Indonesia, India, Nigeria, Brazil, and so on. The U.S. is a powerful and world-leading nation, and it will remain so for some time, but other nations are nipping at our heels and are even overtaking us in some areas, e.g., in the development of fast supercomputers (China, Nov 2010).

   If we want to remain competitive and at least on a par with other nations (let alone retain our leadership role), it is imperative that we educate our children (and, by the way, make more of them!) so they can compete and win in a global arena. We must also challenge them, repeatedly and at progressively increasing levels of difficulty. Children whose feelings are hurt because they fail a test, and who are then allowed to wallow in self-indulgent pity parties, are going to be mighty surprised when they find they can't get decent jobs anywhere in the world because they are undereducated. Adults who do not insist on hard subjects and challenging tests for their children are more worried about making themselves feel good than they are about helping their children, and they are consigning their children and their country to second-rate status.

2. I have an MA Ed, and I've worked extensively with public and private school teachers, university instructors, and professional trainers. In my experience, most people who teach are good at it; however, not all of them are by a long shot, and more important, many teachers (regardless of their abilities) do not like giving tests. Were it left up to them, I suspect most would eschew testing altogether. Why? Because teaching doesn't require or guarantee learning, and many teachers do not want to be held to account for students' learning—they figure that if they teach, they've done their jobs and that's all that matters. I know this doesn't apply to all teachers, but fortunately, the decision has been taken out of their hands in most states now and we're all better off for it.

3. Instructional Testing was originally published by me on 10 Sep 02, as an intraoffice information paper. As with Testing Rules, some of it was my work, and all of it was edited by me in some way, but most of this came from a number of sources, none of which I could identify even when I wrote it. Also, as with Testing Rules, while I retain whatever rights are legitimately mine (especially for the new material), I can only claim editorial credit for most of Instructional Testing.
  

SangerM