After three years of development, two years of field testing, and countless hours of creative innovation and hard work, Carrick Enterprises is proud to announce the availability of the Threshold Achievement Test for Information Literacy!

The Threshold Achievement Test for Information Literacy (TATIL) measures student knowledge and dispositions regarding information literacy. The test is inspired by the Association of College and Research Libraries' Framework for Information Literacy for Higher Education and by expectations set by the nation's accrediting agencies. TATIL offers librarians and other educators a better understanding of the information literacy capabilities of their students. These insights inform instructors of improvement areas, guide course instruction, affirm growth following instruction, and prepare students to be successful in learning and life. Each test is made up of a combination of knowledge items and disposition items.

About the Test

The Threshold Achievement Test assesses students' ability to recall and apply their knowledge, as well as their metacognition about the core information literacy dispositions that underlie their behaviors. Through this combination of knowledge and dispositional assessment, TATIL offers a unique and valuable measure of the complexities of information literacy.

The knowledge items in TATIL are based on information literacy outcomes and performance indicators created by the test developers and advisory board of librarians and other educators. Knowledge items assess an array of cognitive processes that college students develop as they transition from pre-college to college ready to research ready. Mental behaviors tested include understanding (facts, concepts, principles, procedures), problem solving (problem identification, problem definition, analysis, solution proposal), and critical thinking (evaluating, predicting, deductive and inductive thinking). The items are presented in a variety of structured response formats to assess students' information literacy knowledge, skills, and abilities.

Dispositions are at the heart of a student's temperament and play an important role in learning transfer. Dispositions constitute affective facets of information literacy and are essential to students' information literacy outcomes. They indicate students' willingness to consistently apply the skills they have learned in one setting to novel problems in new settings. While some dispositions can be seen as natural tendencies, they may also be cultivated over time through intentionally designed instruction and through exposure to tacit expectations for student behavior.

To address dispositions in the test, we use scenario-based problem solving items. Students are presented with a scenario describing an ill-defined information literacy challenge related to the content of the module. Following the scenario, students are presented with strategies for addressing the challenge. Students evaluate the usefulness of each strategy.

About the Reports

Threshold Achievement Test reports provide test managers with detailed and robust analyses of student performance. Sections include:

  • Summary results for knowledge and disposition dimensions
  • Detailed results for each knowledge outcome
  • Performance indicator rankings that identify students' relative strengths and weaknesses
  • Performance level indicators ranging from conditionally ready to college ready to research ready
  • Disposition results with descriptions that align with students' scores
  • Breakouts for subgroups such as first year students or transfer students
  • Cross-institutional comparisons with peer institutions and other institutional groupings
  • Suggestions for targeted readings that can assist in following up on the results

Test managers also receive a set of supporting files:

  • Test Item document. A PDF document with a description of each test item.
  • Raw data file. Contains all of the scores presented in the report.
  • Student data file. Contains scores for every student.
  • Student data codebook. Describes the demographic options that were configured for the test.
  • Student Report zip file. Contains a directory of PDF documents with an analysis of each student's performance.

Test managers have the option to present students with personalized reports upon completing the test. As soon as the student finishes the test, a dynamically generated report is displayed describing the student's performance and offering recommendations for improvement. The report content is connected directly with the knowledge outcomes, performance indicators, and dispositions of the module being tested.

About the Modules

Two TATIL modules are available now! Two more will come online in 2018. Read brief descriptions below and click on the module titles to see the outcomes, performance indicators, and dispositions. You may also download a PDF document with descriptions for all four modules.

Evaluating Process & Authority (the first module, available now!) focuses on the process of information creation and the constructed and contextual nature of source authority. It assesses how students understand and value authority, how they define their role in evaluating sources, and how they perceive the relative value of different types of sources for common academic needs.

Strategic Searching (the second module, also available now!) focuses on the process of planning, evaluating, and revising searches during strategic exploration. It tests students' ability to recall and apply their knowledge of searching and it tests their metacognition about a core information literacy disposition that underlies their searching behaviors.

Research & Scholarship, the third module, will be available in 2018. It addresses students' ability to apply the research process to their college work in order to participate in the scholarly conversation, and it assesses how students understand and value their role within the scholarly community.

The Value of Information (fourth module, coming in 2018) assesses how students understand and value their role within the information ecosystem. It focuses on the norms of academic information creation and the factors that affect access to information. It tests students' ability to recall and apply their knowledge of information rights and responsibilities and it tests their metacognition about core information literacy dispositions that underlie their behaviors.

Learn More

The Threshold Achievement Test for Information Literacy (TATIL) is a unique and valuable tool to add to your assessment program. Explore the Threshold Achievement Test website to learn more about the test, the cost and requirements for administering the finished modules, and how to participate in field testing for the remaining two modules.

Last week I was fortunate to get to attend and present at LOEX 2017, in Lexington, KY.  I’m excited to have joined the LOEX Board of Trustees this year and it was great to see familiar faces and meet new, energized librarians, too.

I presented a one-hour workshop where I walked participants through a comparison of two common types of results reports from large-scale assessments.  We looked at an example of a rubric-based assessment report and a report from the Evaluating Process and Authority module of the Threshold Achievement Test.  We compared them on the criteria of timeliness, specificity, and actionability, and found that rubric results reports from large-scale assessments often lack the specificity that makes it possible to use assessment results to make plans for instructional improvement.  The TATIL results report, on the other hand, offered many ways to identify areas for improvement and to inform conversations about next steps.  Several librarians from institutions that are committed to using rubrics for large-scale assessment said at the end of the session that the decision between rubrics and tests now seemed more complicated than it had before.  Another librarian commented that rubrics seem like a good fit for assessing outcomes in a course, but perhaps are less useful for assessing outcomes across a program or a whole institution.  It was a rich conversation that also highlighted some confusing elements in the TATIL results report that we are looking forward to addressing in the next revision.

Overall, I came away from LOEX feeling excited about the future of instruction in the IL Framework era.  While the Framework remains an enigma for some of us, presenters at LOEX this year found many ways to make practical, useful connections between their work and the five frames.

Carrick Enterprises has begun to modernize the Project SAILS web site, administrator tools, and reports. This work will continue through the 2017-2018 academic year and will be put into production June 15, 2018. There will be no disruption of service during this work and all existing information will be migrated to the new system.

What’s new:

Peer institution scoring

You will select the tests from your peer institutions to include as a cross-institutional score. This score will appear in all score reporting except for your Custom Demographic Questions. You will continue to see cross-institutional scores by institution type; however, you will now be able to include multiple institution types in these scores.

On-demand Cohort report creation

Cohort reports will no longer be restricted to being created at the end of December and the beginning of June. Once you have stopped testing, you will be able to configure your report for production. As long as all of the tests you have included in your peer institution list are completed, your report will be generated overnight and available to you the following day. We must still receive your payment before you can download your report.

Student reports for Individual Scores

You will have the option to display an individualized analysis of your students' performance when they complete the test. They will have the option to download this report as a PDF document. If you choose not to display this report to your students, you will still receive the reports in your report download.

Detailed narrative report for Individual Scores

In addition to student data, you will receive a narrative report analyzing your students’ performance on the test. This report is something that can be shared with your faculty collaborators and your library administration.

Student activity monitoring

You will be able to monitor in real time how far along your students are as they take the test. You will see the Student Identifier (which will be called the Student Key), the start time, and the number of the page they are currently answering. You will still be able to download a list of Student Keys that have completed the test. This list will continue to include the start time, end time, and number of seconds elapsed for each student.


Dominique Turnbow is the Instructional Design Coordinator at University of California, San Diego Library, and she’s been a TATIL Board member since the beginning of the project in 2014. Dominique has been instrumental in drafting and revising outcomes and performance indicators as well as writing test items. Recently Dominique and her colleague at the University of Oregon, Annie Zeidman-Karpinski, published an article titled “Don’t Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment that Matters” in Communications in Information Literacy. The article introduces Kirkpatrick’s Model of the four levels of assessment, a foundational model in the field of instructional design that has not yet been widely used by librarians.  

The article opens with advice about writing learning outcomes using the ABCD Model. Through our collaboration with Dominique, the ABCD Model provided us with a useful structure when we were developing the performance indicators for the TATIL modules. It is a set of elements to consider when writing outcomes and indicators; the acronym stands for Audience (of learners), Behavior (expected after the intervention), Condition (under which the learners will demonstrate the behavior), and Degree (to which the learners will perform the behavior). This structure helped us to write clear and unambiguous indicators that we used to create effective test questions.

Kirkpatrick’s Model of the four levels of assessment is another useful tool for ensuring that we are operating with a shared understanding of the goals and purpose of our assessments. Dominique and Annie make a strong case for focusing classroom assessments of students’ learning during library instruction on the first two levels: Reaction and Learning. The question to ask at the first level is “How satisfied are learners with the lesson?” The question to ask at the second level is “What have learners learned?” Dominique and Annie offer examples of outcomes statements and assessment instruments at both of these levels, making their article of great practical use to all librarians who teach.

They go on to explain that the third and fourth levels of assessment, according to Kirkpatrick’s Model, are Behavior and Results. Behavior includes what learners can apply in practice. The Results level poses the question “Are learners information literate as a result of their learning and behavior?” As Dominique and Annie point out in their article, this is what “most instructors want to know” because the evidence would support our argument that “an instruction program and our teaching efforts are producing a solid return on investment of time, energy, and resources” (2016, 155). Unfortunately, as Dominique and Annie go on to explain, this level of insight into students’ learning is not possible after one or two instruction sessions.  

Determining whether students are information literate requires a comprehensive assessment following years of students' experiences learning and applying information literacy skills and concepts. In addition to the projects at Carleton College and the University of Washington that Dominique and Annie highlight in their article, Dominique also sees information literacy tests like TATIL and SAILS as key tools for assessing the results of students' exposure to information literacy throughout college. Having the right tools to achieve your assessment goals increases the power of your claims about the impact and value of your instruction, while reducing your workload by ensuring you're focused on the right level of assessment.

If you’re attending ACRL, don’t miss Dominique’s contributed paper on the benefits of creating an instructional design team to meet the needs of a large academic library. She’s presenting with Amanda Roth at 4pm on Thursday, March 24.

We’re excited that this semester all four modules are available for field testing.  Modules 1 and 2 now offer students feedback when they finish the tests.  Modules 3 and 4, still in the first phase of field testing, do not yet provide immediate feedback to students.  But that doesn’t mean that students shouldn’t reflect on their experience taking the test.  When I have students take Module 3: Research & Scholarship and Module 4: The Value of Information, I create an online survey they can complete as soon as they’ve finished the last question.  Setting up the test through www.thresholdachievement.com makes that easy by providing an option for directing students to a URL at the end of the test.  You can view the brief survey that I give students.

When asking for students’ reflections on their experiences, whether for the TATIL modules or for any instructional interaction, I always rely on critical incident questionnaires as my starting point.  Stephen Brookfield, a transformative educator who is an expert in adult learning, has been promoting critical incident questionnaires since the 1990s.  Building upon Dr. Brookfield’s work, faculty have used the instrument to survey students about their experiences in face-to-face classes as well as online.  Read more about his work and the work of his colleagues here: http://www.stephenbrookfield.com/ciq/

If you would prefer to collect information about students’ perceptions of the test content rather than or in addition to their experience taking the test, consider survey questions like:

  • Where did you learn the skills and knowledge that you used on this test?
  • What do you think you should practice doing in order to improve your performance on this test in the future?
  • What were you asked about on this test that surprised you?

By surveying students at the end of the test, you lay the groundwork for class discussions about the challenges the test presented, areas of consensus among your students, and misconceptions that you may want to address.  The test gives students a chance to focus on their information literacy knowledge and beliefs, which they do not always have the time or structure to do.  Writing briefly about their experience taking the test while it is still fresh in their mind will help students to identify the insights they have gained about their information literacy through the process of engaging with the test.