Breaking the Mold:
Technology-Based Science Assessment in the 21st Century

Edys S. Quellmalz and Geneva D. Haertel

Center for Technology in Learning
SRI International

 

The pressure to prepare students for the 21st century is accompanied by demands from educational stakeholders that students learn to use their minds well and demonstrate competence in challenging subject matter (National Education Goals Panel, 1997). To help achieve these student outcomes, educators have drawn on research from cognitive science and begun to favor classrooms that support students in self-directed learning, nurture in-depth reasoning, facilitate discourse among students and teachers, and encourage students to create products that demonstrate their attainments. Based on the scholarship of cognitive psychologists, such as Vygotsky (1978), Brown (1994), and Resnick and Resnick (1991), these instructional strategies are aligned with constructivist philosophy and pedagogy and are present in many reform efforts. National standards, too, formulate goals embodying student achievement of deep conceptual understanding and effective problem solving and critical thinking. Goals for all students to master challenging content are articulated in the subject matter standards prepared by national professional organizations and are reinforced by calls for tests designed to measure how well students achieve these new standards. Simultaneously, practitioners involved in educational reforms cite the need for credible and appropriate methods to assess student growth on world-class standards and to document the accomplishments of innovative programs.

During the past decade, technology and education have formed a lively partnership that is likely to reform the way learning occurs in our nation's schools. The Panel on Educational Technology of the President's Committee of Advisors on Science and Technology (1997, p. 33) acknowledges this possibility as follows: "the real promise of technology in education lies in its potential to facilitate fundamental, qualitative changes in the nature of teaching and learning." Recent research increasingly supports the proposition that computers and communication resources can move teachers from emphasizing drill and practice to using pedagogies that are more challenging and lead to deeper understandings on the part of students (Becker & Ravitz, in press; Means & Olson, 1997; Pea, 1997). In particular, technology is seen as a strong support to facilitate the use of constructivist reforms in day-to-day instruction. Technology can transform learning environments so that instruction is highly interactive, intensive, rich in content, eclectic in the psychological processes elicited, and customized to a student's needs and interests. Technology can alter the way students learn, the way teachers teach, and the way we assess what students know. One way that technology can support constructivist practices is by enhancing the range of student outcomes and testing methods employed in traditional science assessments.

The changes called for by educational reformers require an amalgam of powerful strategies to transform learning environments to better support student growth. No single innovation can accomplish such pervasive change, but, in recent years, research has documented technology's transformative power. It is beyond the scope of this article to describe the many ways that technology has been shown to affect educational practice. Instead, this article will focus solely on how technology is changing the world of educational testing.

Currently, SRI International is engaged in several research projects that demonstrate how educational assessment in the 21st century can benefit from the affordances of technology. These technology-based projects include an on-line science performance assessment resource library, Performance Assessment Links in Science (PALS); evaluation and classroom assessment projects for the GLOBE program; 21st Century Assessment; and Educational Software Components of Tomorrow (ESCOT). Building on our experience with these projects and a synthesis of research results, this article (1) describes the need for performance-based, technology-supported student assessment; (2) identifies technology tools embedded in project-based science inquiry curricula that hold potential for assessment; (3) discusses research and development considerations in adapting these technologies for assessment purposes; and (4) envisions a new era of flexible assessment forms.

Need for Performance-Based, Technology-Supported Student Assessment

Status of Standards-Based Student Assessments.

Key components of the education reform agenda include high achievement goals for all students, improvement of all parts of the educational system, and coherent policies and practices that reinforce one another (Fuhrman, 1993; Smith & O'Day, 1991). Although education reforms are taking place in the setting of standards and design of professional development, appropriate forms of student assessment are in short supply. Two surveys of states participating in Statewide Systemic Initiatives (SSIs) supported by NSF revealed that only a few states had assessment programs appropriate for testing their science and mathematics reforms (Laguarda, 1998). Studies of school-based reform indicate that local reform efforts are similarly disadvantaged (Quellmalz, Shields, & Knapp, 1995). Standardized, multiple-choice tests are widely criticized as inadequate to measure the critical-thinking and problem-solving strategies targeted in reform programs.

Limitations of Available Student Assessments.

Traditional, on-demand tests still favor breadth of content coverage over depth of reasoning, and it is generally acknowledged that most large-scale assessments are not aligned with reform goals and are too limited to capture the deep conceptual understandings at the heart of education reform. Large-scale assessments tend not to tap the expanded repertoire of student outcomes advocated by reform efforts, such as deep subject matter understanding, inquiry strategies, communication, metacognitive/self-monitoring strategies, and collaboration. Few large-scale assessments, even performance assessments, probe in detail the fundamental ways that individuals process and use information in tasks that require extended lines of reasoning (Baxter, Elder, & Glaser, 1996; Quellmalz, 1984). Metacognitive strategies - the self-conscious, deliberate ways that skilled individuals deploy and monitor their reasoning strategies as problems are initiated, attempted, solved, and reflected on - are not tested in large-scale assessments. Some forms of collaboration are embedded in a few performance assessments but are not directly measured. Moreover, as schools invest heavily in technology, they seek measures of the impacts of technologies on student learning and of their students' progress in the fourth literacy: technology literacy. Technology-supported assessments can offer students alternative methods for engaging in performance tasks and for responding to assessment tasks.

The limitations of most currently available assessments are highlighted in analyses revealing their weak alignments with the National Science Education Standards (NSES) and the National Council of Teachers of Mathematics (NCTM) standards. Most notably, many of the NSES Content Standards for Science as Inquiry, such as formulating scientific explanations or communicating scientific understanding, simply cannot be measured by using a multiple-choice format with one correct answer. In the physical sciences, students' understanding of the ways that heat, light, mechanical motion, and electricity behave during nuclear reactions could be readily assessed in a computer simulation, but depicting the complexity of such dynamic interactions is too difficult for a paper-and-pencil, multiple-choice format and too dangerous for hands-on experiments. Likewise, in the life sciences, understanding the multiple ways that cells interact to sustain life, or the passage of hereditary information through processes such as meiosis, gene alteration, or crossover, can be depicted best by using dynamic models that demonstrate the effects of changes at one biological level (e.g., DNA, chromosome) on other levels. Again, technology-based assessment tasks that take advantage of simulations developed for curriculum programs are well suited to assess knowledge of these highly interrelated systems. Similarly, knowledge of ecosystems, the effects of the rock cycle, patterns of atmospheric movement, and the impacts of natural and biological hazards on species can be more appropriately measured by using technology-based assessments. Such assessments can represent natural or man-made phenomena, systems, substances, or tools that are too large, too small, too dynamic, too complex, or too dangerous to be adequately represented in a paper-and-pencil test or a performance test format.
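To make this affordance concrete, a simulation-based item can run a dynamic model and score a student's prediction against the model's behavior. The minimal sketch below (in Python; the model, parameter values, and scoring categories are our own invented illustration, not drawn from any published assessment) steps a simple predator-prey system and rates a student's qualitative prediction about the prey population. An item like this can vary the parameters to probe whether students grasp the underlying causal relationships rather than a memorized outcome.

    # Minimal sketch of a simulation-based assessment item. The model,
    # parameters, and scoring categories are hypothetical and illustrative only.

    def simulate_prey(prey=100.0, predators=20.0, steps=20):
        """Step a simple discrete-time predator-prey model; return prey counts."""
        trajectory = [prey]
        for _ in range(steps):
            births = 0.10 * prey                  # prey reproduction
            predation = 0.01 * prey * predators   # losses to predators
            predators += 0.0005 * prey * predators - 0.05 * predators
            prey = max(prey + births - predation, 0.0)
            trajectory.append(prey)
        return trajectory

    def score_prediction(predicted_trend, trajectory):
        """Categorical score: does the predicted trend match the simulation?"""
        observed = "decrease" if trajectory[-1] < trajectory[0] else "increase"
        return "correct" if predicted_trend == observed else "incorrect"

    trajectory = simulate_prey()
    print(score_prediction("decrease", trajectory))  # prey declines under predation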

In addition to permitting assessment of a broader range of scientific phenomena, alternative forms of assessment can overcome the on-demand, one-shot, decontextualized problems typically associated with traditional multiple-choice achievement tests. Curriculum-embedded and portfolio assessments of in-class, extended projects are seen by some reform advocates as methodologies more conducive to providing evidence of student progress (Glaser & Silver, 1994). These researchers make the point that curriculum-linked performance assessments based on theories of knowledge development can make cognitive activity and effort visible, thereby serving as catalysts for constructive teaching by providing opportunities for reasoning to be examined and questioned (Baxter, Elder, & Glaser, 1996).

In sum, the format of traditional, closed-ended science assessments significantly limits the assessment of many inquiry-based processes, the capture of knowledge about scientific phenomena that are too dynamic, complex, or dangerous to present in conventional formats, and the documentation of the strategies, scientific procedures, and reasoning that students use when engaged in scientific problem solving.

Technology Supports for Student Learning and Assessment.

As technologies revolutionize so many facets of the way we work and live, innovative applications of technology are making increasingly significant contributions to student learning and assessment. We view the exciting innovations that have been developed to support students' acquisition of scientific knowledge and inquiry skills as equally exciting tools for supporting student assessment. We believe that technology can move student assessment beyond annual, on-demand administrations of decontextualized tasks to ongoing conversations and appraisals of learning and accomplishments. Indeed, technology-based assessments can yield rich documentation of students' reasoning processes. As students use networks to collaborate on extended projects, technology can make learning more visible by providing electronic traces of the ideas students consider, the resources students access, and how often and in which ways students access them. Technologies can provide electronic logs of how often students interact with other students and/or experts, thereby supporting examinations of types and quality of collaborations. Teachers and researchers can collect sample excerpts of students' on-line work and electronic conversations. Analyses of these conversations and interactions can provide evidence of how well students are reasoning, how their understanding about subject matter knowledge is deepening, and how effectively they are collaborating with others. Formal assessments of student work can be conducted in on-line rater training and scoring sessions.
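A minimal sketch suggests how such electronic traces might be structured (in Python; the record fields and the sample events are our own assumptions rather than any particular project's schema): each student action is logged as a time-stamped record, from which simple summaries of how often and in which ways a student accessed resources can be derived.

    # Minimal sketch of an activity trace (hypothetical schema, illustrative only).

    from collections import Counter
    from datetime import datetime

    log = []  # a real system would persist these records, not keep them in memory

    def record_event(student, action, resource):
        """Append one time-stamped record of a student action."""
        log.append({"time": datetime.now().isoformat(),
                    "student": student, "action": action, "resource": resource})

    def resource_summary(student):
        """Count how often, and in which ways, a student used each resource."""
        return Counter((event["action"], event["resource"])
                       for event in log if event["student"] == student)

    record_event("Martina", "search", "Project Library")
    record_event("Martina", "post", "threaded discussion")
    record_event("Martina", "search", "Project Library")
    print(resource_summary("Martina"))
    # Counter({('search', 'Project Library'): 2, ('post', 'threaded discussion'): 1})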

Clearly, there is a need to harness the affordances of these powerful technology tools to strengthen the assessment of student learning. In our view, many of these technologies can be used or adapted to elicit, collect, document, analyze, appraise, and display kinds of student performances that have not been readily accessible through traditional testing methods. Furthermore, these technologies open the possibilities of ongoing, formative assessment of investigations-in-progress, in addition to the design of summative, end-of-project evaluations. We argue that a number of the technology applications that support science learning could be extracted, tuned, generalized, and repurposed or redesigned for assessment purposes. Our primary aim is to consider how the affordances of some of these technologies could be used to support more explicit, systematic student assessment and to make these technology-based approaches more widely known to the educational research community, particularly the assessment community.

The scenario in Exhibit 1 depicts ways that technology can revolutionize the integration of learning and assessment.

 

Exhibit 1

21ST CENTURY ASSESSMENT: USING TECHNOLOGY TO SUPPORT STUDENT SCIENCE ASSESSMENT

It is October 2000. Ms. Baron's science class in Portola Middle School is beginning a watershed ecology unit. The students' task is to identify a research question about the valley's watershed, conduct an investigation, and present their findings to the Water Board. Student investigation teams congregate at their computers and log onto the Internet. Martina's team is scheduled to confer on-line with teams from a nearby and a distant school participating in the same module. During the initial brainstorming session, Martina's team meets with its collaborators in a virtual laboratory in TAPPED IN. With their partners on the other two teams, Martina, Tim, and Kit record questions and ideas on a virtual whiteboard and engage in threaded discussions within a framework that scaffolds their planning. They paste their questions and ideas into their electronic Team Project Notebook, which automatically archives the data. At their next virtual meeting, the distributed teams identify their research questions and enter the Project Library to access information resources relevant to their research questions. The Project Library documents and archives students' queries, resources accessed, and time spent, and then automatically forwards a summary of each team's searches to Ms. Baron's Study Coordinator's Notebook. Students enter their research notes and possible tools for collecting such measurements as dissolved oxygen and pH into the on-line Team Project Notebook, which is automatically saved after each session.

Off-line, each team meets with Ms. Baron to debrief on its research question, use of resources, and selection of data collection tools. Ms. Baron refers to the archived summaries of each team's searches and a sample of their collaborative interactions with the distributed teams. During this plan-check conference, the effectiveness of planning strategies is assessed against criteria developed by the project. Ms. Baron records the ratings and her observations about the teams' and individuals' progress in the Study Coordinator's section of the Team Project Notebook.

Each team makes use of a collaboration tool that assists in assigning and scheduling tasks for each member. Martina and Tim collect water samples. All three students conduct water quality measurements after practicing the data collection protocols with on-line simulations that automatically record their accuracy. Tim records the actual data in the Team Project Notebook. Team members record their data collection procedures and findings in their individual Science Logs. They use criteria for checking their data collection and corroborate these self-assessments in on-line peer review sessions with teams from other schools. Each period, all students take 5 minutes to write down their reflections, questions, and understandings. Martina uses software tools to place her team's data into tables, graphs, and other representations. The software provides various templates and forms of scaffolding and archives which scaffolds are used, so that Ms. Baron can access this record later for her progress assessment of tool use. Teams share their developing data sets and analyses with the other partner teams, scientists, and community members in TAPPED IN. On-line evaluation criteria are referenced as teams compare disparate findings and debate alternative interpretations. Ms. Baron and other teachers participate in these conversations and enter their ratings of students' collaboration, analysis, and interpretations that have been archived in the Team Project Notebook Student Logs.

Teams develop a plan for their presentations and can choose to use one of the model presentation templates. Martina's team accesses the case library of exemplars of previous teams' presentations and refers to criteria for evaluating them. All teachers contribute to the teams' discussions and revisions of their presentations. On the day of the Water Board meeting, face-to-face and virtual presentations are evaluated by participating students and teachers using assessment rubrics that have been developed and revised for use by teachers, students, and critical friends.

The presentations and assessment rubrics are added to the Project Exemplar Library. Ms. Baron and other teachers hold discussions of the presentations on-line and in rater training and scoring sessions at the project summer institute held in TAPPED IN.

Technology Tools with Potential for Assessing Scientific Inquiry.

By analyzing some curriculum programs that use technology to support sustained inquiry, we can identify tools embedded in the programs that could be exploited for assessment purposes as well. We have chosen the domain of science as an example, although exemplary technology-supported curricula also can be found in other subject areas. Technology applications have been used in a variety of curriculum projects to support different cognitive and metacognitive components of science inquiry. Figure 1 presents a conceptual model that relates these existing technologies to seven key components of project-based science inquiry curricula: (1) rich environments with authentic problems, (2) collaboration, (3) planning, (4) investigation, (5) analysis and interpretation, (6) communication and presentation, and (7) monitoring, reflection, and evaluation. Technology applications also have been developed for assembling electronic notebooks or digital portfolios, which can be used to document an individual student's problem-solving efforts and to archive data for a research team, classroom, or school. Other technology applications, such as a digital library, can be used to collect examples of instructional resources, instructional activities, and assessments.

 

 

Figure 1. Conceptual model depicting general components of project-based science inquiry curricula and their relationship to assessment functions.

In the following section, we provide an overview of technology applications that have been implemented in a few lighthouse science inquiry projects and describe how these technologies could be used for rigorous assessment.

Key Components of Project-Based Science Inquiry Curricula

Rich Environments and Authentic Problems

Technologies are opening possibilities for students to engage in investigations that are beyond typical classroom resources. Students have access to a wide range of physical phenomena through Web-based technologies, visualizations, microworlds, simulations, and microcomputer-based laboratories. Among science curricula that employ technology to create powerful and engaging learning environments is Science Theater/Teatro de Ciencia. This project provides elementary students with a medium for exploring ideas about how things work, such as what makes rainbows, how predators and prey interact, or how tumors form (Lewis, 1996). In GenScope, technology enables students to explore the multilevel processes of genetics visually and dynamically, making explicit the causal connections and interactions among levels (Horowitz, 1996). Authentic technology-based problem scenarios in these projects provide an inquiry context and scaffolding for extended investigations. For example, the Learning by Design project has developed overarching problem scenarios in physical science and earth science in which students design a vehicle that can navigate in several different kinds of terrain that might be found on the moon or construct a working model to show how they would save a beach from destruction (Kolodner et al., 1997).

One important advantage of technologically enhanced instructional materials is that these simulations, which employ interactive, visual formats, offer a promising second chance for students who traditionally have experienced difficulty grasping abstract scientific concepts. Yet, these rich, immersive environments tend not to be exploited for the design of systematic, curriculum-embedded student assessments or for standardized performance assessment tasks. Two notable exceptions are projects developed by the Cognition and Technology Group at Vanderbilt (CTGV) and by VideoDiscovery. The SMART assessments developed by CTGV periodically embed complex problems, or assessment challenges, within an ongoing investigation (Vye et al., in press). VideoDiscovery is developing science simulations of investigations typically inaccessible or inappropriate in classrooms, such as conditions of plant growth or viral infections. These simulations are intended to be used as student performance assessments and contain investigation tools that could be used by other development teams to design assessments for their inquiry curricula (Clark & Taylor, 1998).

In addition to providing the environment for authentic investigations and design problems, technology can support assessment activities by offering variants of problems and tasks that differ in complexity and level of scaffolding. Thus, technology can offer a range of environments tailored to the skill levels of students.

Collaboration

Collaboration tools can connect learners to vast resources, remote experts, and distant peers. Three threaded discussion tools that have been developed are "Web-SMILE," developed by the EduTech Institute's Learning by Design project; the "Collaboratory Notebook," developed by the Learning Through Collaborative Visualization (CoVis) project; and "SpeakEasy," developed by the Knowledge Integration Environment (KIE) project (Kolodner et al., 1997; O'Neill, 1997; Linn, 1997).

Students' development of effective collaboration strategies can be assessed with and by these technologies. For example, as students conduct their research and investigations or initiate their designs, the technology could document the collaborations, automatically sample excerpts of interactions, and calculate frequencies and types of resources and experts accessed. These typically inaccessible data could form the bases of assessments of growth in students' collaboration strategies.
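The sketch below illustrates one such index (in Python; the message format and role categories are hypothetical, not drawn from any of the tools named above): counting the frequency and type of a student's exchanges with peers and experts from a discussion log.

    # Minimal sketch: collaboration indices from a discussion log
    # (hypothetical message format; real tools would define their own schema).

    from collections import Counter

    messages = [
        {"sender": "Martina", "recipient": "Tim",        "role": "peer"},
        {"sender": "Martina", "recipient": "Dr. Rivera", "role": "expert"},
        {"sender": "Tim",     "recipient": "Martina",    "role": "peer"},
    ]

    def collaboration_profile(student):
        """Frequency and type (peer vs. expert) of a student's exchanges."""
        return Counter(m["role"] for m in messages if m["sender"] == student)

    print(collaboration_profile("Martina"))  # Counter({'peer': 1, 'expert': 1})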

Planning

A number of curriculum projects have also developed digital templates to support student planning of investigations. These templates scaffold such activities as posing a problem, formulating a hypothesis, analyzing problem elements, and selecting investigation methods. Examples are the Learning by Design project's Design Diary and the CoVis Collaboratory Notebook (Kolodner et al., 1997; O'Neill, 1997). The SMART assessments developed by CTGV ask students to order investigation tools from catalogues and then provide feedback on the appropriateness of their orders (Vye et al., in press). Another planning tool, Planet-Out, is intended to help teams create, assign, and schedule tasks (Hoffman, Krajcik, & Soloway, 1998).

As students engage in such activities, Web-based notebooks that organize and scaffold planning activities can save these plans as digital portfolios, and the Web can host on-line collaborations and conversations about the plans. Explicit assessments of students' planning strategies can take place through peer reviews and the development of criteria, both as the plans are being developed and during end-of-project reflections, on the appropriateness and effectiveness of planning strategies.
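As one illustration of how a planning scaffold might be represented so that plans can be archived and reviewed, consider the minimal sketch below (in Python; the prompts and field names are invented for illustration and do not reproduce any of the templates named above).

    # Minimal sketch of a planning scaffold (hypothetical prompts and fields).
    # The template structures the plan; missing sections can be flagged for
    # peer review or a teacher conference before the investigation begins.

    PLAN_PROMPTS = {
        "question":   "What question will your team investigate?",
        "hypothesis": "What do you predict, and why?",
        "methods":    "How will you collect and record your data?",
    }

    def review_plan(plan):
        """Return the prompts a team has not yet answered."""
        return [prompt for field, prompt in PLAN_PROMPTS.items()
                if not plan.get(field, "").strip()]

    team_plan = {"question": "How does runoff affect stream pH?", "hypothesis": ""}
    for prompt in review_plan(team_plan):
        print("Still needed:", prompt)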

Investigation

Visualizations, simulations, and measurement and data collection tools permit investigations typically inaccessible to classrooms. Technologies can make available to students the investigative tools used by scientists and provide access to processes that are not directly observable, are too expensive or dangerous, take place too quickly or too slowly, or are on a scale that is too small or too large. For example, WorldWatcher/CoVis and Global Learning and Observations to Benefit the Environment (GLOBE) are two projects that provide students with access to data sets and visualizations about weather and worldwide environmental systems. These curricula have transformed tools and techniques developed for scientists into environments that support students in the development of robust scientific understanding (Gomez, Fishman, & Pea, in press; the GLOBE Program, http://www.globe.gov). Students analyze and interpret scientific data by using a variety of technology-based tools. The visualizations used in these projects could also become components of performance assessments that test students' understandings and interpretations of the images as they relate to earth systems concepts. Such investigation tools can both support inquiry and produce records that allow assessment of students' appropriate use of the tools.

Analysis and Interpretation

A highly significant advance enabled by technology has been the development of analysis tools that allow interpretation of complex data sets and creation of descriptive and explanatory models. Spreadsheets and graphing tools are becoming widely available and could be readily incorporated into assessment activities. SimCalc offers a graphing calculator that other projects could use to support analyses and to test whether those analyses have been done appropriately (Kaput, Roschelle, & Nemirovsky, 1998). The ScienceWare project offers Model-It, a tool for constructing and testing models of complex, dynamic systems such as stream ecosystems (Krajcik, Marx, Soloway, Blumenfeld, & Singer, 1998).

As students analyze data and evidence and formulate models, technology tools can support the analyses, in some cases rate their accuracy or appropriateness in comparison with expert models, and save the analyses for further discussion and appraisal. By offering usable, alternative modes for representing findings and conclusions, technology affords multiple modalities for students to show what they know and understand.
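A minimal sketch of such automated rating follows (in Python; the variables, values, and tolerance are our own assumptions): summary statistics from a student's analysis are compared against an expert reference model and assigned categorical ratings that a teacher could review alongside the archived analysis.

    # Minimal sketch: rating a student analysis against an expert reference model
    # (hypothetical data and tolerance; real tools would use richer comparisons).

    def rate_analysis(student_values, expert_values, tolerance=0.05):
        """Categorical rating: within tolerance of the expert model or not."""
        ratings = {}
        for variable, expert in expert_values.items():
            student = student_values.get(variable)
            if student is None:
                ratings[variable] = "missing"
            elif abs(student - expert) <= tolerance * abs(expert):
                ratings[variable] = "acceptable"
            else:
                ratings[variable] = "discrepant"
        return ratings

    expert = {"mean_dissolved_oxygen": 8.2, "mean_pH": 6.9}
    student = {"mean_dissolved_oxygen": 8.4, "mean_pH": 7.8}
    print(rate_analysis(student, expert))
    # {'mean_dissolved_oxygen': 'acceptable', 'mean_pH': 'discrepant'}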

Communication and Presentation

Technology offers a variety of aids for formal publication and presentation of the results of investigations, as well as for communication during investigations, as described in the section above on collaboration tools. General-purpose presentation programs such as PowerPoint can support communication and dissemination of scientific findings; however, a number of scientific inquiry projects have developed customized publication tools to scaffold students' organization of ideas, evidence, explanations, representations of data, and conclusions. One example is SenseMaker, which scaffolds students' organization of their questions, hypotheses, evidence, data, and conclusions for reports of investigations (Linn, 1997).

For assessment purposes, these same tools can support formative conversations about the quality of learning visible in students' work or formal and on-line evaluations using established scoring rubrics.
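For instance, an established scoring rubric might be represented and applied as in the minimal sketch below (in Python; the dimensions, scale, and ratings are invented for illustration), with each rater scoring each dimension and scores aggregated across raters.

    # Minimal sketch of rubric-based scoring (hypothetical dimensions and scale).
    # Each rater scores each dimension from 1 to 4; dimension scores are then
    # averaged across raters to summarize the quality of a piece of work.

    RUBRIC = ("evidence use", "explanation quality", "data representation")

    def aggregate(ratings):
        """Average each rubric dimension across raters."""
        return {dim: sum(r[dim] for r in ratings) / len(ratings) for dim in RUBRIC}

    ratings = [
        {"evidence use": 3, "explanation quality": 4, "data representation": 3},  # rater A
        {"evidence use": 4, "explanation quality": 3, "data representation": 3},  # rater B
    ]
    print(aggregate(ratings))
    # {'evidence use': 3.5, 'explanation quality': 3.5, 'data representation': 3.0}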

Monitoring, Reflection, and Evaluation

In addition to supporting ongoing monitoring of investigations and design, some technology applications have been developed to support students' metacognitive reviews of their overall project inquiry or design strategies, outcomes, and possible extensions. Some of these projects have also structured technology-supported activities for the evaluation of reports and products. The Progress Portfolio, developed by the Supportive Inquiry-Based Learning Environment (SIBLE) project, is a general-purpose tool that can be customized to structure records of, and support conversations about, students' reasoning during an investigation, the data generated, and their observations and conclusions. The Design Diary prompts students to continually evaluate their designs according to purpose, structure, function, durability, safety, ease of use, and development costs. Another tool, JavaCAP, is a case authoring tool that fosters evaluation and reflection by organizing completed designs into four scenes combining text and images: Problem Presentation, Alternative Selection, Solution, and It's a Wrap (Kolodner et al., 1997).

Thus, technology can organize and store students' iterative designs and solutions for ongoing monitoring and assessment, as well as for retrospective evaluation. As students and collaborators reflect on lessons learned, evaluate reports and designs, and consider extensions, technology tools can support the development and use of scoring rubrics. Furthermore, the analysis and storage capacities of technologies can support interpretations of assessments and facilitate production of results for a variety of audiences.

Digital Libraries

The vast capabilities of technologies for access, storage, analysis, and display are beginning to be tapped to create student assessment resources. Digital archives of investigations and case libraries, accompanied by rubrics and student reports, designs, and products, can be assembled into intelligent, on-line resource libraries that link the collections to science standards. JavaCAP allows students to retrieve design cases from a library of worked-out designs (Kolodner et al., 1997). Performance Assessment Links in Science (PALS) is being developed to provide student science performance assessments of demonstrated technical quality for use by classroom teachers and professional development groups (Quellmalz & Schank, 1998). The collection offers more than 100 science investigations spanning elementary, middle, and secondary levels. An accompanying PALS Guide supports use and adaptation of the assessments.

In sum, research and practice in technology-supported science applications promise to break the constraints that have locked educators into the use of on-demand, closed-ended items that rely on paper-and-pencil formats and measure limited science content and processes. Technology-based tools, with development and investment from the educational measurement community, are candidates for measuring constructivist reforms during the 21st century. To illustrate some features of these technology tools, screen shots are presented from three technology-based curricula. Figure 2, from WorldWatcher, illustrates the use of a visualization that enables students to conduct a global investigation of the atmosphere. Figure 3, from SimCalc, illustrates a graphing tool that students can use to analyze data. Figure 4, from the Inquiry Scorer, scaffolds judgments about the phases and structure of scientific inquiry represented in students' project reports (Frederiksen & White, 1998).

Figure 2. A visualization window from the WorldWatcher software.

 

Figure 3. The SimCalc graphing tool.

 

Figure 4. Typical Inquiry Scorer screen.

Research and Development Considerations in Adapting Technology for Assessment

Repurposing technology tools embedded in scientific inquiry curricula for use as general-purpose assessment tools raises issues relating to the architectures of the parent technology tools, intellectual property rights and licensing agreements for use of the applications, and the utility of the tools for teachers and students. One priority in the quest for technology tools that can be used by new assessment development teams is to identify tools that, with minor tuning and enhancement, can be used for assessment purposes. Currently, a number of the tools described above are Web compatible and cross-platform ready. During the search for and analysis of potentially reusable technology components, tools can be categorized according to the level of effort required to repurpose them for general use. At the simplest level, no source code changes would be necessary to apply existing technology tools to another inquiry project. A second level of effort might involve minor changes to source code. A third level might involve adding configurability and interactivity to a tool, a modification that might require minor reprogramming. A fourth level might involve adding a new feature, such as the ability to record student responses. If major redesign and reprogramming are required to convert a tool to another platform or language, the level of effort might need to be negotiated between an assessment project and the original designer.
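A development team might catalogue candidate tools by these levels of effort, as in the minimal sketch below (in Python; the tool names and level assignments are hypothetical, and the levels simply follow the four described above).

    # Minimal sketch: cataloguing candidate tools by repurposing effort
    # (hypothetical tool names; levels mirror the four described in the text).

    from enum import Enum

    class ReuseEffort(Enum):
        AS_IS = 1            # no source code changes needed
        MINOR_CHANGES = 2    # small source code edits
        CONFIGURABILITY = 3  # add configurability/interactivity
        NEW_FEATURE = 4      # e.g., add recording of student responses

    catalogue = {
        "graphing tool": ReuseEffort.AS_IS,
        "visualization viewer": ReuseEffort.NEW_FEATURE,
    }
    for tool, effort in catalogue.items():
        print(f"{tool}: level {effort.value} ({effort.name})")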

To facilitate the reuse of technology tools that are appropriate and generalizable for science assessment purposes, it would be helpful to have guidelines and technical specifications for their use. Such guidelines and specifications, or "patterns" in the sense used in architecture and software design (Gamma, Helm, Johnson, & Vlissides, 1995), would document best-practice technology-supported assessment solutions and the contexts in which they occur, and they could greatly help development teams apply technology in assessment activities.

Achieving the economies of shared resources will frequently involve intellectual property rights and licensing agreements when technology tools are repurposed from extant products. The SRI Educational Software Components of Tomorrow (ESCOT) project has formed a range of agreement strategies for engaging publishers, developers, and practitioners in mutually advantageous, collaborative efforts. ESCOT takes the innovative approach of assembling math education software from components rather than handcrafting new programs or applications for each curricular need. Typical components include graphs, tables, and simulations, as well as tools for manipulating geometry and algebra. ESCOT constructs individual educational components so that nontechnical authors can flexibly combine them to compose new activities and lessons.
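The component idea can be sketched roughly as follows (in Python; the interface is our own invention and does not represent ESCOT's actual architecture): small components share a common interface so that an author can compose them into an activity without programming each one from scratch.

    # Minimal sketch of component-based assembly (interface invented here;
    # not ESCOT's actual architecture). Components share a common render()
    # method so that an authoring tool can compose them into one activity.

    class Component:
        def render(self):
            raise NotImplementedError

    class Prompt(Component):
        def __init__(self, text):
            self.text = text
        def render(self):
            return "Q: " + self.text

    class DataTable(Component):
        def __init__(self, rows):
            self.rows = rows
        def render(self):
            return "\n".join(", ".join(map(str, row)) for row in self.rows)

    def compose_activity(components):
        """An author combines ready-made components into one activity."""
        return "\n\n".join(c.render() for c in components)

    activity = compose_activity([
        Prompt("What pattern do you see in the stream data?"),
        DataTable([("site", "pH"), ("upstream", 6.9), ("downstream", 7.8)]),
    ])
    print(activity)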

By creating partnerships among technology-savvy science curriculum developers, educational practitioners, science students, and assessment experts, it will be possible to create prototypes that integrate promising technology-based assessments into models of formative and summative assessments. Research indicates that a participatory design approach that includes teachers and developers is essential in the development of such prototypes (Nielsen, 1993). For example, the ESCOT project has created integration teams composed of teachers and technology developers. Teachers join short-term teams that include developers who have the necessary authoring and programming experience and Internet facilitators who can support the development of on-line conversations, Web pages, and student assessment tasks. Teachers contribute their pedagogical and curricular experience, share the work of their students, and provide field-testing opportunities (Smith & Kollock, 1998); developers contribute programming experience and debugging capabilities.

In a similar vein, assessment design teams can join forces to "share, swap, and ship" - that is, to make their tools usable by others (share), to design assessment applications of others' technology tools to embed in their own curricula (swap), and to provide the tools for use by new projects (ship). As one example of a way co-design teams can break the mold of traditional testing practice, the SRI 21st Century Assessment project is currently creating a prototype of technology-based science assessments composed of tools taken from curriculum projects. This NSF research effort is leveraging funds from a seed grant from the Assessment Theme Team of the Center for Innovative Learning Technologies (CILT) to pilot test a suite of technology-based assessment tools in urban classrooms using technology-supported scientific inquiry curricula. Students in classrooms participating in the Learning Technologies in Urban Schools (LeTUS) project will pilot the technology-based assessment tools within their classroom science curriculum. The assessment prototype will be composed of tools repurposed from NSF-funded science inquiry curricula such as WorldWatcher, SimCalc, and Learning by Design. Component tools for measuring investigations of climate, analyzing climate data, and critiquing investigation conclusions will be embedded within the NSF-funded Progress Portfolio to support ongoing teacher and student assessment of scientific inquiry strategies. This multipronged research leverages significant research and development investments to further the pursuit of research-based assessment approaches that inform understanding of students' development of science knowledge and inquiry strategies.

A New Era of Flexible Forms

To break the mold of on-demand, superficial testing, educational researchers and practitioners must join forces to exploit the affordances offered by available and emerging technologies. Innovative, technology-supported projects can break free from the constraints of traditional assessment procedures and task designs. Table 1 contrasts the procedures typical of traditional testing with the increased scope and flexibility of technology-supported assessment procedures. Assessment procedures involve replicable administration of items and tasks, collection and archiving of representative samples of performances, scoring and analysis of the quality of performances, and organization, recording, and presentation of results to myriad audiences. In the past, economics, logistics, and the paper-and-pencil format have severely limited testing procedures. In the 21st century, assessments can be administered to individuals or groups in diverse settings under conditions that accommodate or scaffold according to learners' entering levels. Ongoing assessments can be given within curriculum units. Student responses can be collected in digital formats for ready retrieval and analysis. Various forms of automated, categorical scoring will support qualitative and quantitative summaries of performance. Electronic portfolios and case libraries will collect and display student work and digitally link that work to standards.

Table 1

Contrasts of Traditional Testing Procedures with Technology-Supported Assessments

Administer
Traditional assessments:
  • To individual learners
  • One common setting
  • Standardized conditions and procedures
  • Limited accommodations
  • On-demand, arbitrary timing
  • Annual; summative
Technology-supported assessments:
  • To individual learners or groups
  • Multiple, distributed settings, including labs
  • Documented, flexible conditions and procedures
  • Extensive accommodations/scaffolding
  • Embedded, just-in-time
  • Ongoing or annual; formative or summative

Collect and archive
Traditional assessments:
  • Paper-and-pencil and optical-scan formats
  • One time, one sample
Technology-supported assessments:
  • Digital text archives
  • Digital video and audio archives
  • Internet-search traces
  • Collaboration records
  • Multiple collections, multiple samples
  • Digital portfolios

Score and analyze
Traditional assessments:
  • Number correct or categorical ratings
  • Quantitative, cumulative data
Technology-supported assessments:
  • Qualitative and quantitative scoring and interpretations
  • Automated scoring of natural-language responses (e.g., essays)
  • Coding/indexing of constructed responses and performances

Organize, record, and present
Traditional assessments:
  • Graphical displays of scores and ratings
  • Some text work samples
Technology-supported assessments:
  • Electronic portfolios that capture and score student progress recorded in text, audio, and video
  • Links to standards
  • Digital case libraries

 

Technologies also greatly enhance the design possibilities for assessment tasks and items. We have described how technology-supported science inquiry curricula offer tools that can be used more generally to develop science assessments. Table 2 contrasts the item and task designs of traditional tests with the potential designs of technology-supported assessments of scientific investigation strategies. In the assessment of science inquiry, technologies can offer complex, rich problems that can be adapted to students' ability levels and prior knowledge. They can support collaborative research and problem solving and document students' strategies as they plan, design, and carry out research investigations. Technologies can offer access to and present complex, vast resources for exploring research problems and conducting investigations. Simulations and a range of measurement tools vastly expand the kinds of science measurements and analyses students can perform virtually and in the field. Students can access, examine, and manipulate massive data sets presented in multiple representational formats, such as visualizations. Spreadsheets and graphing tools can support analyses and displays of evidence and data. At the same time, technologies can document these ongoing strategies. They can be used to organize and make multimedia presentations of findings from science investigations. Technologies can collect, organize, and scaffold students' metacognitive reflections on their progress as effective problem solvers, at the same time supporting systematic assessment of achievement through case libraries of student work appraised according to rubrics and accompanied by expert commentary.

Table 2

Contrast of Task/Item Designs in Traditional Tests with Technology-Supported Assessments of Scientific Inquiry

Contexts and problems
Traditional testing practice:
  • Decontextualized content
  • Discrete, brief problems
Technology-supported assessments:
  • Rich environments
  • Extensive Web-based resources
  • Access to remote experts
  • Extended, authentic problems
  • Scaffolded/adapted tasks and sequences

Collaboration
Traditional testing practice:
  • Typically prohibited
Technology-supported assessments:
  • Directly assessed in ongoing documentation

Planning and design
Traditional testing practice:
  • Seldom tested; when tested, limited to brief responses
Technology-supported assessments:
  • Documented and appraised iteratively

Conducting investigations, collecting data
Traditional testing practice:
  • Infrequently tested
  • In performance tasks, limited to safe, economical, accessible equipment
Technology-supported assessments:
  • Addressed in Web searches, hands-on tasks, simulations, and probeware

Analyzing and interpreting
Traditional testing practice:
  • Typically limited to small data sets and hand calculations
Technology-supported assessments:
  • Massive data sets and visualizations can be handled
  • Complex multivariate analyses can be conducted and displayed with spreadsheets and graphing tools

Communicating and presenting
Traditional testing practice:
  • Occasional brief, written conclusions and reports
Technology-supported assessments:
  • Support and documentation for ongoing informal communication, multimedia reports, and presentations

Monitoring, evaluating, reflecting, extending
Traditional testing practice:
  • Typically not tested; if tested, in brief written format
Technology-supported assessments:
  • Documented and scaffolded by electronic notebooks, portfolios, and on-line multimedia case libraries of student work rated according to rubrics with annotated commentary

 

In this article, we have described but a few of the kinds of co-design efforts that can advance a research and development agenda focused on improving student assessment. The technology tool kits for science assessment that we urge here would both enrich and anticipate the growing technology infrastructures and capacities of today's classrooms. With these tools, we believe our students can experience more fully the possibilities of education in the 21st century.

Although the development of such forms is ahead of much current capacity to use them, there is ample evidence that such development will occur and such tools will be available as access to these new technologies and expertise in implementing them increase. This trend promises to break the mold of conventional assessment forms and offer students and teachers new forms that seamlessly integrate the affordances of computer-based technologies with assessment and learning.