This originally appeared on the JiscInvolve Co-design blog

When the COVID-19 pandemic hit in early 2020, it forced UK education institutions to shift teaching online. Many of us assumed it might only be for a few weeks, but as spring gave way to summer, it became clear that assessments would have to change markedly and quickly. Three years on, we have reached a point where we can reflect on these rapid changes in practice. Maybe you introduced a number of new software services and are struggling to embed their use, or maybe you feel you failed to capitalise on some of the benefits of digitalisation across your assessment process.

At Jisc, we have begun developing a digital assessment maturity model to give institutions the chance to assess their current level of maturity and use it to plan their next steps.

Creating the maturity model

We have split the maturity model into five stages, with each stage denoting a significant shift in practice. The stages are:

  • Approaching and understanding;
  • Experimenting and exploring;
  • Operational;
  • Embedded;
  • Transformational.

The progression from one stage to the next is cumulative; each stage builds on the one before. Within each stage, we have attempted to identify activities from across the wide spectrum of assessment practice within an institution. After introducing the generalised model, you will find more detailed sections for each phase of the assessment process below.

We didn’t intend for the model to be a linear pathway you progress along, but rather a self-assessment checkpoint for your institution (previously I developed an individual digital literacies self-assessment tool) and a source of suggestions for the future of digital assessment in your institution. For many institutions, moving along the digital assessment maturity scale may not be appropriate or favourable. Indeed, achieving ‘transformational’ may not be compatible with your internal practices or strategies.

Please don’t use this as a checklist to determine your stage; use it as a holistic tool to give a potential direction of travel. Remember that Jisc has a range of experts and experience in supporting the development of digital assessment, so please reach out to your Relationship Manager for more information.

Considerations and constraints

We have purposely avoided assessment methods and methodologies, as they are often subjective and pedagogically-driven (Jisc are looking at that elsewhere). Therefore, it was important to focus on the digital assessment process and the underlying technologies as enablers.

We identified five phases of the assessment process: design, creation and resourcing, completion and submission, marking and feedback, and quality management. Although institutions may have their own unique way of approaching digital assessment, we anticipate these areas are suitable for all. However, these phases may occur sequentially or concurrently, and different teams in your institution may carry out the related activities. We will be following up this post with a series that unpacks each area in more detail, giving specific examples of some of the activities you might see for each phase.

Introducing the digital assessment maturity model

[Figure: a graph with an exponential curve rising from left to right, split into five sections. The left-most section is labelled ‘approaching and understanding’: digital assessments may be used as a supplement to traditional paper-based assessments; technology is used primarily for administration and data collection purposes. The second section is labelled ‘experimenting and exploring’: institutions are actively exploring and experimenting with different digital assessment tools. The middle section is labelled ‘operational’: institutions are beginning to integrate digital assessment tools into their courses and have implemented processes to ensure that their assessments are reliable and valid. The fourth section is labelled ‘embedded’: technology is used to provide adaptive and personalised learning experiences, to support collaboration and co-creation among students, and to provide real-time feedback and data analysis. The right-most section is labelled ‘transformational’: technology is used to support the full integration of assessment and instruction, and to support the use of data for continuous improvement of process and performance. Examples of evidence shown below the curve are elaborated on in the accompanying text.]
Digital Assessment Maturity Model. Emma Beatson, 2023

Digital assessments and the digitalisation of the processes surrounding assessment may be rare or non-existent at the approaching and understanding stage. Many activities will be paper-based and manual. Where technology is used to support assessment, it is primarily around administration or data collection. Assessments will be designed to be accessible to all students, which may include the use of digital alternatives, assistive technologies or interventions, such as an amanuensis.

Institutions that are actively experimenting and exploring may be using digital assessments in small-scale trials, or looking at alternative assessment modes and methods. This may include automated knowledge-check MCQs (multiple choice questions) and formative assessment. They are beginning to understand the benefits of using technology to assess student learning, and to build the skills and knowledge required to effectively implement digital assessment tools. Institutions may be looking at BYOD (bring your own device) or other policies to ensure students have access to suitable learning technologies.

When digital assessment is operational, it may be used to enhance and optimise traditional assessment processes. Essays are submitted via a VLE, exams may have moved online, and the use of anti-plagiarism software may be commonplace. There may be processes and policies designed to ensure digital assessments are consistent, reliable and valid. Discrete systems to support assessment may start being integrated and specified in the curriculum. There is suitable infrastructure and resourcing to support digital assessment, including Wi-Fi, hardware and software. Staff and students are trained and supported in the use of new technologies for assessment.

Digital assessments become fully integrated and embedded into the curriculum when technology begins to provide adaptive and personalised learning and assessment experiences for students, and data analytics for staff use. Assessment technologies are integrated with various institutional systems, such as the VLE, student record system, and curriculum management system, to allow the creation of a holistic picture of student performance. Analytics may be used to trigger interventions for students at risk of failing. There may be purpose-designed spaces on campus for student assessment.

At the transformational stage, digital assessments are used to support and enable different forms of pedagogic practice, such as problem-based learning, competency-based education and personalised learning. Data generated by assessment is used for continuous improvement of process and performance. Institutions have fully integrated digital assessment tools into their teaching and learning. They are actively and critically exploring new technologies and approaches for assessment, and are beginning to use predictive analytics to better understand student learning.

Digital assessment design

When approaching and understanding assessment design, most assessment will be simple, and often paper-based. The design process is often manual. Digital assessment may sometimes be used to supplement traditional assessment methods.

Experimenting with digital assessment may introduce additional modes of assessment, such as MCQs, and may use existing technologies, such as VLE tools, to introduce elements of question randomisation and auto-grading of basic assessments. The assessment process may involve increased digitalisation. The use of digital assessment may initially be focused on low-/no-stakes formative assessment. Technology use is primarily aimed at replicating existing methods of assessing.
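To make the randomisation and auto-grading idea concrete, here is a minimal sketch of how a basic MCQ might be drawn from a question bank, shuffled and scored. It is illustrative only: the question bank, field names and scoring are assumptions, and in practice a VLE quiz tool handles all of this internally.

```python
import random

# Illustrative question bank; in a real institution this would live in the VLE.
QUESTION_BANK = [
    {"id": "q1", "text": "2 + 2 = ?", "options": ["3", "4", "5"], "answer": "4"},
    {"id": "q2", "text": "The capital of France is?", "options": ["Paris", "Rome", "Madrid"], "answer": "Paris"},
    {"id": "q3", "text": "H2O is the formula for?", "options": ["Oxygen", "Water", "Hydrogen"], "answer": "Water"},
]

def build_quiz(bank, n_questions, seed=None):
    """Draw a random subset of questions and shuffle each question's options."""
    rng = random.Random(seed)
    quiz = [dict(q, options=list(q["options"])) for q in rng.sample(bank, n_questions)]
    for question in quiz:
        rng.shuffle(question["options"])
    return quiz

def auto_grade(quiz, responses):
    """Score one attempt; responses maps question id to the chosen option."""
    correct = sum(1 for q in quiz if responses.get(q["id"]) == q["answer"])
    return correct / len(quiz)

quiz = build_quiz(QUESTION_BANK, n_questions=2, seed=42)
score = auto_grade(quiz, {"q1": "4", "q2": "Paris", "q3": "Water"})
print(f"Score: {score:.0%}")
```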

As things become operational, elements of personalisation may become apparent, especially to support additional needs. There may be some level of automation of the design process through the use of rubric builders or shared resources. Disciplines may receive additional software to support subject-specific requirements, such as mathematical notation. Assessment methods or approaches may be augmented through the use of technology. Specific technologies may be used beyond the VLE, such as in-class MCQ services.

The shift to embedded digital assessment will see the design process informed by data analytics and may include elements of machine learning to determine the optimum assessment strategy. There will be increased personalisation designed into assessments. The use of automated knowledge and application checks as formative assessment will be extensive, giving students real-time feedback on their performance. The assessment methods or approaches may be significantly modified through the use of technology.

In a transformational scenario, assessments are fully dynamic, constantly updated, and personalised to each student. Machine learning algorithms may refine the assessment design to optimise the assessment experience. Assessment design will be integrated into a larger digital learning ecosystem, made up of a range of specific services to enable variety and flexibility for staff and students. Assessments will assess a wide range of skills and applications of knowledge. Assessment design is likely significantly reimagined through the use of technology.

Digital assessment creation and resourcing

At the level of approaching and understanding, assessments are often limited to a small number of assessment types and are manually created. The timetabling and administration of assessments is a manual, paper-based process, relying on the labour of academics and administrators. There may be a lack of consistency in process and practice, potentially leading to errors and scheduling conflicts.

When experimenting and exploring digital assessment creation and resourcing, you may begin to see the use of standardised templates and question banks. There may be some level of automation of the creation process, and the integration of multimedia or interactive elements. Spreadsheets or calendars may be used to record and plan exam and assignment timetables. There is little integration of these systems with others relating to the assessment process.

As digital assessment creation is operationalised, the creation process is more automated and may offer rule-based differentiation or personalisation of assessments, such as the use of adaptive release in VLEs. There will begin to be a level of integration between timetabling, assessment management, VLE, curriculum management, and student information systems. Timetabling is fully automated but may rely on some human ‘tidying’. Assessment load and bunching can be identified and acted upon. Additional resources will be available to students to support them to develop their assessment skills, often within the assessment system itself.

When digital assessment is embedded, integrated timetabling, assessment and student information systems will automatically generate a personalised, flexible and adaptive assessment. Students may have the opportunity to choose the mode of assessment to evidence meeting learning outcomes. The creation process will use data from historical assessments, student performance and demographic data to optimise the creation and resourcing of assessment.

Digital assessment creation and resourcing may be transformational when assessments are fully dynamic. This means the modes, scheduling, and resourcing of assessments may be variable and flexible. Students may be able to complete assessments according to their own schedules, rather than follow the institution’s calendar. Timetabling may be fully automated, actively avoiding bunching, and may allow personalisation through the booking of assessment windows. Curriculum data from other systems may be used to automatically generate assessment points.

Digital assessment completion and submission

There may be no digital element of assessment at the approaching and understanding stage of the completion and submission process. While students may use word processors or other software to generate their work, submission may be paper- or storage media-based. There is limited use of technology to manage and track submissions; this is likely paper-based or a standalone system, such as a spreadsheet. In some instances, assessments are printed out for marking, feedback, and archival purposes.

At the experimenting and exploring stage, digital platforms are used for submission, such as email, Google Forms, OneDrive, or basic VLE tools — but this is not consistent. Managing and tracking of submissions is carried out within the system, and may provide students with a digital receipt or record of submission.

When digital assessment is operational, digital submission is used for most assessment modes. Often there will be specialist assessment tools, such as Turnitin or Wiseflow. There may be a level of automated plagiarism checking or grading present, as well as the ability to provide instant feedback (especially in the case of MCQs). Late submissions are automatically flagged, and there may be a notification system for staff and students. There may be some level of automation of reasonable adjustments, but this may rely on significant manual intervention.
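As a concrete illustration of the late-submission flagging mentioned above, the following sketch compares submission timestamps against a deadline and reports how late each one is. The deadline, records and field names are assumptions for illustration; a real platform would do this internally and raise notifications to staff and students.

```python
from datetime import datetime, timezone

# Illustrative deadline and submission records; field names are assumptions.
DEADLINE = datetime(2023, 5, 12, 16, 0, tzinfo=timezone.utc)
submissions = [
    {"student": "s001", "submitted_at": datetime(2023, 5, 12, 15, 40, tzinfo=timezone.utc)},
    {"student": "s002", "submitted_at": datetime(2023, 5, 12, 16, 25, tzinfo=timezone.utc)},
]

def flag_late(submissions, deadline):
    """Return (student, minutes late) for every submission after the deadline."""
    late = []
    for sub in submissions:
        overrun = sub["submitted_at"] - deadline
        if overrun.total_seconds() > 0:
            late.append((sub["student"], int(overrun.total_seconds() // 60)))
    return late

for student, minutes in flag_late(submissions, DEADLINE):
    # In a real system this would trigger notifications to staff and the student.
    print(f"{student} submitted {minutes} minutes late")
```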

The embedded level is similar to operational, but practice is more consistent. The digitalisation of non-traditional assessments, such as videoing presentations or scanning artwork, may be more commonplace, allowing digital marking and feedback. There may be more robust policy and processes to govern submitted media and its archival.

Transformational digital assessment may include the use of analytics and data visualisation to explore long-term trends and personalise student support. Reminders or hints could be released based on student activity, e.g. flagging students who haven’t accessed content or module handbooks. The process for providing additional support or reasonable adjustments may be fully automated, based on student needs, assessment type, and support available. There may be some level of integration between assessment systems and the tool used to complete the assignment.

Digital marking and feedback

The marking and feedback for assessments is manual at the approaching and understanding stage. Staff will grade assignments and provide hand-written feedback to students. The feedback process is often time-consuming and may not be consistent across teachers or assessments.

At the experimenting and exploring stage, some level of automation may be introduced into the process. Multiple choice questions may have pre-determined grading and automated per-question or generalised feedback. Feedback begins to be delivered in digital formats. Academic staff have access to suitable equipment to support digital marking, such as two monitors or input devices.

At an operational stage, personalised, digital feedback is provided for each student. There may be some element of aggregation and analysis of marking and feedback to provide both longitudinal development for individual students, and cohort analysis. Academic staff have access to multimedia creation resources in order to provide marking and feedback in the most appropriate manner. For group assessments, it may be possible to mark and give feedback based on whole-group and individual contributions within the system. Rubrics will be used, but may not be marked against digitally. To support staff, assessment content may be converted to different media types to best support their working practices or additional needs. There may be provision for second-marking, multiple markers, team marking or batch marking within cohorts.

When digital assessment is embedded, there may be the incorporation of analytics and predictive models to provide adaptive, automated feedback. Feedback may be aggregated for individual students to allow them to note patterns. Rubric use is systematic and widespread, with grading and associated feedback digitally captured, and shared with other systems. Group assessments may allow peer assessment or weighting. The use of standard feedback phrases may be partially automated or suggested. Second-marking, multiple markers, team marking, or batch marking may be fully automated based on pre-set criteria.
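To illustrate what digitally captured rubric marking with suggested standard phrases might look like in data terms, here is a minimal sketch. The rubric, weights and phrases are illustrative assumptions rather than any particular system's schema.

```python
# Illustrative rubric: weighted criteria plus standard feedback phrases keyed by score.
RUBRIC = {
    "argument": {"weight": 0.4, "phrases": {4: "Sustained, well-evidenced argument.",
                                            2: "Argument present but needs more supporting evidence."}},
    "structure": {"weight": 0.3, "phrases": {4: "Clear, logical structure throughout.",
                                             2: "Structure is uneven; signpost the key sections."}},
    "referencing": {"weight": 0.3, "phrases": {4: "Accurate, consistent referencing.",
                                               2: "Check the referencing style for consistency."}},
}

def mark(scores, rubric=RUBRIC, max_score=4):
    """scores maps criterion -> 0..max_score; returns a percentage and suggested feedback lines."""
    percentage = sum(rubric[c]["weight"] * scores[c] / max_score for c in rubric) * 100
    feedback = [rubric[c]["phrases"][scores[c]] for c in rubric if scores[c] in rubric[c]["phrases"]]
    return round(percentage, 1), feedback

grade, comments = mark({"argument": 4, "structure": 2, "referencing": 4})
print(grade)          # 85.0
for line in comments:
    print("-", line)
```

Because the scores and phrases are captured as data rather than free text, they can be aggregated per student or per cohort and shared with other systems, as described above.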

To be transformational, the marking and feedback process is dynamic and adaptive. Aggregated feedback will prompt skill development for students, and will highlight suitable resources and support opportunities automatically. The use of AI or machine learning may automate some of the marking and feedback process. Feedback may be constantly updated and optimised based on real-time student data. Academic integrity detection may encompass new technologies and methods, such as AI authoring. This may include automated checks, or the ability to determine document histories or analyse snapshots taken during the production of the assessment submission.

Digital quality management of assessment

When approaching and understanding digital assessment, student assessment data may be stored in ad-hoc, basic tools, such as Microsoft Excel. Sharing data may only take place via email or on-premises network storage. Exam boards may be paper-based, in-person meetings, with manual processes to support them.

You will see the inclusion of digital assessment into the process and management of assessments at the experimenting and exploring stage. Rubrics and standards may be evident, but there may be limited use of technology to support quality control.

At an operational stage, robust quality measures are evident, including peer review or testing. There are likely to be specific software packages through which to record, distribute and analyse academic outcomes. Assessment data will be passed between relevant systems to allow the automation of grade return, feedback collation and management, and exam board processes. Where not explicitly designed out, plagiarism and academic integrity tools may be used.

The embedded stage involves more sophisticated quality control measures, such as digital processes to support external validation, sampling, or accreditation with disciplinary bodies. Assessment data and validated results will be synchronised across multiple systems, and may be used to provide insight into student performance and predictive analytics.

Where digital assessment is transformational, there is advanced use of data analytics and machine learning to monitor and improve assessment quality. Assessment data management from a variety of sources will be fully integrated and seamless.

Conclusions

This work is at its very early stages, and we are looking to road test it with the sector. We have plans to work with colleagues from various universities to further refine this model, with the hope that it provides value to senior leaders, timetabling managers, IT directors and Quality Management leaders.

Please contact me if you are interested in being part of this review. If you have any comments or questions, please add them below.
