As the landscape of tertiary education (UK FE and HE sectors) continues to evolve, the integration of accessibility and accessible practices has become a focal point for educational institutions. In part, this has been driven by the introduction of the Public Sector Bodies Accessibility Regulations in 2018 and by the social shift towards inclusive practice over the last decade.
While the immediate benefits of accessibility in education are often discussed in terms of inclusivity and equal opportunities for students with disabilities, there is another significant advantage that is often overlooked: the upskilling of students in accessibility practices and the wider, long-term effect this has on society.
In the summer of 2023, I worked with South Gloucestershire and Stroud College to investigate embedding inclusive practice, and developing student accessibility skills was highlighted as one potential approach. This blog post aims to suggest how incorporating accessibility into tertiary education not only fosters an inclusive learning environment but also equips students with valuable skills and graduate attributes that they can apply in the wider world.
The Current State of Accessibility in Tertiary Education
Despite legislative frameworks and guidelines advocating for accessibility, such as the Equality Act 2010 in the United Kingdom, there remains a gap in the full implementation of accessible practices within tertiary education. While many institutions have made strides in providing physical accommodations and assistive technologies, there is still room for improvement in embedding accessibility into the curriculum and teaching methodologies.
Upskilling Students in Accessibility Practices
Incorporating accessibility into the curriculum means that students will naturally acquire technical skills related to making digital content more accessible. This includes understanding how to create accessible websites, documents, and multimedia content. These skills are not only beneficial for students pursuing careers in technology and design but are increasingly becoming essential competencies across various sectors.
Ethical and Social Responsibility
Teaching accessibility as a part of tertiary education also instils a sense of ethical and social responsibility in students. Understanding the importance of inclusivity and equal access encourages empathy and a commitment to social justice, qualities that are highly valued in today’s globalised world.
The 2018 legislation focuses only on public sector organisations; however, by embedding the skills and behaviours of inclusiveness in the education of students, those practices will propagate into private and third sector organisations, leading to a fairer, more accessible future for all.
Problem-Solving and Innovation
Accessibility challenges often require creative solutions. By engaging with these challenges as part of their education, students develop strong problem-solving skills. They learn to approach issues from multiple perspectives and to innovate, skills that are transferable to any professional setting.
The employability agenda has become a major discourse in Higher Education especially. The roles and salaries of recent graduates are used to score institutions as producing ‘high quality graduates’. These scores hold great weight in university league tables and have therefore become a major focus of institutions. Developing students’ accessibility skills and behaviours can therefore be a key selling point to potential employers, especially those that are public-facing and outwardly support inclusion.
A shared journey
In many institutions, accessible practices are only just beginning to be developed. Involving students in skill and behaviour development from the outset helps foster a shared, better experience and understanding. Approximately 17% of Higher Education students have some sort of accessibility need (comparable with the general population), so given the greater number of students within an institution compared to staff, this supplies a much larger pool of experiences on which to base recommendations and actions.
The Wider Impact
When students are upskilled in accessibility practices, they carry these skills into the workforce, thereby contributing to a more inclusive society. Whether they enter the fields of technology, healthcare, public policy, or any other sector, their understanding and application of accessibility practices can make a significant impact. For instance, a student who has learned about accessible web design could influence the development of more inclusive digital platforms in their future employment.
Be the change
The integration of accessibility and accessible practices into tertiary education serves a dual purpose. Not only does it create a more inclusive educational environment, but it also equips students with valuable graduate attributes that have far-reaching implications. As educators and policymakers, it is our responsibility to recognise the broader benefits of teaching accessibility and to strive for its comprehensive integration into the tertiary education curriculum.
By doing so, we are not just adhering to legal mandates or ethical principles; we are investing in the future, empowering our students to become agents of change in creating a more inclusive world.
I’m going to ask you a question. I want you to answer the question quickly and without thinking about it. Please be honest, it’s only for yourself. Here goes:
Can you teach someone without caring about them?
Did you have a quick answer? Now you’ve had a moment to reflect on your answer, does it surprise you? I know the rational, nice, answer is that “of course we have to care”, but the instinctive (honest?) answer is often “no”.
Your initial thoughts might have gone to your students or classmates that you might not know or actually like. Out of a class of 10, 30, 200, you may not know everyone’s names, their stories, who they are…
I have been asking quite a few people this same question, and although a few might sit on the fence, the vast majority of first answers were “no”. In fact, there were very few “yes” answers at all.
So what does that mean about education? Are we all uncaring sociopaths, or is it more nuanced?
Normally, after asking that question, I follow up with a conversation and hear similar post-hoc rationalisations such as “what about YouTube videos? They are teaching you things.”
On the surface, this is a great argument, but one I want to counter by suggesting that YouTube how-to videos, and the like, are instructing, not teaching. You may think that is a minor semantic difference and I am being overly pernickety. And maybe I am.
However, I would argue that the fundamental difference between teaching and instruction is assessment. If assessment does not occur, this is instruction/training, and if assessment does occur, this is teaching.
There is some nuance in what we call assessment, granted. However, if there is a genuine requirement to take information or instruction and to process it, e.g. extrapolating it to different scenarios, then that is assessment. You could call it ‘authentic assessment’ if the wind is blowing in that direction and that phrase is still in vogue. Parroting facts via poorly executed multiple choice questions (the longest answer wins, or it is ‘all the above’) is pseudo-assessment and doesn’t count.
Now, I want to be very clear here: the industries of professional development (i.e. training) and education both utilise teaching and instruction/training. The best corporate training isn’t training at all, it is actually teaching. In the same way, the worst teaching is actually training, as the assessments and what’s taught don’t align or have value. There is a place for both, and both will likely exist concurrently without any negative value judgement. But there is a fundamental difference: assessment.
So, coming back to the original question, my hypothesis is that assessment is an act of care about your students and their development. To instruct or train, without the opportunity to test if that knowledge or skill has been developed – i.e. usable in other contexts – is not teaching, and it isn’t an act of caring. You are passing across information to your students without any interest in how it might affect their lives.
Assessment is the activity we use to show we care about not only what our students have learned, but also what change in their actions and behaviours that learning entails.
It’s worth noting that an act of care may not be personalised; but, if you are teaching them, there is an intention that they develop even if you don’t like the person. It’s also important to consider time and space within this hypothesis: I suggest that displacement in time or location does not matter, which means it is possible for asynchronous, online learning to be teaching. What matters is the intention of the educator.
Had I asked the initial question differently – do you teach to make a difference to your students’ lives? – then I think the answers would have been overwhelmingly positive. If my thesis is correct, then you have the mechanism to know how and why you do care. It might help you think about making your assessments more meaningful.
NB: There are no references in here because I wanted to extract thoughts from my brain and get them down on the page without the risk of it being actively viewed through someone else’s lens. I would very much like your thoughts and critique of this thesis. I may return to provide more explanation or rigour in the future, but I also may not.
We released v0.1 of the Digital Assessment Maturity Model in March 2023, and spent the rest of the month talking to people from the sector and refining our initial work. This blog post lays out the version 0.2 of the model and outlines how it may be used.
Current assessment landscape
Assessment processes are complex [Citation needed]. Different universities do things differently and use differing terminology; however, the general process is somewhat similar. Our first step in developing a maturity model was to attempt to define the assessment process, shown below:
Note, there will be considerable institutional nuance, but the process chart aims to capture most of the activities that take place to ensure assessments run effectively.
Our second step was then to identify and define areas of activity within the assessment process. We have called these:
Assessment design
Assessment creation and resourcing
Assessment completion and submission
Marking and feedback
Quality assurance and enhancement
These were chosen because they often require the work of different groups of people within the institution. For example, timetabling and estates teams may only focus on assessment creation and resourcing activities in relation to scheduling on-campus digital exams. It also allows us to potentially tag relevant resources in the future.
We are aware that this is an incredibly simplified process map compared to sector practice, and that each of the five activity areas is inextricably linked to each other – even if that is not immediately apparent from the process map.
Growing digital assessment
Digital assessment has increased in UK Higher Education over the last five years, driven by underlying strategy and greatly accelerated due to the COVID-19 pandemic. Universities and Colleges may have found themselves un- or under-prepared for a mass move to online exams, for example, and innovation practice happened at pace.
In these circumstances, enhancements to digital assessment may not have occurred evenly across all aspects of the assessment process. You may have concentrated efforts on creation and resourcing, or completion and submission, as these are the most visible to students, while giving less attention to digitising the quality assurance and enhancement element.
It is worth noting that while even enhancement across the five activity areas is not strictly necessary, as you progress through the digitisation of assessments, the links between stages can cause bottlenecks in your overall processes where areas are at different points in their maturity journey.
Creating the maturity model
We started from the point that accessibility is something that is baked in to all aspects of the assessment process, and that digital assessment meets all the legal and moral requirements currently in place. While this assumption may be fair, where it is not the case, accessibility should be addressed before any other changes suggested by this maturity model.
We have split the maturity model into four stages, with each stage denoting a significant shift in what you may observe. The stages are:
Approaching and understanding,
Experimenting and exploring,
Operating and embedding,
Transforming and evolving.
The progression from one stage to the next is cumulative; each stage builds from the one before. Within each stage, we have attempted to identify a non-exhaustive list of activities from across the wide spectrum of assessment practice as examples of what you might see at that stage.
The model is not intended to be a pathway you progress along, but rather a self-assessment checkpoint and to give you suggestions for the future of digital assessment in your institution. For many institutions, moving along the Digital Assessment Maturity Model may not be appropriate or favourable. Indeed, achieving ‘transforming and evolving’ may not be compatible with your internal practices or strategies.
Please don’t use this as a checklist to determine your stage; use it as a holistic tool to give a potential direction of travel. There will be some relation between this model and your institutional assessment strategy; however, you should prioritise alignment between your strategy and practices over the model. The most value may come from comparing the maturity stages across all five areas of activity of the assessment process, especially if there are wide discrepancies, and from using them to help update and renew institutional strategies.
Remember that Jisc has a range of experts and experience in supporting the development of digital assessment, purchasing frameworks and services, so please reach out to your Relationship Manager for more information.
Considerations and constraints
We have purposely avoided assessment methods and methodologies, as they are often subjective and pedagogically-driven (don’t worry, we are looking at that elsewhere in Jisc). We have tried to focus on the digital assessment process and the underlying technologies as enablers.
Introducing the model v0.2
We have structured the Digital Assessment Maturity Model so you can scan across each area of activity of the assessment process and get a quick understanding of the differences between each stage of maturity. Each element has a summary title that gives a very basic overview, followed by some examples of what you might see.
In addition, we have created an abridged version.
Based on sector feedback, we made a number of notable changes to the model.
The most obvious change from v0.1 is the move away from a graph to a more tabular representation. While it is anticipated that institutions will move through the model from left-to-right, the graph gave the appearance that ‘rightmost was better’. Some institutions will not find that to be the case.
The second biggest change was the reduction of the model from five to four stages through the amalgamation of ‘operational’ and ‘embedding’. We also took the opportunity to tweak some of the terminology and provide more context for what each stage means. Although this does take us away from existing Jisc maturity model nomenclature, we felt it made the model more relevant and usable.
The third biggest change is the restructuring of information to provide a better coverage of activities and to layer content detail. While we previously had the model and blog posts for two levels of detail, we now have summary headline, what you might see, and the blog posts. This should ensure the model is useful as an in-depth guide and as an ‘at-a-glance’ tool. This may necessitate the rewriting of the blog posts, but that was outside the scope of the current work.
Further changes were made to:
‘Quality management’ was changed to ‘quality assurance and enhancement’ to better match sector terminology.
The assessment process was added, and colour-coded against the model for ease of reference.
Additional emphasis was placed on the examples in each stage being indicative of what you might see, rather than a checklist of things to be achieved.
‘Phases’ was dropped in favour of ‘areas of activity’ when describing the assessment process.
Working alongside colleagues from Jisc and the sector, we will be further refining the model, particularly the terminology and examples at each stage. We especially hope to engage with colleagues in quality enhancement, timetabling and estates to assist with these refinements.
As the blog posts are largely redundant now that a lot more information has been moved into the model, we intend to rewrite the accompanying blog posts. We may repurpose them to provide additional context or next-steps.
If you are interested in being part of this review, please contact us. If you have any comments or questions, please add them below.
When the COVID-19 pandemic hit in early 2020, it forced UK education institutions to shift teaching online. Many of us assumed that it might only be for a few weeks, but then as spring gave way to summer, it was clear that assessments would have to change markedly and quickly. Three years on, we have reached a point where we can reflect on these rapid changes in practice. Maybe you introduced a number of new software services and are struggling to embed their use, or maybe you feel you failed to capitalise on some of the benefits of digitalisation throughout your digital assessment process.
At Jisc, we have begun the process of creating a digital assessment maturity model to allow institutions the chance to assess their current level of maturity and use it to plan their next steps.
We have split the maturity model into five stages, with each stage denoting a significant shift in practice. The stages are:
Approaching and understanding;
Experimenting and exploring;
Operational;
Embedded;
Transformational.
The progression from one stage to the next is cumulative; each stage builds from the one before. Within each stage, we have attempted to identify activities from across the wide spectrum of assessment practice within an institution. After introducing the generalised model, you will find more detailed sections for each phase of the assessment process below.
We didn’t intend for the model to be a linear pathway you progress along, but rather a self-assessment checkpoint for your institution (similar to the individual digital literacies self-assessment tool I developed previously) and a way to give you suggestions for the future of digital assessment in your institution. For many institutions, moving along the digital assessment maturity model may not be appropriate or favourable. Indeed, achieving ‘transformational’ may not be compatible with your internal practices or strategies.
Please don’t use this as a checklist to determine your stage; use it as a holistic tool to give a potential direction of travel. Remember that Jisc has a range of experts and experience in supporting the development of digital assessment, so please reach out to your Relationship Manager for more information.
Considerations and constraints
We have purposely avoided assessment methods and methodologies, as they are often subjective and pedagogically-driven (Jisc are looking at that elsewhere). Therefore, it was important to focus on the digital assessment process and the underlying technologies as enablers.
We identified five phases of the assessment process: design, creation and resourcing, completion and submission, marking and feedback, and quality management. Although institutions may have their own unique way of approaching digital assessment, we anticipate these areas are suitable for all. However, these phases may appear linearly or concurrently, and different teams in your institution may carry out the related activities. We will be following up this post with a series that unpacks each area in more detail, giving specific examples of some of the activities you might see for each phase.
Introducing the digital assessment maturity model
Digital assessments and the digitalisation of the processes surrounding assessment may be rare or non-existent at the approaching and understanding stage. Many activities will be paper-based and manual. Where technology is used to support assessment, it is primarily around administration or data collection. Assessments will be designed to be accessible to all students, which may include the use of digital alternatives, assistive technologies or interventions, such as amanuensis.
Institutions that are actively experimenting and exploring may be using digital assessments in small-scale trials, or looking at alternative assessment modes and methods. This may include automated knowledge-check MCQs (multiple choice questions) and formative assessment. They are beginning to understand the benefits of using technology to assess student learning, and to build the skills and knowledge required to effectively implement digital assessment tools. Institutions may be looking at BYOD (bring your own device) or other policies to ensure students have access to suitable learning technologies.
When digital assessment is operational, it may be used to enhance traditional assessment and to add optimisations into the assessment process. Essays are submitted via a VLE, exams may have moved online, and the use of anti-plagiarism software may be commonplace. There may be processes and policies designed to ensure digital assessments are consistent, reliable and valid. Discrete systems to support assessment may start being integrated and specified in the curriculum. There is suitable infrastructure and resource to support digital assessment, including Wi-Fi, hardware, and software. Staff and students are trained and supported in the use of new technologies for assessment.
Digital assessments become fully integrated and embedded into the curriculum when technology begins to provide adaptive and personalised learning and assessment experiences for students, and data analytics for staff use. Assessment technologies are integrated with various institutional systems, such as the VLE, student record system, and curriculum management system, to allow the creation of a holistic picture of student performance. Analytics may be used to trigger interventions for students at risk of failing. There may be purpose-designed spaces on campus for student assessment.
At the transformational stage, digital assessments are used to support and enable different forms of pedagogic practice, such as problem-based learning, competency-based education and personalised learning. Data generated by assessment is used for continuous improvement of process and performance. Institutions have fully integrated digital assessment tools into their teaching and learning. They are actively and critically exploring new technologies and approaches for assessment, and are beginning to use predictive analytics to better understand student learning.
Digital assessment design
When approaching and understanding assessment design, most assessment will be simple and often paper-based. The design process is often manual. Digital assessment may sometimes be used to supplement traditional assessment methods.
Experimenting with digital assessment may introduce additional modes of assessment, such as MCQs, and may use existing technologies, such as VLE tools, to introduce elements of question randomisation and auto-grading of basic assessments. The assessment process may involve increased digitalisation. The use of digital assessment may be focused on low-/no-stakes formative assessment initially. Technology use is primarily aimed at replicating existing methods of assessing.
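As an aside, the mechanics of option randomisation and auto-grading are simple enough to sketch. The Python below is a minimal, hypothetical illustration rather than any particular VLE's implementation; the question identifiers and the one-mark-per-correct-answer scoring are my own assumptions.

```python
import random

def shuffle_options(options, rng):
    """Return a per-student ordering of a question's answer options."""
    shuffled = options[:]  # copy, so the canonical order is untouched
    rng.shuffle(shuffled)
    return shuffled

def auto_grade(responses, answer_key):
    """Score one mark per correct response; no negative marking assumed."""
    return sum(1 for qid, given in responses.items()
               if answer_key.get(qid) == given)

# Illustrative data — not drawn from any real assessment.
answer_key = {"q1": "B", "q2": "D"}
responses = {"q1": "B", "q2": "A"}
score = auto_grade(responses, answer_key)  # 1 correct out of 2
```

Seeding the random generator per student would make each student's option order reproducible, which matters for re-sits and marking reviews.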
As things become operational, elements of personalisation may become apparent, especially to support additional needs. There may be some level of automation of the design process through the use of rubric builders or shared resources. Disciplines may receive additional software to support subject-specific requirements, such as mathematical notation. Assessment methods or approaches may be augmented through the use of technology. Specific technologies may be used beyond the VLE, such as in-class MCQ services.
The shift to embedded digital assessment will see the design process informed by data analytics and may include elements of machine learning to determine the optimum assessment strategy. There will be increased personalisation designed into assessments. The use of automated knowledge and application checks as formative assessment will be extensive, giving students real-time feedback on their performance. The assessment methods or approaches may be significantly modified through the use of technology.
In a transformational scenario, assessments are fully dynamic, constantly updated, and personalised to each student. Machine learning algorithms may refine the assessment design to optimise the assessment experience. Assessment design will be integrated into a larger digital learning ecosystem, made of a range of specific services to enable variety and flexibility for staff and students. Assessments will assess a wide-range of skills and applications of knowledge. Assessment design is likely significantly reimagined through the use of technology.
Digital assessment creation and resourcing
At the level of approaching and understanding, assessments are often limited to a small number of assessment types and are manually created. The timetabling and administration of assessments is a manual, paper-based process, relying on the labour of academics and administrators. There may be a lack of consistency in process and practice, potentially leading to errors and scheduling conflicts.
When experimenting and exploring digital assessment creation and resourcing, you may begin to see the use of standardised templates and question banks. There may be some level of automation of the creation process, and the integration of multimedia or interactive elements. Spreadsheets or calendars may be used to record and plan exam and assignment timetables. There is little integration of these systems with others relating to the assessment process.
As digital assessment creation is operationalised, the creation process is more automated and may offer rule-based differentiation or personalisation of assessments, such as the use of adaptive release in VLEs. There will begin to be a level of integration between timetabling, assessment management, VLE, curriculum management, or student information systems. Timetabling is fully automated but may rely on some human ‘tidying’. Assessment load and bunching can be identified and acted upon. Additional resources will be available to students to support them to develop their assessment skills, often within the assessment system itself.
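Rule-based adaptive release of this kind can be thought of as a set of conditions that must all pass before an assessment becomes available. The Python sketch below is purely illustrative; the rule structure and student-record fields are assumptions, not any VLE's actual configuration format.

```python
def can_release(student, rules):
    """Release the assessment only when every rule passes for this student.

    Each rule is a predicate (a function returning True/False) over the
    student record, mirroring 'adaptive release' conditions in a VLE.
    """
    return all(rule(student) for rule in rules)

# Hypothetical rules: pass a formative check and complete an induction module.
rules = [
    lambda s: s["formative_score"] >= 40,
    lambda s: "induction" in s["completed"],
]

student = {"formative_score": 55, "completed": {"induction"}}
released = can_release(student, rules)  # True: both conditions met
```

Expressing each condition as an independent predicate keeps rules composable, which is broadly how VLE adaptive-release criteria (date, score, group membership) combine in practice.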
When digital assessment is embedded, integrated timetabling, assessment and student information systems will automatically generate a personalised, flexible and adaptive assessment. Students may have the opportunity to choose the mode of assessment to evidence meeting learning outcomes. The creation process will use data from historical assessments, student performance and demographic data to optimise the creation and resourcing of assessment.
Digital assessment creation and resourcing may be transformational when assessments are fully dynamic. This means the modes, scheduling, and resourcing of assessments may be variable and flexible. Students may be able to complete assessments according to their own schedules, rather than follow the institution’s calendar. Timetabling may be fully automated, actively avoiding bunching, and may allow personalisation through the booking of assessment windows. Curriculum data from other systems may be used to automatically generate assessment points.
Digital assessment completion and submission
There may be no digital element at the approaching and understanding stage of the completion and submission process. While students may use word processors or other software to generate their work, submission may be paper- or storage media-based. There is a limited use of technology to manage and track submissions; this is likely paper-based or a standalone system, such as a spreadsheet. In some instances, assessments are printed out for marking, feedback, and archival purposes.
At the experimenting and exploring stage, digital platforms are used for submission, such as email, Google Forms, OneDrive, or basic VLE tools — but this is not consistent. Managing and tracking of submissions is carried out within the system, and may provide students with a digital receipt or record of submission.
When digital assessment is operational, there is use of digital submission of assessments for most modes. Often there will be specialist assessment tools, such as Turnitin or Wiseflow. There may be a level of automated plagiarism checking or grading present, as well as the ability to provide instant feedback (especially in the case of MCQs). Late submissions are automatically flagged, and there may be a notification system for staff and students. There may be some level of automation of reasonable adjustments, but this may rely on significant manual intervention.
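The automatic flagging of late submissions mentioned above amounts to a timestamp comparison. The Python below is an illustrative sketch only; the zero-minute grace period and the field choices are my assumptions, not any vendor's documented behaviour.

```python
from datetime import datetime, timedelta, timezone

GRACE = timedelta(minutes=0)  # assumed: no grace period by default

def flag_late(submitted_at, deadline, grace=GRACE):
    """Return True when a submission arrives after the deadline plus grace.

    Using timezone-aware datetimes avoids off-by-an-hour errors around
    daylight saving changes, a common pitfall for UK assessment deadlines.
    """
    return submitted_at > deadline + grace

# Illustrative timestamps.
deadline = datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc)
on_time = datetime(2023, 5, 1, 11, 59, tzinfo=timezone.utc)
late = datetime(2023, 5, 1, 12, 5, tzinfo=timezone.utc)
```

In a real system this check would sit behind the submission endpoint, with the flag driving the staff/student notifications described above.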
The embedded level is similar to operational, but practice is more consistent. The digitalisation of non-traditional assessments, such as videoing presentations or scanning artwork, may be more commonplace, allowing digital marking and feedback. There may be more robust policy and processes to govern submitted media and its archival.
Transformational digital assessment may include the use of analytics and data visualisation to explore long-term trends and personalise student support. Reminders or hints could be released based on student activity, e.g. flagging students who haven’t accessed content or module handbooks. The process for providing additional support or reasonable adjustments may be fully automated, based on student needs, assessment type, and support available. There may be some level of integration between assessment systems and the tool used to complete the assignment.
Digital marking and feedback
The marking and feedback for assessments is manual at the approaching and understanding stage. Staff will grade assignments and provide hand-written feedback to students. The feedback process is often time-consuming and may not be consistent across teachers or assessments.
At the experimenting and exploring stage, some automation may be introduced into the process. Multiple choice questions may have pre-determined grading and automated per-question or generalised feedback. Feedback begins to be delivered in digital formats. Academic staff have access to suitable equipment to support digital marking, such as two monitors or input devices.
At an operational stage, personalised, digital feedback is provided for each student. There may be some element of aggregation and analysis of marking and feedback to provide both longitudinal development for individual students, and cohort analysis. Academic staff have access to multimedia creation resources in order to provide marking and feedback in the most appropriate manner. For group assessments, it may be possible to mark and provide feedback on whole-group and individual contributions within the system. Rubrics will be used, but may not be marked against digitally. To support staff, assessment content may be converted to different media types to best support their working practices or additional needs. There may be provision for second-marking, multiple markers, team marking or batch marking within cohorts.
When digital assessment is embedded, analytics and predictive models may be incorporated to provide adaptive, automated feedback. Feedback may be aggregated for individual students to allow them to note patterns. Rubric use is systematic and widespread, with grading and associated feedback digitally captured and shared with other systems. Group assessments may allow peer assessment or weighting. The use of standard feedback phrases may be partially automated or suggested. Second marking, multiple markers, team marking, or batch marking may be fully automated based on pre-set criteria.
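The "suggested standard feedback phrases" idea can be sketched as a lookup from rubric scores to a phrase bank. The criteria, threshold, and phrases here are all made up for the sake of the example:

```python
# Hypothetical phrase bank keyed on (criterion, band).
PHRASE_BANK = {
    ("structure", "low"): "Consider using clearer signposting between sections.",
    ("structure", "high"): "The argument is well organised and easy to follow.",
    ("referencing", "low"): "Check citations against the required referencing style.",
    ("referencing", "high"): "Sources are cited accurately and consistently.",
}

def suggest_feedback(rubric_scores, threshold=60):
    """Map each rubric criterion score to a suggested stock phrase."""
    suggestions = []
    for criterion, score in rubric_scores.items():
        band = "high" if score >= threshold else "low"
        phrase = PHRASE_BANK.get((criterion, band))
        if phrase:
            suggestions.append(phrase)
    return suggestions

print(suggest_feedback({"structure": 72, "referencing": 45}))
```

The key word in the stage description is *suggested*: a system like this drafts phrases for the marker to accept, edit, or discard, rather than sending them to students unreviewed.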
To be transformational, the marking and feedback process is dynamic and adaptive. Aggregated feedback will prompt skill development for students and will highlight suitable resources and support opportunities automatically. The use of AI or machine learning may automate some of the marking and feedback process. Feedback may be constantly updated and optimised based on real-time student data. Academic integrity detection may encompass new technologies and methods, such as AI authoring. This may include automated checks, the ability to determine document histories, or analysis of snapshots taken during the production of the assessment submission.
Digital quality management of assessment
When approaching and understanding digital assessment, student assessment data may be stored in ad-hoc, basic tools, such as Microsoft Excel. Sharing data may only take place via email or on-premises network storage. Exam boards may be paper-based, in-person meetings, with manual processes to support them.
At the experimenting and exploring stage, digital assessment begins to feature in the processing and management of assessments. Rubrics and standards may be evident, but there may be limited use of technology to support quality control.
At an operational stage, robust quality measures are evident, including peer review or testing. There are likely to be specific software packages through which to record, distribute and analyse academic outcomes. Assessment data will be passed between relevant systems to allow the automation of grade return, feedback collation and management, and exam board processes. Where not explicitly designed out, plagiarism and academic integrity tools may be used.
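At its simplest, "passing assessment data between relevant systems" can mean nothing more exotic than a well-defined export format. As a hypothetical sketch (the field names and records are invented), a grade-return export might look like:

```python
import csv
import io

def export_grades(records, fieldnames=("student_id", "module", "grade")):
    """Serialise validated grade records to CSV for transfer between systems."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(fieldnames))
    writer.writeheader()
    for record in records:
        writer.writerow({k: record[k] for k in fieldnames})
    return buffer.getvalue()

records = [
    {"student_id": "s1001", "module": "EDU101", "grade": 68},
    {"student_id": "s1002", "module": "EDU101", "grade": 55},
]
print(export_grades(records))
```

Real integrations are more likely to use an API or a standard such as LTI, but the principle is the same: agree the fields once, then automate the handover.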
The embedded stage involves more sophisticated quality control measures, such as digital processes to support external validation, sampling, or accreditation with disciplinary bodies. Assessment data and validated results will be synchronised across multiple systems, and may be used to provide insight into student performance and predictive analytics.
Where digital assessment is transformational, data analytics and machine learning are used in advanced ways to monitor and improve assessment quality. Assessment data management from a variety of sources will be fully integrated and seamless.
This work is at its very early stages, and we are looking to road test it with the sector. We have plans to work with colleagues from various universities to further refine this model, with the hope that it provides value to senior leaders, timetabling managers, IT directors and Quality Management leaders.
Please contact me if you are interested in being part of this review. If you have any comments or questions, please add them below.
In education, we often hear the same issues cropping up again and again: lecture capture, learning analytics, internationalisation, and so on. These are all important, and we often approach them thematically, within their own bubbles.
In July 2018, I travelled out to the USA with the generous funding of UCISA to attend the Digital Pedagogy Lab 2018 (DPL18). This blog post will be some of my reflections about what I took part in, learned, and applied.
In July 2018, Sue Watling (a former colleague) and Lee Fallin published a fantastic resource based on the Home Office’s accessible design work. The main output was a wonderfully simple poster with the dos and don’ts of accessible design, specifically in an educational context.
Coincidentally, my colleague Elaine Swift brought to my attention the NHS Design Principles. These are intended for both graphic design and service design. They struck a chord with me, and I immediately thought they would, with a few tweaks, make a great set of overarching principles for Sue and Lee’s work.
So, I put them together. Please feel free to comment, remix, and help build on it together. But most of all, share it, even if it isn’t perfect. The more people who know about accessibility, and act in accessible ways, the better.
On Tuesday we delivered the first session of our new Associate Lecturer Programme to a group of PGRs, technicians, managers and other staff members who don’t ‘formally’ teach but undertake teaching-like activities. Leaving that aside for a moment, I just want to share one activity with you.
[First draft – this has been ruminating for a while, so I thought I would put it out there for your thoughts]
In 1997, Stephen Jay Gould, a paleontologist and evolutionary biologist, proposed a solution for a problem that had caused turmoil, disruption, bloodshed and even death throughout the ages: how to reconcile the conflict between science and religion.
Gould defined science and religion as magisteria or “domains where one form of teaching holds the appropriate tools for meaningful discourse and resolution” (1999). In effect, science defines the natural world, and religion defines the moral world. And thus, the two are never to meet. Gould coined the term ‘Non-overlapping magisteria’ (or shortened to NOMA).
Richard Dawkins, in his 2006 book The God Delusion, pretty efficiently picks holes in Gould’s ideas, as did many others, such as Paul Kurtz and Ursula Goodenough. These are both literal and metaphorical holes puncturing the divide between the magisteria. As a scientist and humanist, I must nail my colours to the mast, yet I can see many ways in which both realms could lay claim to the same ideas.
It’s a tough one, but I am inclined to think that NOMA just didn’t quite work out. Scientific experiment sometimes requires belief, in the same way that religion looks for evidence. Science should be concerned (and could bloody well do it better) with the ethical and moral implications of its activities, and religion attempts to explain natural phenomena (IMHO wrongly).
In the last four paragraphs I have introduced the idea of NOMA, and then proceeded to claim it largely debunked. So, where am I going?
I don’t think I could ever claim to solve the thorny issue of religion and science’s relationship, but I do think in different contexts, the idea of non-overlapping magisteria could have some traction.
Over the last year I have become increasingly frustrated with Twitter. While gaslighters, TERFs, Nazis and trolls try to control the narrative, I try to close my eyes and pretend it isn’t happening. However, one thing I have opened myself up to is edtech companies selling me their ‘innovative’ services.
My goodness they are annoying. [You will all have many, many examples, so feel free to link to them in the comments below – where I will be harvesting your personal data]
It got me thinking. My job is to help academics to use technology better to support their teaching and their students’ learning. I use technology every day, and I advocate its use. I am paid to do that. However, I am becoming more and more uneasy with this status quo.
I think I can just about manage to frame technology as a tool to enable educators to improve the student experience. Just. But it is just a tool – when I put on my educator hat (pah, it never comes off), technology is just one of the ways I will try to engage my students.
However, in my dealings with edtech businesses, I am finding I am increasingly turned off by their approaches to education. They have adapted their slick sales machines to use the language of education; to infect education with their own phraseology.
Hands up if you’ve heard talk of synergies, solutions, paradigm-shift, next generation, bespoke, and innovation, innovation, f-ing innovation.
I propose that Education and Technology (specifically edtech) are non-overlapping magisteria. The divide should not be porous.
Edtech does harm to individuals by aggregating them into an amorphous whole, treating them in ways we wouldn’t treat them one-by-one. People do not equal data #cspi18
Lights appear strung across shopping streets like Victorian washing lines. The same four songs are playing in every single shop. Retail workers have that glazed-eye expression from hearing the same four songs on a loop all day. Everyday packaging suddenly is adorned with pictures of holly leaves. Everybody is ‘festive’.