Digital assessment is becoming increasingly widespread, especially since the sudden switch to remote learning earlier this year. But what’s the role of digital assessment in teaching today, and how do teachers really feel about it? Do they trust the role of AI in English assessment? Let’s find out!
This summer, we hosted a series of weekly webinars called Pearson English Assessment Summer Sessions. Our goal was to uncover ways to use Pearson English Proficiency assessments, so teachers can help their students maximize their potential. We explored a variety of topics, including the role of artificial intelligence (AI) in English assessment, and asked participants to answer some survey questions.
Now, we’d like to give you some insight into what fellow teachers think about digital assessment and automated scoring.
Who participated in the Assessment Summer Sessions?
First, here’s a quick overview of the webinar participants. They came from all over the world: the top three countries were Mexico, Pakistan, and Georgia. We also had teachers tuning in from Ukraine, Ecuador, Indonesia, India, Russia, Poland, and Romania, among others.
Over 75% of the webinars’ attendees were teachers, professors, and lecturers, and 65% of them run an online English program. 76% of respondents said they use digital assessment, and 22% use digital assessment only.
Now, let’s take a look at the most striking results of the surveys!
1. 88% of respondents trust the reliability of AI-scored assessment – only 12% don’t
It’s thrilling to see that for most teachers, the reliability of AI-scored tests is no longer a question.
Just a few years ago, there may have been doubts about the role of AI in English assessment and the ability of a computer to accurately score language tests. But today thousands of teachers all over the world use automated language tests to assess their students’ language proficiency.
For example, Pearson’s suite of Versant tests has been delivering automated language assessments for nearly 25 years. Since its launch in 1996, over 350 million tests have been scored. The same technology is used in Pearson’s Benchmark and Level tests.
So what makes our automated scoring system so reliable?
We use huge data sets of exam answers and results to train our machine learning technology to score tests the same way that human markers do. This way, we’re not replacing human judgment; we’re teaching computers to replicate it.
Of course, computers are much more efficient than humans. They don’t mind monotonous work and don’t make mistakes (the standard marking error of an AI-scored test is lower than that of a human-scored test). So, we are able to get unbiased, accurate, and consistent scores.
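To make the idea concrete, here is a minimal sketch of training a model to replicate human marks. This is not Pearson’s actual engine: the responses and scores are invented, a toy word-count feature stands in for the rich linguistic and acoustic features a real system would use, and a one-variable least-squares fit stands in for large-scale machine learning.

```python
"""Toy sketch: fit a model to human-awarded scores, then score new responses.
All data, the feature, and the model are illustrative assumptions only."""

def feature(response: str) -> float:
    # A real scoring engine would use many linguistic/acoustic features;
    # word count is a deliberately simplistic stand-in.
    return float(len(response.split()))

# (response, score a human marker gave it) -- invented examples
training = [
    ("yes", 1.0),
    ("the cat sat on the mat", 2.0),
    ("I think online tests are convenient and fair for most learners", 4.0),
]

xs = [feature(r) for r, _ in training]
ys = [s for _, s in training]

# Ordinary least squares for y = a*x + b
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def machine_score(response: str) -> float:
    """Score a new response the way the human training data suggests."""
    return a * feature(response) + b
```

Once trained, the model scores every response by the same rule, which is where the consistency of automated marking comes from.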
To learn more about AI-based scoring, watch the recording of our webinar about the role of AI in education with David Booth.
2. The top benefits of automated scoring are speed, reliability, flexibility, and freedom from bias
When we asked the participants of our webinar series to list all the benefits of automated scoring, the top results were as follows:
- 37% cited speed as an important benefit
- 28% said reliability was important to them
- 27% said flexibility was a major advantage
- 25% selected freedom from bias as a top benefit
We’ve already discussed reliability, so now, let’s dig deeper into the other three benefits.
The main advantage that computers have over humans is that they can process complex information very quickly. This means digital assessments like Versant, Benchmark and Level can provide an instant score turnaround. We are able to get accurate, reliable results within minutes. And that’s not just for multiple-choice answers, but complex responses, too.
The benefit for teachers and institutions is that hundreds, thousands, or even tens of thousands of learners can take a test at the same time and instantly receive a score. The sooner you have scores, the sooner you can make placement decisions, benchmark a learner’s strengths and weaknesses, and adjust learning to drive improvement and progress.
The next biggest benefit of digital assessment is flexible delivery models. This has become increasingly important this year, when the COVID-19 pandemic forced schools around the world to shut their doors.
Accessibility became key: how can your institution provide access to assessment for your learners, if you can’t deliver tests on school premises?
The answer is digital assessment.
Versant tests are web-based and can be delivered online or offline, on-site or off-site. All test-takers need is a computer and a headset with a microphone. They can take the test anywhere, any time of day, any day of the week.
At Pearson, we’re constantly expanding our portfolio of digital assessments. We recently launched English Benchmark Young Learners, a remote assessment product for young learners, alongside Benchmark and Level for teenagers and adults. English Benchmark Young Learners is a tablet-based, gamified test that is fun and provides a rewarding experience for learners. All of these tests come with instant scoring and rich data analytics.
To learn more about English Benchmark Young Learners, watch the recording of our webinar with Darren Nicholls from August 11th!
Free from bias
Impartiality is another important benefit of AI-based scoring. The AI engine we use to score our Versant, English Benchmark Young Learners, and our other digital proficiency tests is completely free from bias. It doesn’t get tired. It doesn’t have good and bad days like human markers do. And, it doesn’t have a personality. While some human markers are more generous and others are stricter, AI is always equally fair. Thanks to this, automated scoring provides consistent, standardized scores, no matter who’s taking the test.
If you’re testing students from around the world, with different backgrounds, they will be scored solely on their level of English, in a perfectly objective way.
3. Additional benefits of automated scoring are security and cost
The respondents to our survey highlighted two more benefits of automatic scoring: 16% said security was an important factor and 12% said the same of cost. Let’s take a closer look.
Digital assessments are more difficult to monitor than in-person tests, so security is a valid concern. That’s why we’ve launched remote proctoring within the Versant portfolio.
Remote proctoring adds an extra layer of security, so test administrators can be confident that learners taking the test from home don’t cheat on their Versant tests.
Our software captures a video of test takers. The AI detection system automatically flags suspicious test-taker behavior. Test administrators can access the video any time for audits and reviews, and easily find suspicious segments which are highlighted by our AI.
Here are a few examples of suspicious behavior that our system might flag:
- A different face or multiple faces appearing in the frame
- Camera blocked
- Navigating away from the test window or changing tabs multiple times
- Test taker moving out of camera view
- More than one person in camera view
- Looking away from the camera multiple times
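The flagging rules above can be pictured as a simple pass over per-second video analysis results. The sketch below is an invented illustration, not Pearson’s detection system: the frame data, thresholds, and flag labels are all assumptions chosen to mirror the bullet list.

```python
"""Illustrative sketch of rule-based proctoring flags (not Pearson's system).
Each frame record is (second, face_count, looking_away) from a hypothetical
face-analysis step; the rules and threshold are invented for illustration."""

def flag_segments(frames, look_away_limit=3):
    """Return (second, reason) flags for suspicious segments."""
    flags = []
    consecutive_look_aways = 0
    for second, face_count, looking_away in frames:
        if face_count == 0:
            # Covers both "camera blocked" and "test taker out of view"
            flags.append((second, "camera blocked or test taker out of view"))
        elif face_count > 1:
            flags.append((second, "more than one person in camera view"))
        if looking_away:
            consecutive_look_aways += 1
            if consecutive_look_aways == look_away_limit:
                flags.append((second, "looked away from camera multiple times"))
        else:
            consecutive_look_aways = 0
    return flags

# Invented five-second clip: a second person appears at t=1,
# then the test taker looks away for three seconds in a row.
frames = [(0, 1, False), (1, 2, False), (2, 1, True), (3, 1, True), (4, 1, True)]
suspicious = flag_segments(frames)
```

A reviewer would then jump straight to the flagged seconds instead of watching the whole recording, which is what makes AI-highlighted segments useful for audits.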
Last but not least, our survey respondents mentioned the cost of automated scoring solutions like Versant, Benchmark and Level as a benefit. Indeed, automated scoring can be a more cost-effective way of marking tests, primarily because it saves time.
Pearson English proficiency assessments are highly scalable and don’t require extra time from human scorers, no matter how many test-takers you have.
Plus, there’s no need to spend time and money on training markers or purchasing equipment.