Conceptualizing Sherpath, an adaptive education system for nurses, doctors and allied health professionals, was a key step in Elsevier’s transition from “a legacy publisher to a ‘big data’ and analytics company,” recalled Dr. John Danaher, President of Elsevier Education. Determined to create “something fundamentally new,” Dr. Danaher put together a multidisciplinary team, closeted them in an office in New York City and charged them with “doing the opposite of conventional product development.” They were to work together in a fast and focused way, like a tech startup.
The result: they succeeded in creating a first-of-its-kind learning system — and learned lessons of their own along the way.
An unconventional team
Elsevier had been partnering for six months with an outside vendor when Dr. Danaher realized that was not the way to build their product. “We agreed that Elsevier needed to develop the system and own all aspects of it. We could not be dependent upon an external vendor for what would eventually become a ground-breaking product,” Dr. Danaher said. “We let go of the vendor, and I took all of our internal ‘experts’ who said they already knew what needed to be built and sat them on the sidelines.”
Instead, Dr. Danaher created a team that cut across organizational lines, made up of “a hodgepodge of senior and junior people, many with piercings and tattoos — people less formed in their thinking and in their ways, who could look at the project more creatively,” Dr. Danaher said. “No one would have looked at this team and said, ‘Oh, they’re working for a publisher.’”
From the outset, Dr. Danaher made it clear that everyone on the team would have an equal voice. “I stressed that we would do this in a collaborative, collegial fashion,” he said. “And by ring-fencing them off from their daily job activities, we ensured that they lived and breathed this project.”
Understanding goals, outcomes and learning styles
Ensconced in their own space in Elsevier’s New York City office, the team’s first focus was on the user. In their earliest group discussion, the team members asked themselves, “What do these students care about?” recalled Aaron Zeckoski, VP of Software Engineering for Healthcare Education at Elsevier. The logical response was that they cared about getting good grades.
But Dr. Danaher, who had run nursing and allied health programs for many years before coming to Elsevier, challenged this assumption. He believed the real driver was career placement—students wanted a good, well-paying job.
“The idea that students want a good job doesn’t seem like a huge surprise, but we didn’t realize it until we stepped back and looked at the bigger picture,” Zeckoski acknowledged. “And we agreed that ‘good’ was a key word; it wasn’t just about getting a job.”
Next, team members asked, “What would the student’s main priority be for a learning system?” Dr. Danaher noted, “From a traditional publisher’s perspective, we thought that the quality and depth of our content would be most important. But we were wrong. Convenience trumped quality.”
Further discussion showed why this made sense. Because of the global shortage of nursing and allied health professionals, the student base for the learning system is often nontraditional—many are older, divorced with children, at a lower socio-economic level and possibly working while attending school. “Most were likely to have multiple, competing priorities,” Zeckoski said. Having a system that allowed them to learn in discrete time slots, when and where they could, was critical. “Essentially, we found that the best content doesn’t matter if no one can find it or integrate it into their daily study and teaching workflows,” he said.
The team also looked at the demographics of the teachers, Zeckoski explained, “because that matters too; students often don’t learn without some guidance. So whatever solution we came up with had to be good for the teacher as well as the students.” Teachers in nursing and allied health average around 50 years old, older than faculty in other disciplines, the team found. And although the prevailing belief about new products today is that everything has to be mobile, “we suspected that wasn’t necessarily true for our demographic,” he said.
To test this hypothesis, as well as various prototypes of the system, the team selected a representative sample of 280 students and 20 instructors and did phone interviews, surveys, one-on-one sessions and lab testing.
They asked students and faculty, “How do you normally search?” “What kind of machines do you have?” “Do you use a tablet?” “Do you use a mobile device?” They also asked about problems that might interfere with using the system. Key issues included “I don’t have enough time” and “There’s too much going on.”
What emerged from analyses of the responses was that the system needed to be “simple, seamless and supportive,” according to Zeckoski.
“‘Simple’ was the first priority. People said, ‘I’m a student, I’ve got a bunch of classes, I’m going to use this for some of them, but not all. So, I need to be able to launch it and use it right away,’” Zeckoski explained. “Teachers said, ‘I’m going to pop in and out of this system. I’m not going to use it every single minute of every day, so it better be easy to use.’”
“Seamless” was a priority because students didn’t want to use existing systems on campus for some assignments, then have to turn to another system for others. In effect, Zeckoski said, “They wanted to go to a single device and see their work magically show up inside it.”
The same was true of teachers. In one survey, the teachers were given eight items to rank in order of greatest importance. The item that emerged consistently as number one was that a new learning system had to integrate with what they were already using.
“Supportive” meant the system had to be adaptable,” Zeckoski said. “It couldn’t say, ‘Here’s a book, go read chapters one through three.’ And if a student is weak in certain areas, it had to be able to address that, and in a time frame specified by the user. If the user had 20 minutes or an hour, we had to give them a lesson they could finish in that time frame. They didn’t want to have to stop in the middle of a lesson that was too hard for them.”
Personalizing content delivery
Armed with insights gathered through the analyses of student and teacher survey responses, the team had to figure out how best to deliver content in a personalized way. Some felt it was time to go back to the outside vendor who could provide an adaptive search engine. Others believed there was no need to undertake that expense when the product still might not work in a way that best met the users’ needs.
“So we did an experiment,” Zeckoski recalls. The team took book content and randomly presented it to students in one of four ways:
- As a straight PDF;
- In a team-developed module with images, videos and simulations;
- In the same type of module but with some adaptivity—i.e., recognizing when the student was struggling and offering supplemental material; and
- In the same module with a different type of adaptivity — i.e., at the end of a section, a question appeared asking if the student was ready to go on to the next section, and if not, they had the option of going back and reviewing the previous chapter before proceeding.
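The randomized assignment behind an experiment like this can be sketched as follows; the condition labels and the deterministic per-student seeding scheme are my own illustration, not a description of the team’s actual tooling:

```python
import random

# Hypothetical labels for the four presentation formats
CONDITIONS = [
    "plain_pdf",             # straight PDF of the book content
    "rich_module",           # module with images, videos and simulations
    "adaptive_supplemental", # offers extra material when the student struggles
    "readiness_check",       # asks whether the student is ready to move on
]

def assign_condition(student_id: str, seed: int = 0) -> str:
    """Deterministically assign a student to one of the four conditions,
    so the same student always sees the same format."""
    rng = random.Random(f"{seed}:{student_id}")
    return rng.choice(CONDITIONS)
```

Seeding per student keeps the assignment stable across sessions while remaining effectively random across the cohort.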
The plain PDF had the lowest score, the module with visuals came next, and the module that offered supplemental content scored second highest. But the highest score went to the module that simply asked whether the student was ready to move ahead. “It’s a ‘no-fail’ experience,” Zeckoski said, “because if they said ‘no,’ they could review the material until they got everything right.”
“The key take-away was that there was no expensive adaptive engine involved,” he emphasized. “Just providing some support for the students and helping them feel like they had options seemed to have made a difference in their mental state and, consequently, their quiz scores, meaning they learned more. And we posited that if they learned more, they would eventually get better grades, they’d pass their licensing boards and they’d get better jobs.”
Just nine weeks elapsed from the team’s first group meeting through the tests that determined that Sherpath could be a successful product without an outside adaptive search engine. Soon after, a prototype was produced for pilot testing.
Fine-tuning the system
The purpose of the pilot was to get the “MVP, or minimum viable product functionality, into the hands of real students and instructors as quickly as possible for use in actual courses,” explained team member and Product Manager Chelsea Newton. The team was particularly eager to fine tune the functionality of Sherpath’s adaptive Quiz Coach, which Newton said is designed “to help students pass every test standing between them and licensure” — the instructor review exams, their midterms and finals, and ultimately the NCLEX (National Council Licensure Examination) or USMLE (United States Medical Licensing Examination).
Quiz Coach is based on a recommendation system similar to that of Amazon or Netflix, except instead of recommending products or movies, Quiz Coach recommends content that can help students understand specific concepts, Newton explained. For example, for the topic “oxygenation,” Quiz Coach presents a series of learning objectives. Meeting those objectives requires an understanding of other topics, such as the structure and function of the respiratory system or alterations in oxygenation — all of which appear as recommendations.
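The core of that mapping can be sketched as a prerequisite lookup: for a given topic, recommend the supporting topics the student has not yet mastered. The data structure and function names here are assumptions for illustration, not Quiz Coach’s real internals:

```python
# Hypothetical prerequisite graph: topic -> topics needed to meet its objectives
PREREQS = {
    "oxygenation": [
        "respiratory system structure and function",
        "alterations in oxygenation",
    ],
}

def recommend(topic: str, mastered: set[str]) -> list[str]:
    """Recommend prerequisite content the student has not yet mastered."""
    return [t for t in PREREQS.get(topic, []) if t not in mastered]
```

A student who has already mastered “alterations in oxygenation” would be pointed only at the remaining prerequisite.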
Quiz Coach also is adaptable on the teacher side. It provides test questions or, if instructors prefer to use their own questions, they can create a blueprint that also serves as a study guide. They can tell students, for example, “My next test will have 100 questions; 20 will be on oxygenation, 40 will be on some other topic, and so on.” They can then save that blueprint to the students’ Quiz Coach so students know in advance the percentage of questions that will deal with specific test topics, enabling them to focus more on areas that make up a larger percentage of the questions, or areas they don’t immediately feel confident enough to answer.
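A blueprint of this kind reduces to per-topic question counts that the system can turn into the percentages students see. This is a minimal sketch of that arithmetic, with illustrative topic names:

```python
def blueprint_percentages(counts: dict[str, int]) -> dict[str, float]:
    """Convert an instructor's per-topic question counts into the
    percentage of the test that each topic represents."""
    total = sum(counts.values())
    return {topic: 100 * n / total for topic, n in counts.items()}
```

For the example above, 20 oxygenation questions out of 100 would surface to students as 20% of the test.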
As students worked through Quiz Coach and other parts of Sherpath during the pilot, they generated a list of “weak topics” that needed remediation. But the team was surprised to find midway through the pilot that remediation rates were very low. “Topics were being added to the list,” Newton said, “but very few were being removed.” Were students not spending enough time on weak topics? Were they spending more time studying for their next exam instead?
Analyses of time spent in each section found “pretty much a 50-50 split, so students were spending a fair amount of time in Quiz Coach overall, and that time was split evenly between remediating weak topics and studying the current week’s topics,” Newton said. “Clearly, time spent wasn’t the problem.”
The team members had different opinions about why this was happening, so they investigated further. They found that students were generating “a huge list” of weak topics but selecting only the first few topics on the list for remediation — and since the list wasn’t generated in any particular order, students weren’t necessarily spending time on the topics that needed the most work.
“Bottom line, we diagnosed it as a user interface problem,” Newton said. “We changed the interface so that weak topics were in priority order. We also fine-tuned the recommendation system to make it more accurate, and ensure that what was offered would be most helpful in improving remediation rates.”
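The fix amounts to sorting the weak-topic list so the weakest areas surface first. A minimal sketch, assuming each topic carries a quiz score where lower means weaker (the scoring scheme is an assumption, not Sherpath’s documented logic):

```python
def prioritize_weak_topics(scores: dict[str, float]) -> list[str]:
    """Order weak topics so the lowest-scoring (weakest) come first,
    putting the topics that need the most work at the top of the list."""
    return sorted(scores, key=lambda topic: scores[topic])
```

Students who only work through the first few items then hit their genuinely weakest topics, rather than an arbitrary selection.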
Getting ready to launch
After adjusting other elements of the system in response to user feedback, Sherpath was ready to launch. But some work still needed to be done internally at Elsevier to help integrate the team and the system with the rest of the organization, according to John Kim, Senior VP and General Manager of Learning Solutions at Elsevier.
Ironically, while working in close quarters exclusively on Sherpath contributed to the development of “a product with great potential,” Kim said, “the challenge became how to bring the team back into the organization so that we could operationalize Sherpath, market it and produce more content. We had ended up with two silos — those who developed Sherpath and those who had had nothing to do with it up to that point.”
To help ensure Sherpath’s success, Kim invited “key stakeholders who had not been aware of or kept abreast of what was happening during development,” including staff from the sales and marketing teams, global operations and customer support, to a working meeting so they could get involved.
“We started having weekly meetings, involving everyone on the go-to-market and portability aspects of the program, and had weekly progress reports,” Kim said. “That enabled the others to start taking pride in the product and feeling like they were contributing to what we saw as the future of the business unit.”
The team also then initiated daily morning “standups,” a process used mainly by development organizations, during which each team member spent 10 or 15 minutes talking about what they had done the day before, what they intended to do and what the obstacles were, Kim explained.
“All in all, with this approach, we were able to get everyone’s buy in and work through any problems very quickly,” he said. “And that ultimately led to a very successful launch.”
Sherpath: assessing outcomes
Elsevier developed Sherpath to support the unique needs of students and faculty in the fields of nursing and allied health. The system provides relevant content, assessments and remediation tools in a mobile-friendly format. Module topics, which continue to evolve, include nursing fundamentals, health assessment, pharmacology, dosages and calculations, medical assisting, maternity, pediatric and medical-surgical.
Sherpath launched officially in July 2016. In the year prior to the launch, it was piloted by 1,200 students and faculty at 27 educational institutions across the United States, including nonprofit and for-profit public and private schools with both nursing and medical assisting programs.
The team tweaked Sherpath based on feedback about usability and data demonstrating that certain aspects of the program were not being used as envisioned.
“The most useful insight of the pilot period was about the importance of implementation support,” said Product Manager Chelsea Newton. “Schools used Sherpath in a variety of ways – some successful and some less so – and we were able to build models of successful curriculum implementation to prepare for commercial release.”
After the pilot, the team measured outcomes. Qualitative survey feedback showed that, overall, Sherpath contributed positively to student learning outcomes. A majority of respondents agreed or strongly agreed that Sherpath helped them better prepare for class, better understand their strong and weak areas, and feel more confident in their understanding of the course material.
Quantitatively, the team found that Sherpath improved student performance, as empirically measured by Quiz Coach scores and instructor exam scores. Students who took more questions in Quiz Coach had higher success rates than those who did not.