
On the Department of Education’s AI Guidelines: Balancing Innovation and Responsibility

Written by Nick Brown | Jul 31, 2024 12:58:00 PM

The U.S. Department of Education (DoE) recently published “Designing for Education with Artificial Intelligence: An Essential Guide for Developers,” a guide intended to help education technology teams, from designers to legal experts, create safe, secure, and trustworthy AI solutions for learners.

The DoE’s report is a useful step towards establishing best practices in AI development for education. It covers a wide range of topics, from designing AI for effective teaching and learning to avoiding bias and preventing false information from reaching students in learning contexts. As VP of Product at VitalSource and our Generative AI strategic lead, I hold these topics close to my heart; they are core to how our learning science and product teams bring our AI tools to market. While the guide provides a solid foundation, it also raises important questions about balancing rapid innovation with responsible development in the fast-moving world of AI.

1. Alignment with VitalSource’s AI Principles

One of the first things that struck me about the DoE’s guide is how well it aligns with VitalSource’s own AI Principles; you can learn more here about those principles and why we developed and published them publicly. Key themes such as designing for teaching and learning, providing evidence of impact, and promoting transparency are central to both the guide and our approach. We are committed to responsible AI development in education, and any organization that hasn’t yet articulated its own principles would do well to lean on the guidance in the DoE report.

2. The Challenge of Rapid Innovation

While the DoE’s guide is comprehensive, I worry it underplays the sheer rate of change in AI and the consequences of that pace. If a new model or application of AI has the potential to significantly benefit learners but is withheld until multiple controlled studies of its impact can be published, we risk keeping helpful tools out of students’ hands for years. Or worse: students will use those tools anyway, without proper guidance or safeguards, and teachers will be stuck picking up the pieces.

At VitalSource, we’ve tried to address this by building new capabilities, enabled by new AI technology, on top of well-established and tested research. For example, our Bookshelf CoachMe® feature is based on the long-standing “Learn By Doing” principle, but leverages AI to write formative practice questions, unlocking significant scale.
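
To make that pattern concrete, here is a minimal sketch of the general idea: an LLM drafts a formative multiple-choice question from a passage, and the output is validated before it goes anywhere near a learner. This is purely illustrative, not the actual CoachMe pipeline; `call_llm` is a hypothetical stand-in for whatever model API a team uses, and the prompt is greatly simplified.

```python
import json

# Hypothetical prompt; a production system would be far more careful
# about pedagogy, difficulty targeting, and answer-choice quality.
PROMPT_TEMPLATE = """You are writing formative practice questions.
Read the passage below and write ONE multiple-choice question that checks
understanding of a key concept. Respond as JSON with keys:
"question", "choices" (a list of 4 strings), "answer_index", "feedback".

Passage:
{passage}
"""

def draft_question(passage: str, call_llm) -> dict:
    """Draft one formative question; `call_llm` maps a prompt to model text."""
    raw = call_llm(PROMPT_TEMPLATE.format(passage=passage))
    item = json.loads(raw)
    # Basic structural checks before the item ever reaches a student.
    assert len(item["choices"]) == 4, "expected exactly 4 answer choices"
    assert 0 <= item["answer_index"] < 4, "answer index out of range"
    return item
```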

While we didn’t have impact data for CoachMe on launch day (it was brand new!), we were confident the impact would be there when we studied it, because of the track record of the Doer Effect: the research finding that students who actively practice as they read learn more than students who only read. This approach allows us to innovate quickly while still grounding our work in established educational research. Happily, as more students and instructors have engaged with CoachMe, and to date more than 15 million AI-generated CoachMe questions have been answered, we have seen the student impact that the research literature predicted. You can follow along with our own contributions back to the research community on our Research page here.

3. The Role of Humans in the Loop

The DoE guide puts considerable emphasis on keeping “humans in the loop” (HITL). This is often a helpful pattern, and for certain use cases it’s absolutely the right answer. Our recently released mission generator for course authoring in the Intrepid platform is a great example of this approach in action.

In this tool, the AI’s output may not be perfect, but it feeds an authoring workflow run by course creators who are experts. They can edit, tweak, and refine it as needed. The AI assists and accelerates their work without replacing their expertise.
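
The essence of that workflow can be sketched in a few lines. This is a hypothetical illustration of the HITL pattern, not the Intrepid platform’s actual data model: AI output enters as a draft and can only be published after a human reviewer signs off, editing first if needed.

```python
from dataclasses import dataclass

@dataclass
class Mission:
    ai_draft: str                  # text produced by the AI assistant
    edited: str | None = None      # the expert's revised version, if any
    approved: bool = False         # set only by a human reviewer

    def review(self, edited_text: str | None = None) -> None:
        """An expert accepts the draft as-is or substitutes their own edit."""
        if edited_text is not None:
            self.edited = edited_text
        self.approved = True

    def publish(self) -> str:
        """Refuse to publish anything a human has not signed off on."""
        if not self.approved:
            raise PermissionError("AI draft requires human review first")
        return self.edited or self.ai_draft
```

The key design choice in a workflow like this is that publishing is gated on review, so the human step can’t be skipped. That gate is cheap when reviewers are a small pool of expert authors, which is exactly why HITL fits authoring so well.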

However, we can’t always expect to have a HITL. If a human could review every tutoring interaction at the point of use, students around the world would already have personal tutors on call. We should be careful not to over-index on HITL, as doing so shrinks the scope of problems we can solve with modern AI models.

4. Moving Forward Responsibly

Balancing these concerns with the need for responsible AI development is a key challenge for our industry. At VitalSource, we’re taking several steps to navigate this landscape: 

  • We actively participate in the 1EdTech committee on governing AI, helping shape industry standards. 
  • We’ve published our AI principles publicly and encourage other companies to do the same. Transparency is key to building trust in AI-powered educational tools. 
  • We maintain strong connections with both the learning research community and those exploring frontier AI capabilities. This dual engagement helps us stay grounded in proven educational practices while also pushing the boundaries of what’s possible. 
  • We’re committed to ongoing research and evaluation of our AI tools, ensuring that we can provide evidence of their effectiveness over time. 

An excerpt from our AI principles captures our approach well and aligns with the DoE report:

“VitalSource is accountable for its use of AI, from decisions on how to apply AI to ensuring quality, validity, and reliability of the output. VitalSource maintains oversight of the output through human review, automated monitoring systems, and analyzing the performance of the AI tools in peer-reviewed publication.” 

The DoE’s guide is an important step in shaping the future of AI in education. While it raises real challenges, particularly around the pace of innovation and evidence requirements, it also provides a valuable framework for responsible development. At VitalSource, we’re excited about the potential for AI to enhance learning experiences, and we’re committed to navigating this landscape responsibly.

As we move forward, it’s crucial that we as an industry continue to have open discussions about these challenges. How can we balance the need for rigorous evidence with the rapid pace of AI development? How do we determine when HITL is necessary and when AI can operate more autonomously? These are complex questions, but by addressing them head-on, we can ensure that AI truly serves the needs of learners and educators. 

I’d love to hear your thoughts on these issues. How do you think we can best balance innovation and responsibility in AI for education? Reach out and let’s talk!