Boston University Creates Guidelines For Using Chatbots In Classroom

Photo: Madison Rogers (WBZ)

BOSTON (WBZ NewsRadio) — Data science students and professors at Boston University are taking charge when it comes to the ethical use of generative artificial intelligence.

The school's Faculty of Computing and Data Sciences has created a set of guidelines for students and teachers on using programs such as ChatGPT in the classroom. ChatGPT and similar AI chatbots respond to prompts to mimic human speech, engage in conversation, answer questions, write essays, and perform other functions.

According to BU data sciences professor Wesley Wildman, who helped craft the policy, the ethical guidelines are a way to embrace AI as opposed to shutting it out.

"The students don’t want us to ignore generative AI," Wildman told WBZ NewsRadio Thursday. "They can’t afford to because it’s gonna be part of their lives forever."

Known as the Generative AI Assistance (GAIA) Policy, the guidelines state that students shall:

  • Give credit to AI tools whenever used, even if only to generate ideas rather than usable text or illustrations.
  • When using AI tools on assignments, add an appendix showing (a) the entire exchange, highlighting the most relevant sections; (b) a description of precisely which AI tools were used (e.g., ChatGPT private subscription version or DALL-E free version); (c) an explanation of how the AI tools were used (e.g., to generate ideas, turns of phrase, elements of text, long stretches of text, lines of argument, pieces of evidence, maps of conceptual territory, illustrations of key concepts, etc.); and (d) an account of why AI tools were used (e.g., to save time, to surmount writer's block, to stimulate thinking, to handle mounting stress, to clarify prose, to translate text, to experiment for fun, etc.).
  • Not use AI tools during in-class examinations or on assignments unless explicitly permitted and instructed.
  • Employ AI detection tools and originality checks prior to submission, ensuring that their submitted work is not mistakenly flagged.
  • Use AI tools wisely and intelligently, aiming to deepen understanding of subject matter and to support learning.

The guidelines also state that instructors shall:

  • Seek to understand how AI tools work, including their strengths and weaknesses, to optimize their value for student learning.
  • Treat work by students who declare no use of AI tools as the baseline for grading.
  • Use a lower baseline for students who declare use of AI tools, depending on how extensive the usage, while rewarding creativity, critical nuance, and the correction of inaccuracies or superficial interpretations in response to suggestions made by AI tools.
  • Employ AI detection tools to evaluate the degree to which AI tools have likely been employed.
  • Impose a significant penalty for low-energy or unreflective reuse of material generated by AI tools, and assign zero points for merely reproducing the output from AI tools.

The policy also acknowledges that some instructors may prefer stronger restrictions on AI tools and are free to impose them, as long as they do so with transparency and fairness in grading. It further anticipates that the guidelines may need to be revised as AI advances and as the differences between subscription and free programs become clearer.

Wildman stressed that since AI development is not going to stop, there needs to be an open discussion on its future, from elementary schools to Capitol Hill.

"Generative AI is going to have this massive impact on our economy," Wildman said. "It’s going to change the lives of millions and millions of people. People aren’t ready for it, and the ethical side effects of it are really disturbing and quite profound."

WBZ's Madison Rogers (@MadisonWBZ) reports.

Follow WBZ NewsRadio: Facebook | Twitter | Instagram | iHeartMedia App
