Family Handbook
Policies

Generative Artificial Intelligence (AI)

One goal of Crescent School (the “School”) is to develop a love of lifelong learning while guiding and challenging every student at all stages of learning. The School is dedicated to the education of students in a safe and caring community that fosters the development of character, courage, creativity, and a passion for learning.

The emergence of generative artificial intelligence (“Generative AI”) tools has brought new opportunities for meeting those goals, as well as new challenges. As AI tools become incorporated into commonly used systems (e.g. Google), it becomes increasingly important for the School to be clear about what functionalities are allowed or disallowed.

Given the far-reaching implications of this novel technology, a policy on Generative AI is necessary to ensure the School can continue to promote responsibility, respect, academic integrity, civility, and academic excellence in a safe learning and teaching environment.

Generative AI systems are not a substitute for academic rigour or professional ethics and judgment, and using AI-generated information requires evaluating its accuracy and identifying any bias it contains. As with any tool, Generative AI can support or harm the learning process depending on how students use it. Students must be able to demonstrate their knowledge and comprehension in situations where Generative AI is not available to them.

Nonetheless, it is unrealistic to expect students, educators, and staff to isolate themselves completely from a popular, transformative, and potentially useful new technology. This policy outlines how, in certain circumstances, Generative AI can be used responsibly and practically in the School setting.


  • Definitions

    “Generative AI” means any foundational computer model or algorithmic tool that can create writing, computer code, and/or images with minimal human prompting. Examples include writing assistants such as ChatGPT, GPT-4, Microsoft Bing, Google Bard, Jasper, Notion AI, and Caktus AI; image creation programs such as DALL-E 2, Midjourney, and Stable Diffusion; and coding assistants such as GitHub Copilot.

    “School Community” includes School students, teachers, parents and guardians, administration, staff, and volunteers.
  • Potential Uses of Generative AI

    Potential uses of Generative AI include, but are not limited to:
    • Brainstorming ideas and topics;
    • Preparing an outline, skeleton, agenda, or list for an essay, paper, or other written task;
    • Jogging or refreshing one’s memory;
    • Creating bullet points and graphics for slides;
    • Performing rudimentary translation;
    • Summarizing longer articles or texts;
    • Explaining concepts or summarizing basic background information on a topic, so long as the information is verified to be accurate;
    • Receiving feedback on grammar, readability, or strength of a thesis and arguments;
    • Debugging and otherwise assisting with writing code; and
    • Assisting users with formulas inside applications such as Microsoft Excel. 
    Whether or not Generative AI is used, all members of the School Community must take full responsibility for their work, including both the process used and the final product.
  • Prohibited Uses of Generative AI

    Prohibited uses of Generative AI include, but are not limited to:
    • Using Generative AI to take a test, write an essay or research paper, wholly or substantially complete any course assignment, cheat, plagiarize, or otherwise commit an academic offence (see Academic Misconduct Involving Generative AI). For example, it is prohibited to use Generative AI to create computer code for a computer programming class, or to create visual art for an art class, where AI-generated art is not the focus of the assignment.
    • Logging in to a Generative AI system using School credentials, unless School credentials are required to access the system (e.g. Generative AI software under license to the School). Unauthorized use of School credentials creates data security and reputational risks.
    • Inputting any data, personal information, or images of a student, staff member, other member of the School Community, the School itself, or any entity related to or doing business with the School, into a Generative AI system. Such conduct amounts to a breach of employee confidentiality obligations, and it may constitute a breach of the School Code of Conduct or other relevant laws and policies.
    • Using output from Generative AI without first disclosing, citing, and meaningfully explaining its use (see Disclosing, Citing, and Explaining Content Produced by Generative AI). 
    • Using output from Generative AI without first verifying the accuracy of the information and performing due diligence to ensure there is no copyright infringement (see Copyright and Plagiarism Considerations).
    • Using Generative AI as a vehicle for bullying, harassment, or other behaviour contrary to the School Code of Conduct, the Employee Code of Conduct, the Employee Handbook, the Human Rights Code, or other relevant laws and policies.
  • Disclosing, Citing, and Explaining Content Produced by Generative AI

    All members of the School Community must transparently and responsibly disclose, appropriately cite, and meaningfully explain their use of Generative AI.
    For assignments where the use of Generative AI is permitted, students can fulfill this requirement by submitting an appendix with their assignment that states:
    • which Generative AI tool(s) were used;
    • how they were used, including the prompts used to generate the content; and
    • how the results were incorporated into the submitted work, including a full reproduction and citation of any Generative AI content which was directly incorporated.
    Any content produced by Generative AI must be cited appropriately. Many organizations that publish standard citation formats now provide guidance on citing Generative AI.
  • Academic Misconduct Involving Generative AI

    The following section shall be read in conjunction with the School’s academic integrity policy.
    It is an academic offence to:
    • represent AI-generated ideas or content as one’s own ideas or content;
    • use Generative AI to wholly or substantially complete an assignment; or
    • use Generative AI to partially complete or to help complete an assignment, unless:
      • the teacher has given explicit permission, whether written, oral, or otherwise, to use Generative AI to help complete the assignment; and
      • the student’s use of Generative AI falls within the range of acceptable uses the teacher permitted for the assignment; and
      • the use of Generative AI is transparently and responsibly disclosed by the student, using appropriate citations, and the student provides a meaningful explanation of why and how the Generative AI was used.
    Teachers shall continue to use traditional methods for detecting potential academic misconduct, including meeting with a student to discuss their assignment in person. In addition, teachers may use Generative AI to generate potential responses to an assignment prompt and compare those responses with the work submitted by students to check for excessive similarity.

    The School cautions teachers about using AI-detection software programs on student work. Such programs have sometimes incorrectly flagged human-written content as AI-generated, and relying on them could harm students who are improperly accused of using Generative AI. Sharing students’ work with these programs without their permission also raises privacy and ethical concerns.
  • Copyright and Plagiarism Considerations

    As of February 2024, there is significant legal uncertainty regarding copyright and the use of Generative AI. In some cases, the legality of the content used to train AI models is unknown. Authorship and ownership of works created by AI are also unclear, as there are varying degrees of human input in AI-generated content.

    As an interim measure, all members of the School Community must:
    • ensure all AI-generated content is disclosed, cited, and explained, as outlined above;
    • perform due diligence to find the original sources that an AI parsed for information or content, such as by performing internet searches, and to cite or attribute those sources, wherever possible; and
    • use typical plagiarism detection software, tools, and techniques, such as Turnitin, to uncover uncited AI-generated content.