AI Design Assistant
  • 03 Oct 2024


Article summary

The AI Design Assistant streamlines course creation, saving you time. It lets you harness advanced AI capabilities to craft learning modules, develop rubrics, build question banks, and design assessments.

Take note

We recommend reading this entire page to understand the implications of using these features in an educational environment, including their limitations. Please also be mindful of your responsibility when using these features.

AI Design Assistant Features

  • Course structure suggestions
    Video: AI Module Generation
    Step sheet: Course structure suggestions
  • Discussion generation
    Step sheet: Discussion generation
  • Journal generation
    Step sheet: Journal generation
  • Rubric generation
    Video: Rubric generation
    Step sheet: Rubric generation
  • Assignment prompt generation
    Step sheet: Assignment prompt generation
  • Test question generation
    Video: AI Question Generation
    Step sheet: Test question generation
  • Question Bank generation from Ultra Documents
    Step sheet: Question Bank generation from Ultra Documents
  • Insert or generate images
    Step sheet: Insert or generate images

Key educational considerations when using AI features in clickUP Ultra

  1. All AI-generated output should align with and contribute towards the quality requirements for the module, as set out in the module outcomes and NQF level descriptors.

  2. You can manage the quality of the AI-generated content through the description, cognitive level, complexity slider and other available settings:

    1. The quality of the prompts you provide in the description box determines the quality of the AI output.
    2. The cognitive levels you set should align with the outcomes and assessment criteria of your module or the specific assessment task. This is particularly important when using the “Inspire Me” setting.
    3. The complexity slider setting should align with the outcomes of your module and the level of the Higher Education Qualification Framework to which your module is aligned.
    4. ALL output from AI-generated content and assessments should be reviewed for suitability, accuracy, bias and other potential issues BEFORE it is added to the module.

Anthology advises the following when writing prompts:
  • Only use prompts that are intended to solicit more relevant output from the AI Design Assistant (e.g., provide more details on the intended course structure).
  • Do not use prompts to solicit output beyond the intended functionality. For instance, you should not use the prompt to request sources or references for the output. In our testing, we determined that there are accuracy issues with such output.
  • Be mindful that prompts requesting output in the style of a specific person or requesting output that looks similar to copyrighted or trademarked items could result in output that carries the risk of intellectual property right infringement.
  • Suggested output for sensitive topics may be limited. Azure OpenAI Service has been trained and implemented in a manner to minimize illegal and harmful content. This includes a content filtering functionality. This could result in limited output or error messages when the AI Design Assistant is used for courses related to sensitive topics (e.g., self-harm, violence, hate, sex).
  • Do not use prompts that violate the terms of your institution’s agreement with Anthology or that violate Microsoft’s Code of Conduct for Azure OpenAI Service and Acceptable Use Policy in the Microsoft Online Services Terms.
  • As detailed in the Limitations section of the Azure OpenAI Service Transparency Note, there is a risk of inaccurate output (including ‘hallucinations’). While the specific nature of the AI Design Assistant and our implementation is intended to minimize inaccuracy, it is our client’s responsibility to review output for accuracy, bias and other potential issues.
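As an illustration only (not part of clickUP Ultra or the AI Design Assistant), the advice above could be approximated in a simple pre-check that flags risky prompt patterns before submission. The function name and the patterns themselves are hypothetical and deliberately minimal:

```python
import re

# Hypothetical helper reflecting Anthology's prompt advice: flag prompts that
# request sources/references (accuracy risk) or output in the style of a
# specific person or brand (intellectual property risk).
RISKY_PATTERNS = {
    "accuracy": re.compile(r"\b(sources?|references?|citations?)\b", re.IGNORECASE),
    "ip": re.compile(r"\bin the style of\b|\blooks? (like|similar to)\b", re.IGNORECASE),
}

def prompt_warnings(prompt: str) -> list[str]:
    """Return advisory warnings for a draft prompt, or an empty list."""
    warnings = []
    if RISKY_PATTERNS["accuracy"].search(prompt):
        warnings.append("Do not request sources or references; such output is unreliable.")
    if RISKY_PATTERNS["ip"].search(prompt):
        warnings.append("Style-imitation requests risk intellectual property infringement.")
    return warnings
```

For example, a prompt containing "include references for each claim" would trigger the accuracy warning, while an ordinary content prompt would return no warnings. A real implementation would need far broader coverage; this sketch only shows the idea of checking prompts against the guidance before use.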

Background about AI in clickUP Ultra

The AI Design Assistant utilizes Microsoft's Azure OpenAI Service to automatically generate outputs. It does so by providing limited course information (e.g., course title, description) and prompting the Azure OpenAI Service through its API. Instructors can enhance output generation by including additional prompt context. The output, generated based on the prompt, is displayed in the Learn user interface. For a detailed explanation of the Azure OpenAI Service and the underlying OpenAI GPT large language models, please consult the Introduction section of Microsoft's Transparency Note and the accompanying links.
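To make this flow concrete, the sketch below assembles the kind of request the paragraph describes: limited course information plus optional instructor-supplied context, formatted as chat messages for the Azure OpenAI API. This is illustrative only, not Anthology's actual implementation; every name and prompt wording here is an assumption:

```python
# Illustrative sketch of the request flow described above: limited course
# information plus optional instructor context becomes a chat prompt.
# NOT Anthology's implementation; all names and wording are hypothetical.

def build_generation_messages(course_title: str,
                              course_description: str,
                              instructor_context: str = "") -> list[dict]:
    """Assemble chat messages for a course-structure generation request."""
    user_prompt = (
        f"Course title: {course_title}\n"
        f"Course description: {course_description}\n"
    )
    if instructor_context:  # extra prompt context supplied by the instructor
        user_prompt += f"Additional context: {instructor_context}\n"
    user_prompt += "Suggest a module structure for this course."
    return [
        {"role": "system", "content": "You help instructors design course modules."},
        {"role": "user", "content": user_prompt},
    ]

# With a configured Azure OpenAI client, such a payload would be sent roughly as:
#   client.chat.completions.create(model=deployment_name,
#                                  messages=build_generation_messages(...))
```

The returned text would then be rendered in the Learn user interface, as the article notes, where the instructor can review and edit it before publishing.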

Microsoft does not use any Anthology data, or Anthology client data accessible via the Azure OpenAI Service, to improve OpenAI models, to improve Microsoft or third-party products or services, or to automatically improve the Azure OpenAI models used for Anthology (the models remain stateless). Microsoft evaluates prompts and output for content filtering to prevent abuse and the generation of harmful content. Prompts and output are retained for a maximum of 30 days.


Trustworthy AI framework

Anthology has embraced the Trustworthy AI framework, aligning with principles from the NIST AI Risk Management Framework, the EU AI Act, and the OECD AI Principles, with input from global education leaders. Key ethical AI principles include:
  • Transparency: Providing clear information on AI use, functionality, and how to interpret outputs.
  • Humans in Control: Users retain decision-making authority, with the option to enable or disable AI features.
  • Fair AI: Prioritizing accessibility, inclusivity, and minimizing bias, especially for marginalized communities.
  • Reliability: Ensuring AI output accuracy and reliability amid ongoing technological advancements.
  • Privacy, Security, and Safety: Upholding stringent standards for AI system security, safety, and privacy protection.
  • Aligned with Anthology's Values: Anthology upholds the transformative potential of education. AI systems must resonate with human values, especially those cherished by Anthology's clients and users.

Anthology’s Trustworthy AI principles in practice

Reliability and accuracy:

• Anthology emphasizes the possibility of AI-facilitated functions producing inaccurate or undesired output, urging instructors to review text output for accuracy.
• While efforts are made to minimize inaccuracy, users must take responsibility for reviewing output for accuracy, bias, and other potential issues.
• Users should avoid soliciting output beyond intended use cases to prevent inaccurate results.
• Instructors can provide additional context to the AI Design Assistant through prompts and settings, and manually edit outputs before publishing.

Fairness:

• Recognizing risks inherent in large language models, Anthology has selected AI Design Assistant functionalities to mitigate harmful bias.
• Users are encouraged to review output to reduce the impact of bias.

Privacy and Security:

• Instructors are cautioned against including personal or confidential information in prompts.
• Limited personal information is used for the AI Design Assistant and managed according to ISO certifications.
• Microsoft's data privacy and security practices are outlined in documentation for Azure OpenAI Service.

Safety:

• Anthology has chosen AI Design Assistant functionalities to minimize the risk of producing inappropriate or offensive output.
• Users are urged to review output to mitigate the risk of unsafe content.

Humans in control:

• Users retain control over AI Design Assistant functionalities, which are opt-in features.
• Instructors have control over output, with the ability to review and edit text output.
• The AI Design Assistant does not involve automated decision-making with legal implications.

Intellectual property:

• Users are reminded of the risks of potential intellectual property infringement, especially in prompts resembling copyrighted or trademarked items.
• It is the user's responsibility to review AI Design Assistant output for any potential infringement.

