December 2023 Newsletter
Happy Holidays! Congratulations on getting through Fall term! In this newsletter, we highlight a few things to keep in mind after a well-deserved holiday break.
Microsoft Copilot, Licensed for USU
Microsoft Copilot, previously named Bing Chat, is a generative AI chat tool similar to ChatGPT and built on the same GPT-4 technology. It can generate not only text but also images using OpenAI's DALL-E model. It is now available to USU faculty and staff, who can access it with their USU login at https://copilot.microsoft.com. Although Copilot is available without a login, logging in provides Microsoft's commercial data protection, which means that Copilot adheres to the security measures established with USU's Microsoft 365 account, does not retain prompts or responses, and encrypts chat data. Copilot also incorporates Bing search into its chatbot, allowing responses to include recent information from the internet.
In February 2024, students will also be able to log in to Copilot with their USU logins. Although logging in will not unlock new chat capabilities, it will provide students with Microsoft's commercial data protection. We share this information in a teaching newsletter for two reasons:
- If you are concerned about data privacy and generative AI, Microsoft Copilot can be one of the safer destinations for students. Even so, submitting sensitive information to Copilot should still be discouraged—especially while the technology is new.
- Microsoft Copilot will display a small USU logo in the corner when students access it with their USU login. This may create a false impression that it is permissible to use in coursework. It will be all the more important for faculty to be clear in their syllabi and other communications about their class policies regarding generative AI use.
AI and Academic Integrity
During Fall semester, CIDI saw a rise in questions from faculty about what to do when they suspected a student of unauthorized AI use on an assignment. In most cases, Copyleaks' built-in AI detection had flagged the student's work as potentially AI generated. Responding to such concerns is new territory for everyone at the university, and faculty often wonder how to proceed.
The answers are not clear-cut, and the ideas in this newsletter do not represent the official views of the university or its departments and divisions. Faculty often ask how reliable AI detection is. Recent studies generally acknowledge that all existing AI detectors can produce false positive and false negative results, but research findings are inconsistent as to how likely these errors are across the existing set of detectors. At its core, AI detection is a measure of probability based upon evolving patterns; therefore, a positive AI detection report should never be accepted without skepticism and additional follow-up. Some possible follow-up practices include comparing the assignment to other written work by the student, reading the text for common idiosyncrasies, looking for document formatting that could influence an AI detector, checking for alignment or misalignment with the assignment requirements, and having a conversation with the student. In short, a determination of academic dishonesty should rest on multiple factors in addition to a positive AI detection score. It is best to avoid hasty reactions.
Importantly, before you sanction a student on the basis of academic dishonesty, Article VI of the Student Code states that you must submit an Academic Integrity Violation Form. This ensures due process for the student, as described in Article VI, Section 4. If you do not file a form, "the student may appeal the determination that an academic violation occurred." Additionally, policies regarding the authorized and unauthorized use of AI in your class should be made clear in your syllabus.
Often you may decide there is too much reasonable doubt to charge a student with academic dishonesty—especially considering how distressing such a charge could be for a student who did not intend to cheat. In these cases, you can still grade the student according to how well they met the assignment requirements without filing any forms. You can also offer the student advice on how to improve their work.
Technology Updates
The student experience for Canvas Assignments has been updated to make file annotations and rubric feedback easier for students to see and review. Students can also now see their prior submissions to the same assignment, along with the feedback associated with each submission. We expect this will provide a better experience for most users, but faculty who wish to roll back to the earlier student assignment experience can do so by going to their course settings, clicking the Feature Options tab, and disabling Assignment Enhancements - Student.
Hypothes.is users can now add social annotation to their Canvas pages. This is in addition to the ability to use social annotation with files, videos, JSTOR articles, web pages, and Google Drive documents.
CIDI Workshops
CIDI's Spring workshops cover a variety of teaching and educational technology topics. There are many to choose from! Check them out at the workshops page on teach.usu.edu.
Contact CIDI
For on-demand support with teaching technologies, contact CIDI at cidi@usu.edu, via chat, or at 435.797.9506. Schedule an appointment with an instructional designer to get help making your courses more engaging, usable, and accessible. Also see CIDI's full list of workshops.