
AI at RUC: Good academic practice with AI

Guidelines for the use of AI

GUIDELINES FOR THE USE OF GENERATIVE AI

Roskilde University has formulated a set of guidelines for the use of generative artificial intelligence (GAI) in written submissions for exams. The purpose of these guidelines is to make it clear to students and instructors how students are allowed to use generative AI as an assistive tool in written submissions for exams. Read the new guidelines here.

All use of generative AI must be clearly declared - read more about the declaration here

 

PLAGIARISM, EXAM CHEATING, AND AI

The rules for plagiarism and exam cheating also apply to Generative AI tools such as ChatGPT and Copilot.

In short, you must not present someone else's work as your own – this includes content created using generative AI.

Good academic practice and generative AI

GOOD AI PRACTICE

Generative AI (GAI) is undoubtedly an effective tool, including within the academic world. However, it can come into conflict with fundamental academic principles such as transparency and reproducibility, and thereby undermine good academic practice. You should use generative AI thoughtfully, critically, and reflectively. Consider the following pitfalls:

Incorrect information: Chatbots generate likely answers to your prompts and questions, but that doesn't always mean the information is true. The underlying data the chatbot is trained on may be incomplete or outdated, and this can be reproduced in the generated text. This phenomenon is also known as hallucinations, and although the information may appear credible and convincing, it is essential that you always verify facts using other sources.

Incomplete references: If you ask a chatbot to provide references, it is always your responsibility to ensure they are accurate and of good academic quality. Chatbots may generate references to non-existent sources or present sources in an incomplete or misleading way.

Bias in data and algorithms: Generative AI can reproduce and reinforce biases present in its training data. Since it is unclear what data the model was trained on, and the algorithms themselves are often opaque "black boxes", it is even more difficult to account for such biases.

Data security: There is significant uncertainty regarding the security of the data you upload to GAI tools. Therefore, you should not share texts or data containing confidential, personal or non-anonymized information, in accordance with RUC's guidelines for the use of GAI.
RUC also has rules and guidelines regarding GDPR and the handling of personal data - read more here.

Environmental impact: Although AI offers many benefits, it is important to recognize that it comes with environmental costs. GAI requires large amounts of energy and resources, resulting in high CO2 emissions. Consider minimizing unnecessary use of these tools and explore whether alternative tools might be more suitable for the task. 

Copyright: Generative AI raises many copyright-related issues. As a user, you must be aware that you are not allowed to upload copyrighted material to a chatbot. Read more about copyright and AI here.

What is generative AI

WHAT IS GENERATIVE AI?

When it comes to AI, you have likely encountered many different terms and concepts that refer to various aspects of the technology. In this libguide, we provide an overview of key terms to help you use AI in a thoughtful and critical way.

AI (Artificial Intelligence) is a broad term for machines and programs that mimic human intelligence. AI has been developed over several decades, but only in recent years has the technology become advanced enough to impact everyday life for most people.

Machine Learning is a central method within AI, and it is primarily through this approach that machines and programs are able to mimic human intelligence. Machines "learn" from patterns in datasets and then become capable of making predictions and performing tasks they were not explicitly programmed to do. 
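
To make the idea of learning from patterns concrete, here is a minimal Python sketch, assuming the scikit-learn library is installed; the data and the task are invented purely for illustration. The model is shown a handful of examples, finds a pattern in them, and then makes a prediction for a case it was never explicitly programmed to handle.

    # Minimal sketch: a model "learns" a pattern from examples and predicts a new case.
    # Requires scikit-learn; the numbers below are invented for illustration only.
    from sklearn.linear_model import LogisticRegression

    # Toy training data: number of sources used in a report, and whether it was approved (1) or not (0)
    X = [[1], [2], [3], [9], [10], [12]]
    y = [0, 0, 0, 1, 1, 1]

    model = LogisticRegression()
    model.fit(X, y)                # the model learns the pattern in the examples

    print(model.predict([[6]]))    # prediction for a case it has never seen before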

Generative AI (GAI) is a branch of AI specifically designed to generate new data such as text, images, videos, and more. Generative AI is based on models trained on large datasets and can produce new content based on prompts provided by users.

LLM (Large Language Model) is a type of generative AI model focused on processing and generating text-based content. These models are trained on vast amounts of textual data and use pattern recognition and advanced statistical calculations to produce coherent sentences and likely responses to questions and instructions - also known as prompts.
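
As a highly simplified illustration of what these statistical calculations mean, the Python sketch below counts which word most often follows a given word in a tiny toy text and uses that count to "predict" a continuation. Real language models are trained on vastly larger datasets and use far more sophisticated methods, but the basic idea of producing a statistically likely next word is similar.

    # Toy "next word" predictor: counts which word most often follows another
    # word in a tiny text. This is not how an LLM is actually implemented, but
    # it illustrates the idea of generating the statistically most likely continuation.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    prompt_word = "the"
    prediction = following[prompt_word].most_common(1)[0][0]
    print(f"Most likely word after '{prompt_word}': {prediction}")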

Chatbots are among the most widely used tools within generative AI. They are built on LLMs and allow you to ask questions, give instructions, and in general "chat" with the language model. Examples of chatbots include ChatGPT, Copilot, Gemini, and Perplexity.
Although chatbots often provide good and plausible answers, there are still many considerations to keep in mind when using these tools. 

Hallucinations refer to instances where AI-generated content contains false or misleading information presented as credible and true. Hallucinations are a significant issue for GAI models - especially the language models that chatbots are based on. They occur for various reasons, but mainly because the models do not understand the text they produce; instead, they statistically calculate the most likely response. Even if sources are provided, these sources may not exist or may be inaccurately represented.

Prompting refers to the user's input to a GAI model - in other words, the instructions and questions you type. Well-crafted prompts can make your use of GAI tools more effective and may reduce certain errors, but they can never fully eliminate pitfalls such as hallucinations.

Copyright and AI

COPYRIGHT AND AI

You are not allowed to share copyrighted material with AI tools; uploading such material is in most cases considered illegal distribution. A typical example is the electronic books and articles you have access to through the university library.

Even if the AI tool in question claims that uploaded material will not be stored or shared with third parties, it is still generally prohibited to upload copyrighted content. As with all other aspects of AI use in your studies, you must follow RUC's guidelines, where this is also clearly stated.

The output generated by GAI tools may also be subject to copyright. This means you may not be allowed to use AI-generated images in your project report without proper permission. Always consult the AI tool's terms of use.

Additionally, all AI-generated material must be clearly declared.

Declaration of use of generative AI

If you are allowed to use generative AI in a written submission or project, you must remember to declare how and why you used GAI.

You should clearly state how you have used GAI in your work - for example, as part of the methodology section or as a brief explanation at the end of your report. Read more about the AI declaration here.

Access to Copilot for RUC students

As a student at RUC, you have access to Copilot through Microsoft using your login.

Copilot is a chatbot built on the same GPT-4 technology as ChatGPT.

Transcribing your interviews

If you wish to use a transcription tool in your project work, you can read which transcription tools are currently recommended by RUC, and access them, by following the steps below:

  • Access RUC’s serviceportal (sign in with WAYF)
  • Select ”eScience Services”
  • Click on ”Transcription”

THE RESPONSIBILITY IS YOURS. This guide is intended for inspiration and general guidance and therefore does not replace personal, legal, or academic advice. It is your responsibility to ensure that you comply with applicable regulations.