Using AI: Think Before You Act
Oct 28, 2025
Artificial Intelligence (AI) is the latest tool for enhancing the way you do your work. As with any tool, it is important to use it wisely and to be aware of its limitations and security risks. Before using any AI platform, whether at the university or at home, be mindful of how you use it with the following types of information.
When using AI for university work, be sure to use one of the university-approved AI tools that has already been assessed for risk and compliance.
Using AI at the University
Confidential Data:
Safe to Use: The university has approved Microsoft 365 Copilot and Copilot Chat for use with Confidential and Highly Confidential data and information, but only when you log into the university instance. Copilot Chat is included in your university Microsoft 365 Office license, and Microsoft 365 Copilot can be purchased for an annual fee.
M365 Copilot and Copilot Chat have been approved for use with Confidential and Highly Confidential data because they do not retain the information you enter and do not use it to train their Large Language Model (LLM).
Hallucinations:
Hallucinations are another key limitation of AI tools. A hallucination occurs when generative AI presents a false statement as fact. This happens because the AI does not know the difference between a reputable source on the internet and fallacy or misinformation. AI tools are also known to hallucinate when they cannot find an answer and simply make one up.
Safe to Use: If you are intentionally having AI help you craft something silly or off-the-wall for humor or satire, then the risk is minimal because you didn’t intend for the result to be factual.
Be Cautious: For all other uses, it is important to fact-check the results AI provides and review the original source material to determine whether it is legitimate and accurate. Sometimes the results are close, but the AI’s interpretation is off.
For example, an AI tool might reply “yes” if you asked it “has a puppy won the Stanley Cup?” because the NHL hosts a similar competition called the “Stanley Pup” and the AI might not know the difference. You would only know the answer was wrong if you clicked on the source and read that it is a separate competition for puppies that the NHL puts on.
Using AI at Home
Confidential Data:
Be Cautious: Most AI tools store the data you enter into them and use that information to train their models. This means your data can be used by the AI to respond to someone else’s prompt, thereby sharing your confidential information with anyone on the internet. This leaves your information extremely vulnerable to hacking and identity theft. Without the proper data protection agreements in place, your data is at risk every time you use the public version of ChatGPT or any other AI platform.
Medical Advice:
Safe to Use: The best source for understanding a medical condition is your doctor or specialist.
While it is common to research symptoms or conditions on the internet to better understand what you may be feeling, double-check that the AI uses reputable sources such as medical websites, books, articles, and journals. Never jump to conclusions; always seek medical advice from your doctor or specialist.
Be Cautious: You should never use an AI tool or the internet to diagnose medical symptoms. This is highly unreliable, and these tools are not designed to replace the problem-solving and expertise of a trained and licensed professional.
AI tools have limitations that can lead to bad advice:
- Non-Determinism = Generative AI (Copilot Chat, ChatGPT, etc.) can produce different results each time, even when given the exact same prompt. This means that any symptom analysis it provides is unreliable, and you don’t know which internet source it is pulling from.
- Limited Scope = Generative AI struggles with broad challenges such as taking multiple symptoms or reactions and reaching a diagnosis based on the combination. It focuses on cause and effect and interprets symptoms one at a time rather than as a whole, which can lead to an inaccurate diagnosis.
- Hallucinations = Generative AI presents a false statement as fact, either because it found misinformation in the LLM or because it made up an answer when it could not find one.