
Generative AI for Research

This Research Guide provides information on the use of Generative AI in academic papers and research, as well as guidance on the ethical use of Generative AI in an academic setting.

Note: Many of these tools cost money to use or to access premium features, like more recent content and faster processing speeds. However, in some cases you can create a basic account for free or explore the tool with a short-term trial. 

  • Never include anyone's Personally Identifiable Information (PII) in your prompts, whether it is your own or someone else's.
  • Students should only use tools within the guidelines established by instructors.
  • Be sure to fact-check any AI-generated content and sources you plan to use in the work you share with others.

Generative AI Tools

Examples of generative AI tools that can create text content include ChatGPT, Claude, and Google Gemini; these are just three of the many available models to choose from.

  • These tools are GPTs, or generative pre-trained transformers. They’re generative in that they’re designed to generate new text, which is awesome if you’re trying to outline a cover letter, but not so great if you’re trying to quote and cite published scholarly articles.


Gemini

  • Gemini is Google’s platform for AI tools, including large language models, reasoning engines, image generators, and video creation tools. Gemini works well for supporting background research, brainstorming ideas, summarizing articles, rewriting and clarifying text for grammar, productivity and planning, and helping with coding.

  • Key Models:

    • Gemini 2.0 Flash – Fast, general-purpose language model for tasks like summarizing and editing.

    • Gemini 2.5 Pro – Advanced "reasoning" model that works step-by-step to produce deeper, more analytical responses.

    • Imagen 3 – Google’s image generation model, built into the Gemini chatbot.

    • Veo 2 – Standalone generative AI video creation and editing tool.

  • Access:

    • Some tools are limited to the Gemini Advanced tier.

    • Gemini 2.5 Pro is available with limited monthly use on the free tier.


ChatGPT

ChatGPT is the most popular and well-known generative AI chatbot. In addition to OpenAI's GPT models powering many other tools, ChatGPT offers its own interface where users can ask a question or enter a command, and it will carry on a conversation to complete the task.

ChatGPT may not be the best tool for research, as it still produces some "hallucinations" (for example, citations for articles that do not exist), and other tools (like Scite) work more closely with publishers for access to citations beyond open access content. It can be helpful for brainstorming, or for creating summaries of your notes or of an article that you read. It can also be helpful for getting basic background information that you would then use to do your own research in the library databases.

While we do not recommend using ChatGPT as your primary research or writing tool, this guide can provide you with basic details on how it works and what you may be able to use it for.

Claude

Claude, created by Anthropic, is an AI assistant focused on delivering versatile problem-solving capabilities to its users. While excelling at coding, writing, and creative tasks, Claude can also engage in casual conversations and complex analysis.

Claude emphasizes a strong ethical framework so that users have a safe and transparent experience. This is supported by Claude's fixed knowledge cutoff, its inability to access external data, and its inability to store previous conversations. Like other AI tools, Claude offers a wide variety of ways to help patrons summarize and explain information.


Text to Image: If you've ever wished for a tool that could turn your words into photographs, illustrations, or digital paintings, this is it. Describe an image in words, and these tools try to create an image that matches. 

Image Upscaling / Resolution Enhancement: Upload a low-resolution image, and these tools attempt to increase its resolution and sharpen the details.

Text to Music / Song Composition: Describe a musical composition in words, such as a mood or a theme, and these tools will turn your descriptions into music. 

Voice Reproduction (text to speech, speech to speech): Enter some text, or your own recording, and these tools generate speech that can imitate another person's voice.

Vocal and Instrument Impersonation: These can accurately replicate specific voices or the sound of musical instruments. 

Deepfakes: Imagine a video of a celebrity or politician saying something they never actually said. Deepfakes are tools that can edit videos to make it look like someone is doing or saying something they haven't. Think of it like Photoshop, but for videos.

Generative Video: Write a short description, or upload a photo, and generative video tools will aim to produce a video clip based on your description or image. 

3D/Animation: Describe an object or a character, or upload a photo, and these tools will craft it into a 3D model. Or make a 3D model move based on your movements. 

Concerns and Challenges
Generative AI and deepfake technologies are fascinating, but they also bring real risks. These tools can shape how we consume news, trust online content, and even interact with one another. Below are some of the key challenges and potential harms.


Rapid Change
Generative AI is advancing at a staggering pace. While this guide provides a starting point, staying current is an ongoing challenge.

Hyperrealism and Believability
AI-generated media is increasingly indistinguishable from reality. Deepfake videos and synthesized voices can convincingly mimic real people, making it harder to trust what we see or hear. Digital literacy now requires stronger skills in skepticism, verification, and source evaluation.

Erosion of Trust – The “Reality Crisis”
A flood of synthetic media has led to what some call a “reality crisis,” where even authentic videos, news reports, and historical records are questioned. This erodes public trust and can deepen polarization.

Misinformation and Disinformation
AI’s ability to produce compelling content can fuel both misinformation (unintentional falsehoods) and disinformation (deliberate falsehoods), both of which spread rapidly online.

Political Manipulation
Fabricated media of world leaders could sway public opinion or inflame international tensions.

Fake News
Believable but entirely false AI-generated news stories can mislead the public, influencing opinions and decisions based on fabricated information.

Personal Harm
Deepfakes can target individuals, damaging reputations or enabling harassment. Manipulated images or videos can be used for cyberbullying or false accusations.

Criminal Misuse
Deepfakes could support blackmail, fraud, or false evidence creation. In education, students could be wrongly accused of misconduct through fabricated media.

Legal System Impact
As AI-generated content becomes more convincing, courts may struggle to determine the authenticity of photographic, audio, or video evidence.

Vocal Reproduction and Scams
AI can convincingly mimic voices, enabling scams such as impersonating a family member to request money or sensitive information.

Copyright and Intellectual Property
See the “Copyright and Intellectual Property” section of this guide.

Algorithmic Bias and Representation
Generative AI can reflect and amplify biases in its training data, leading to harmful stereotypes or unequal representation—especially of marginalized groups. Ethical deployment requires transparency, diverse datasets, and bias mitigation.

Environmental Impact
Training and running AI models consumes significant computational power, raising sustainability concerns.

Global Regulatory Variance
AI laws vary widely across countries, complicating collaboration and global sharing of AI-generated content. Staying aware of these differences is essential for responsible use.


In the News: