AI @ The FAS

Artificial intelligence (AI) and machine learning (ML) are increasingly shaping multiple domains of scholarship, and with the advent of widely accessible Generative AI (GAI) tools we have exciting new opportunities for research and new pathways to knowledge. At Harvard we are exploring how GAI tools can open up new ways of teaching and learning, and how natural-language interactions with computational tools can improve access to quantitative methods for fields that have not traditionally been computationally intensive. As a rapidly moving, disruptive technology, GAI poses significant opportunities and challenges spanning the economic, regulatory, ethical, and societal domains, and the FAS is committed to advancing those conversations across the full breadth of our community.

We hope that all members of the FAS community will join in this exploration and encourage you to engage with us. Contact generativeAI@fas.harvard.edu for more information.

Chris Stubbs

A Message from Chris Stubbs

Chris Stubbs, Samuel C. Moncher Professor of Physics and of Astronomy, is Dean Hoekstra’s senior advisor on artificial intelligence. He is leading the charge to understand how we can best use Generative AI tools to enhance our ability to execute on the core research and teaching mission of the FAS. 

FEATURED NEWS

Harvard professor uses AI to replicate himself for tutor experiment

Harvard professor Gregory Kestin is using AI to replicate himself as part of a tutoring experiment. NBC News’ Gadi Schwartz visits the university to talk to Kestin about what he hopes to gain from the process. 

Professor Gregory Kestin in conversation with NBC News’ Gadi Schwartz

FEATURED EVENT SERIES

GAI Dialogues Series

The GAI Dialogues series is a multi-part discussion series examining how Generative AI affects the FAS’s educational mission. It brings together two FAS priorities: civil discourse and Generative AI. While the format will vary, the series is intended to present members of our community who hold contrasting perspectives, with questions from the audience as a key element of the conversation.

Areas of Focus

At the FAS we are in a unique position to explore the questions, and the challenges, that come with Generative Artificial Intelligence. Our community is full of brilliant thinkers, curious researchers, and knowledgeable scholars, each able to lend their expertise to tackling the big questions that AI raises, from ethics to social implications.


AI & Research

Faculty Research & AI Show-and-Tell

At the AI @ FAS Symposium, faculty from a variety of disciplines shared case studies of how they are incorporating GAI into their research.

AI-generated rat images

Want to make robots more agile?

Scientists create realistic virtual rodent with digital neural network to study how the brain controls complex, coordinated movement

The Kempner Institute for the Study of Natural and Artificial Intelligence will be headed by Bernardo Sabatini (right) and Sham Kakade.

New University-wide institute to integrate natural, artificial intelligence

Initiative made possible by gift from Priscilla Chan and Mark Zuckerberg


AI & Teaching

FAS Generative AI Faculty Show-and-Tell

Watch how faculty are using the latest GAI technology in the classroom.

Chris Stubbs and Logan McCarty (standing) co-teach, in Maxwell Dworkin, the new undergraduate GenEd class that looks at the technical, ethical, and philosophical issues facing the world as it embraces generative AI large language models like ChatGPT. On the right is Head TF Tina Wei, a GSAS graduate student in History of Science. Kris Snibbe/Harvard Staff Photographer

Rise of the machines?

Students wrestle with AI and its impacts in new undergraduate GenEd class

Harvard GenAI Library for Teaching and Learning

Explore the ways faculty use GAI tools.


AI & Society

AI @ FAS Symposium

The AI @ FAS Symposium brought the community together to learn how faculty, staff, and students are applying AI in innovative ways.

AI @ FAS Symposium shows the innovative ways faculty, staff, and students are applying artificial intelligence across the Faculty of Arts and Sciences. The event takes place in the Northwest Science Building at Harvard University. Melissa Dell (from left) and Alex Csiszar speak on Featured Panel: “Can we talk about…original scholarship in the age of AI?” with moderator Latanya Sweeney. Stephanie Mitchell/Harvard Staff Photographer

What is ‘original scholarship’ in the age of AI?

Symposium considers how technology is changing academia

‘Harvard Thinking’: Is AI friend or foe?

In podcast, a lawyer, computer scientist, and statistician debate the ethics of artificial intelligence

Upcoming Events

Check back for upcoming events.

Past Events

In case you missed it…

Additional Resources

OpenAI ChatGPT Edu for FAS

As we continue to navigate the transformative landscape of generative AI, we are excited to offer ChatGPT Edu to members of the FAS for use in your Harvard work. This is a significant step forward in the broader AI initiatives underway across the FAS and SEAS, which aim to harness the power of AI to enhance our shared teaching and research mission.

Harvard University Information Technology

Harvard University Information Technology (HUIT) compiled a webpage that contains guidelines, tools, news, and other resources about using generative AI at Harvard.

AI Sandbox (HUIT/FAS)

The AI Sandbox provides a “walled-off,” secure environment in which to experiment with Generative AI, mitigating many security and privacy risks and ensuring that the data entered will not be used to train any public AI tools. It offers a single interface that provides access to seven different Large Language Models (LLMs): Azure OpenAI GPT-3.5, GPT-3.5 16k, GPT-4, and GPT-4 32k; Anthropic Claude 2 and Claude Instant; and Google PaLM 2 Bison. The AI Sandbox is approved for use with Medium Risk Confidential data (L3) and below. Before using the AI Sandbox, please review these guidelines and instructions.

Derek C. Bok Center for Teaching and Learning

For educators looking to navigate the landscape of generative AI in the classroom, the Bok Center offers valuable guidance on adapting teaching methods to address the challenges and opportunities it presents.

Resources for FAS Staff

Ismael Carreras, Associate Dean for Strategic Analysis for FAS Administration & Finance, discusses his use of generative AI models such as ChatGPT and the FAS AI Sandbox for tasks like drafting documents, summarizing long texts, and creating visuals and code. He also outlines an approach that highlights six components of an effective prompt, which help refine outputs and guide the generative AI model toward useful, high-quality content.

Exploring Generative AI Administrative Applications  

AI can be a valuable tool for completing various writing and data analysis tasks in the office. Learn more about the ways AI can help with tasks such as drafting memos and emails, summarizing long documents, generating visuals and acronyms, and writing statistical code.

Writing Prompts for Generative AI

Writing effective prompts is important when using generative AI models to help generate and refine output. One strategy identifies six key components for crafting a good prompt: task, context, exemplars, persona, format, and tone.
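As a purely illustrative example (not an official FAS template), a single prompt that touches all six components might read: “Draft a two-paragraph email to department administrators (task) announcing next month’s budget-planning workshop (context), modeled on last year’s announcement pasted below (exemplar), written in the voice of an associate dean (persona), with a short subject line and a bulleted list of key dates (format), in a friendly but professional tone (tone).”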

AI @ Harvard

Harvard faculty, students, and scholars are doing cutting-edge research in data science, machine learning, modeling, data analysis, visualization, and ethics in fields spanning computer science, public health, medicine, law, public policy, business, the sciences, and more.

Harvard FAS Administrative Staff Policy on Generative AI (GAI) Use

Generative Artificial Intelligence (GAI) is an emerging and imperfect technology that, when used thoughtfully, can offer valuable support to our work. FAS staff are encouraged to experiment and explore the responsible use of GAI as part of our commitment to supporting Harvard’s research and educational missions. We also encourage robust discussions about domains of appropriate GAI use, and clarification of mutual expectations among colleagues.

Potential applications include, but are not limited to, leveraging GAI as a thought partner for idea development, refining and editing communications such as emails, memos, job descriptions, and formal letters, and enhancing productivity in administrative tasks.

While GAI tools can be useful and continue to improve, they aren’t perfect. Shortcomings include, for example, false or incomplete information and code with errors. Each FAS staff member bears full responsibility for the accuracy, appropriateness, and quality of their work product, including work product generated or assisted by GAI.  Where possible and appropriate, we consider it advisable to acknowledge GAI use in work product. 

Frequently Asked Questions

How do I gain access to the Generative AI (GAI) sandbox?

You can sign into the AI Sandbox here: https://www.huit.harvard.edu/ai-sandbox

How do I gain access to a Harvard enterprise OpenAI account?

You can request a ChatGPT Edu license here: REQUEST AN ACCOUNT

What types of information can and cannot be uploaded to Harvard-approved GAI tools?

Harvard-approved GAI tools, such as the AI Sandbox and ChatGPT Edu, are approved for use with up to level 3 (medium-risk) confidential data. This includes:

  • Non-public financial statements
  • University contracts
  • Research administration records not otherwise classified
  • Most Harvard source code
  • Non-security related technical specifications
  • Sensitive administrative survey data (e.g., performance reviews or course feedback)
  • Course materials not otherwise classified
  • FERPA-defined non-directory education record data not containing L4 data

Do NOT upload level 4 (high-risk) data, including:

  • Social Security Numbers, passport numbers, driver’s licenses
  • Financial account information (bank accounts, credit/debit card numbers)
  • Individually identifiable health or medical information
  • System credentials (passwords, PINs, encryption keys)
  • Security controls, system procedures and architectures

Data entered into Harvard-approved GAI tools will not be used to train external models.

Where can I turn for training on the use of these tools?

While we don’t currently have a formal GAI curriculum for staff, there are valuable resources available to get you started:

Ismael Carreras, Associate Dean for Strategic Analysis for FAS Administration & Finance, offers insights on using generative AI models for administrative tasks. His resources, listed under “Resources for FAS Staff” above, discuss approaches for drafting documents, summarizing texts, creating visuals, and writing effective prompts.

Be on the lookout for events on campus that offer training and discussions on generative AI use. These events are announced through university communications channels and can provide valuable hands-on learning opportunities.

What AI meeting assistants am I allowed to use for meeting transcripts and summaries?

Currently, Harvard has specific restrictions on AI meeting assistants:

  • Approved tool: Zoom AI Companion is the only approved AI meeting assistant, available through a limited HUIT pilot program for administrative use. Faculty and staff must be approved by their School/Unit leadership to participate.
  • Not approved: Third-party AI transcription tools like Otter.ai, Google Meet transcription, Microsoft Teams transcription, or other non-Harvard contracted services should not be used for Harvard meetings.
  • Alternative FAS-approved approach: If you don’t have access to Zoom AI Companion, you can:
    • Record your meeting in Zoom (with participants’ consent)
    • Download the standard Zoom transcript
    • Upload the transcript to the HUIT AI Sandbox or ChatGPT Edu to generate a summary

This approach is suitable for meetings with content up to Level 3 (medium-risk) data, as both the HUIT AI Sandbox and ChatGPT Edu are approved for Level 3 data.

When using any AI tools for meeting content:

  1. Obtain consent: Always inform all meeting participants that you’re recording the meeting and may use AI to help summarize it. Get their consent before beginning.
  2. Consider meeting content: Do not use AI tools for meetings discussing Level 4 (high-risk) data.
  3. Review output: Always review AI-generated summaries for accuracy before sharing them.
  4. Acknowledge AI use: Include a notice when sharing summaries (see “When and how should I acknowledge the use of GAI in my work?” below).
  5. Manage retention: Delete meeting recordings, transcripts, and summaries when no longer needed, following Harvard’s records retention guidelines.

To request access to the Zoom AI Companion pilot program, you need approval from your unit leadership and must submit a request here. Please also be sure to visit the HUIT AI Meeting Assistant Guidelines for complete information.

When and how should I acknowledge the use of GAI in my work?

Norms for acknowledging GAI use in administrative settings are still evolving, and no established standards yet exist. That said, we recommend following two guiding principles:

  1. Helpful Colleague Principle: If you would acknowledge the contributions of a colleague in producing work, consider acknowledging significant GAI contributions.
  2. Deception Principle: If someone might feel deceived upon learning that content was primarily AI-generated, acknowledgment is appropriate.

Examples:

  • Minor edits (spelling/grammar checking) from generative AI models, similar to what traditional tools like Word or Grammarly provide, typically don’t require acknowledgment
  • AI-generated images should be acknowledged, based on the helpful colleague principle (e.g., “Image generated with AI using DALL-E”)
  • Meeting transcripts or summaries generated with AI tools, like Zoom AI Companion, should be acknowledged to maintain transparency about their source and potential limitations, and to let readers know where to report inaccuracies (e.g., “Transcript automatically created by Zoom AI on [date]. Please note that AI transcription may contain errors.”; “Meeting summary created with assistance from Zoom AI and edited by [your name]. Please submit any corrections to the meeting host.”)

Remember that regardless of acknowledgment, you remain fully responsible for the accuracy, appropriateness, and quality of all work products, including those assisted by GAI.