
What We Are Learning: Summer of AI (2024)

The P3 Collaboratory is pleased to continue “Teaching Tuesdays,” our ongoing series on pedagogy in higher ed and on the RU-N campus. This summer's entry was written by Dr. Catherine Clepper.

Decorative image of a glass head filled with geometric shapes

Q: How did you spend your summer?

A: I generated love letters from David Bowie (see postscript) and images of robots sipping daiquiris*, and thought really hard about that Olympic ad for Google Gemini.


If you’ve been reading email this summer, you may already know that it’s the Summer of AI over here in the P3 Collaboratory. This summer, our team has been doubling down both on our team and individual efforts to understand the capabilities and limits of new AI tools, and on bringing AI-related training content to the Rutgers-Newark community.


So far, we have:


We have endeavored to think about AI practically, as in how (and whether) to use AI technology in learning environments, while reckoning with the fact that there are no easy answers about AI ethics; see, for example, the ongoing coverage of AI’s ballooning energy consumption and climate impact. We have also tried to engage with emerging research on the cognitive impacts of AI use on learning, although the jury is still out on many of higher ed’s most pressing questions, such as “Will use of AI impact knowledge retention?” or “How will AI change college-level writing goals and training?” See the U.S. Department of Education’s Office of Educational Technology report, “Artificial Intelligence and the Future of Teaching and Learning.”


By far the most rewarding aspect of engaging with AI this summer has been the conversations it has opened up with colleagues at Rutgers-Newark, especially the faculty participating in the P3’s Course (re)Design Institute (CDI). AI has been a theme throughout the CDI experience, from our attempts to craft learning objectives that lean into the left half of Dee Fink’s influential “Taxonomy of Significant Learning” paradigm (see image below); through our discussion and development of AI-embedded or AI-resistant assignments; and finally, to debates around course-level engagement with AI. An approach to the latter that many CDI participants have embraced is co-designing a course AI policy with our students rather than for our students. This co-creator approach has a lot of benefits in the classroom:


  • It opens up an honest dialogue between students and instructors about how course participants are already using AI.

  • It allows for engaged discussion around what AI does well versus where it falls short.

  • It can add nuance to the resulting course policies, typically steers discussion away from an all-or-nothing approach, and creates buy-in from participants.

  • It allows students to explore their own questions, concerns, and curiosities about AI without feeling accused or judged.

  • It can offset the student performance gaps that can emerge when no AI policy is articulated.**


Now that it is August (yikes!), it may be a good time to brush up on the latest (or newly improved) AI tools, read up on the debates and best practices around AI emerging in your field or discipline, or consider how you want to talk with your students, advisees, and colleagues this fall about AI’s role in higher ed. Below are some recommendations to get you started!


Play with AI:


Plug In to Disciplinary Conversations:

  • Check your field’s professional and scholarly societies to learn more about how AI is being used in related industries and/or training programs.

  • Investigate discipline-specific ethical and/or pedagogical concerns around the use of AI.


Keep Your Eyes Open for Learning Opportunities:


*Full Disclosure: The robot was made by the lovely and talented Fatim Outtara, the P3’s administrative coordinator, using Midjourney.


**Our thanks to Pauline Carpenter and Eliza Blau from SAS’s Office of Undergraduate Education in New Brunswick for presenting this approach to developing AI policy as part of their Rutgers Active Learning Symposium presentation earlier this summer. You can see the slides from their presentation here.


Postscript:


As an experiment, earlier this summer I gave the same prompt (“Write me a love letter from David Bowie”) to four separate GenAIs: Perplexity, Gemini, Claude, and ChatGPT (shown below). I wanted to see whether detectable “house styles” emerged from the different LLMs, especially with such a vague prompt. The sample letters they produced were all incredibly saccharine.
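
If you want to repeat (or scale up) this comparison yourself, here is a minimal Python sketch. The letters above came from each tool’s web chat interface; the sketch instead assumes programmatic API access via the openai and anthropic client libraries, with placeholder model names, and the same pattern extends to Gemini and Perplexity through their own APIs.

```python
# Minimal sketch: send one prompt to two GenAI chatbots and print the replies side by side.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment and that the
# `openai` and `anthropic` Python packages are installed. Model names are placeholders.
from openai import OpenAI
import anthropic

PROMPT = "Write me a love letter from David Bowie"


def ask_chatgpt(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whichever model you want to test
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    resp = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # placeholder model name
        max_tokens=600,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text


if __name__ == "__main__":
    for name, ask in [("ChatGPT", ask_chatgpt), ("Claude", ask_claude)]:
        print(f"--- {name} ---")
        print(ask(PROMPT))
        print()
```

Running the same vague prompt through each service a few times makes any recurring stylistic tics (the “house style”) easier to spot than a single one-off comparison.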

Screenshot of generated text from ChatGPT when prompted “Write me a love letter from David Bowie.”
Fig 2. To say that ChatGPT lacks nuance may be an understatement. Have it generate love letters at your own peril.
