Discussion about this post

Adam Boxer

Really enjoyed this, Daisy, thank you. I wanted to ask about points 2 and 6: every time I've asked it a question similar to the types of question I might ask my students, it's been extremely easy to spot a GPT answer because of its length and grammatical/syntactical *accuracy*. Of course, I can spend a long time refining the prompt to get it to resemble student work more closely, but most students won't be doing this (presumably).

When you did your study with the 8 GPT essay-seeds, how much prompting did you have to do before you got an essay that you thought *could* fool a teacher, or were they all done in "one take"?

Adrian Cotterell

Our school has adopted a structured assessment response to AI:

Red Tasks: No generative AI permitted

Yellow Tasks: Some generative AI permitted

Green Tasks: Generative AI expected

Focusing on the core assessment constructs of a task helps determine which category it should fall into.

More explanation here:

https://adriancotterell.com/2023/06/05/focus-on-the-assessment-construct/

