The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to his own liking and takes ultimate responsibility for the content of this publication.

I would also like to thank the following individuals and organizations:

The OpenAI team for developing and maintaining the model and making it available to the research community.

My colleagues at the University of Rhode Island, in particular, Joan Peckham, Drew Zhang, Jerry Xia, Maling Ebrahimpour, Richard Levy, Yuwen Chen, Lauren Labrecque, Shaw Chen, Wangsuk Suh, Courtney Hixon, Priscilla Pena, and Todd Dresser for their feedback during this work and for their curiosity about responsible academic use of language models.

Tori Seites-Rundlett, for providing editorial revisions, and Chloe Atlas for fine-tuning language. Terry Atlas for calling this a guide and Barbara Atlas for being the first to sign up for the e-course based on this guide. Family and friends who provided support and encouragement throughout the writing process. 

The researchers, practitioners, and organizations who have contributed to the field of responsible AI use, ethics, and safety, whose work helped shape my own.

Members of the Facebook and LinkedIn groups interested in education and chatbot technology, who encouraged this research.

Hugging Face, whose transformers library was used to fine-tune the model for the experiments reported herein.
