Like many an old adage, “You can’t teach an old dog new tricks” isn’t gospel truth, so to speak. Proverbs can provoke questioning rather than confirm beliefs, and, as it happens, older dogs are apparently trainable. The phrase came to mind, however, when I decided to use ChatGPT to analyse the results of a survey sent to 173 primary school teachers at Ecolint. What follows are my reflections on that experience.
First, ChatGPT is a powerful tool. The survey contained seven sections with 22 questions on collaborative teaching practices, to which respondents answered on a four-point scale: 1 (Never), 2 (Sometimes), 3 (Frequently) and 4 (Always or Almost Always). There were, in addition, six opportunities for open-text reflections. It would have taken many, many hours to analyse the data from that survey with paper and pencil, or even an Excel spreadsheet, going through the responses one by one as I had to do in the past.
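For a sense of the kind of aggregation involved, here is a minimal sketch in Python. The question names and ratings are invented for illustration; they are not the actual survey export.

```python
# Sketch: summarising Likert-scale responses (1 = Never ... 4 = Always or
# Almost Always). Question names and data are illustrative only.
from statistics import mean
from collections import Counter

LABELS = {1: "Never", 2: "Sometimes", 3: "Frequently",
          4: "Always or Almost Always"}

# Toy responses from three teachers to two of the 22 questions
responses = {
    "q1_co_planning": [1, 3, 4],
    "q2_co_teaching": [2, 2, 3],
}

# Mean score per question, rounded to two decimal places
summary = {q: round(mean(scores), 2) for q, scores in responses.items()}

# Labelled response counts per question
counts = {q: Counter(LABELS[s] for s in scores)
          for q, scores in responses.items()}

print(summary)
print(dict(counts["q2_co_teaching"]))
```

Even this toy version hints at why doing the same work by hand, question by question, across 173 potential respondents would have taken many hours.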
I have learned quite a bit in recent weeks about the potential pitfalls involved in writing appropriate, precise and consistent prompts for AI tools. Each of the three or four versions of the analysis I generated reflects a new stage in my understanding of how to better exploit the potential of AI to process information. I was also reminded of the problem of human error: there was a lot of data to move back and forth, in bits and pieces, from my spreadsheet to ChatGPT. If I didn’t use the exact same prompt each time, the results differed. If I didn’t make the prompt as precise as possible, the analysis was not accurate. I also learned a lot about us, the people involved in creating and responding to the survey. Most of what I learned had to do with language.
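One simple guard against that kind of prompt drift is to keep the instructions in a fixed template and vary only the data pasted into it. The wording below is a hypothetical illustration, not the prompt I actually used:

```python
# Sketch: a fixed prompt template so each batch of survey data is analysed
# with exactly the same instructions. Wording is illustrative only.
PROMPT_TEMPLATE = (
    "You are analysing responses to a teacher survey on collaborative "
    "practice. Ratings use a 1-4 scale (1 = Never, 4 = Always or Almost "
    "Always). Summarise the following responses, reporting the mean per "
    "question and any notable patterns:\n\n{data}"
)

def build_prompt(data_chunk: str) -> str:
    """Insert one chunk of spreadsheet data into the unchanging template."""
    return PROMPT_TEMPLATE.format(data=data_chunk)

prompt = build_prompt("q1_co_planning: 1, 3, 4")
print(prompt)
```

Because only the `{data}` slot changes between batches, the instructions the model sees are identical each time, which removes one source of the variation I encountered.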
We would like to continue using this tool year after year in order to gather longitudinal data. When I first presented the content and structure of the survey to our Foundation Research Team, we collectively tried to anticipate potential questions about meaning. Even though we thought this through in advance, conducting the survey revealed a number of interesting things about:
- roles and responsibilities (not everyone self-identified in the way that the school’s organigram would suggest)
- vocabulary (even though we explained that the vocabulary used in the survey was not school-specific, the questions and terminology were highly contextualised for respondents)
- the very nature of collaborative practice.
I ended up with more questions than answers, and I suppose this is one of the fascinating things about research. A study may lead you in unanticipated directions or may require a modification of the research question, and rightly so. This is, after all, what we help our students to understand.
I will continue to hone my skills in using ChatGPT for data analysis, keeping in mind that I may wish to challenge what emerges. Our powers of discernment and critical thinking remain essential, and I feel confident that our students are learning the same thing with the support of their teachers. So, more than tricks, and not just old dogs.