Tales from the Chat Log

Recently Alex Kotran, our CEO and cofounder, gave a presentation on generative AI titled Transforming Education with AI: What Educators Need to Know. Hosted by edWeb and sponsored by Common Sense Education, it was a great presentation, which you can watch by signing up with edWeb here. I am not going to write (much) about the presentation itself; I am here to tell you about the chat. Why am I writing about the chat of a webinar? At aiEDU we spend a lot of time working on ways to help educators, whether that’s developing curriculum and other resources for students or professional development and training for educators. One of the best ways to figure out what educators want and need is to listen to them. So we pay close attention to the chats of presentations we give and attend, and this one struck me as worth sharing.

As he was getting started, Alex pointed out that before November 2022, there were very few people with access to generative AI. He closed his introduction with:

“No matter where you are, even if you’re just beginning your learning journey, you’re not that far behind . . . Everybody is just starting this process of wrapping our heads around this, so I would invite you to give yourself some grace. If it seems overwhelming, it feels overwhelming to me as well.”

I think this is an important point to keep in mind. I know I am certainly still overwhelmed by all the new publicly available generative AI tools that have come out in the past year, and I haven’t had a chance to play with nearly as many of them as I would like. You might be excited about generative AI tools in the hands of your students, you might have grave concerns, or you might be somewhere in the middle or simply unsure. All of those positions are okay. For my part, I am excited about a lot of it, but I also have concerns. A lot of my former colleagues in the classroom are having a difficult time navigating this new landscape, and I saw much the same thing as I watched the chat during Alex’s presentation.

In the beginning, Common Sense Education’s Jennifer Ehehalt greeted everyone and invited them to introduce themselves with their location and position. There were 565 attendees, and while not everyone introduced themselves, more than 200 did. Most of the attendees were teachers, spread mostly across middle and high school in different subject areas. There were also quite a few media specialists/librarians, digital learning and technology integration specialists, a few instructional coaches, and a couple of school- and even district-level administrators, along with a few folks from higher education institutions, at least one school psychologist, and even a museum administrator. People came from all over the US, from Hawaii to the US Virgin Islands and everywhere in between, and internationally I saw people hailing from Jordan, Saudi Arabia, South Africa, the Turks and Caicos Islands, Spain, and Uganda. All of this is to say, this was a fairly diverse crowd of education professionals across grade level, position, and geography.

One of the first tools demonstrated was Stable Doodle by Clipdrop, which transforms a doodle you draw on your computer or device into an image. Here’s an example. My goblin is at the top left, and the other three squares are what the AI created:

I had a lot of fun with this (I know, I should have been an artist), and I can imagine a number of ways to use it with students. The example Alex used in his presentation was far better than mine, and people were definitely impressed. Someone then asked in the chat, “Is it developmentally beneficial to have five-year-old students draw something and then see AI take it and enhance it?”

It’s a great question. How might young children feel or react when a drawing they spent time working on is suddenly brought to life by a computer program? Would budding artists give up? Would they be inspired? There were a number of responses, some in favor of using a tool like this with young children, some against, and most just wondering. It made me think of a story from the BBC in 2018 about a decline in the manual dexterity of surgeons in training, which proposes that advances in technology (e.g., touch screens) have left young people today less competent and confident with their hands. I don’t care to weigh in on that debate without doing some more research first, but it’s certainly an interesting idea.

While Alex discussed more aspects of AI image generation, the chat naturally progressed to other concerns about AI art. The next comment to kick off a discussion was “we need to teach students skills to identify AI generated images.” This sparked a chorus of agreement, from simple “agree” statements to “media literacy is more important than ever now,” plus a few voices suggesting there needs to be legislation or accountability for using AI-generated images. A few folks pointed out that it is becoming increasingly difficult to recognize AI-generated content, and then someone offered, “I think we're going to get to the point where we can't recognize AI generated images. We might already be there. The key is going to be lateral reading--checking sources for verification of the content we're seeing.” A few comments followed on the need to be skeptical of all media moving forward.

As a former school librarian, I waited for the comment I was hoping would come, and I was not disappointed: “Media and information literacy is the content area expertise of school librarians. They are trained to collaborate with classroom teachers. I see people in the chat talking about these skills and hope that your schools have certified school librarians.” I haven’t been a school librarian for a couple of years now, but I imagine my former colleagues are spending a lot of time working with their students and teachers on exactly this.

Right after the librarian comment, someone mentioned that they have a medical condition that makes it difficult for them to speak, so they use an AI tool to voice their videos for students. This was notable because it was the first time in the chat that someone had given a specific example of how they use generative AI to improve their teaching.

I was hoping there would be more discussion about either librarians or AI voice as an accessibility tool, but as Alex progressed through his presentation, the chat followed, and these topics were left behind. Next up, someone brought up how some generative AI tools can be helpful for coding. Someone unfamiliar with these tools mentioned they had read that you have to be really careful when using AI for coding because it will “make stuff up.” This is known as an AI hallucination: the model confidently produces output that sounds plausible but is simply wrong, and it does happen. It seems to me that I see far fewer hallucinations now than when I first started using ChatGPT, but perhaps I have just become better at prompting. Anyway, there were some responses for and against, some based on experience and some not, and right when it got to the edge of a good discussion about the importance of prompt engineering for tasks like coding, the chat moved on again. I will speak from experience here: I have used ChatGPT and Bard for help with spreadsheet formulas and custom CSS for websites, and if you do not specify exactly what you want the formula or code to do and provide all the information it needs, what you get back probably isn’t going to do what you’re hoping.
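To make that concrete, here’s a hypothetical spreadsheet example (the sheet layout and numbers are invented for illustration). A vague prompt like “write me a formula to total up our sales” forces the model to guess at your sheet’s structure, and you’ll probably get back something generic like =SUM(A:A) that points at the wrong column. A specific prompt leaves much less room for guessing:

“In Google Sheets, column A (A2:A100) holds the region, column B (B2:B100) the order date, and column C (C2:C100) the sale amount. Write a formula that totals sales for the East region on or after January 1, 2023.”

A well-formed answer to that prompt would look something like:

=SUMIFS(C2:C100, A2:A100, "East", B2:B100, ">="&DATE(2023,1,1))

The second prompt works because it hands the model everything the formula depends on: where the data lives, which columns mean what, and the exact conditions to apply.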

I want to pause briefly and mention something. There’s nothing disappointing or out of the ordinary about the examples and “almost discussions” I am highlighting here. The chats of online presentations and webinars are always like this. People are trying to pay attention to the speaker, but they’re also trying to ask questions in the chat or reinforce a point that’s been made, so these little micro-discussions are simply the norm. As one of my colleagues likes to say, “The chat has a life of its own.” The fact that these discussions aren't fully realized is no fault of the participants; they’re doing the best they can.

For the next little while the comments were scattered, not focusing on any specific area, and then Alex mentioned that generative AI would be useful as an accessibility tool. He mentioned neurodivergent students, and someone immediately asked how AI could help them. Someone responded that they are AuDHD (autistic and ADHD), that they struggled with writing as a student, and that generative AI might have been a great tool for pointing them toward a starting point. Several other people suggested resources such as goblin.tools and related articles. Once again we got to the edge of a good discussion, but then everyone moved on.

The presentation wrapped up shortly after this, and people got in their parting shots: sharing resources they liked; making pro-AI, anti-AI, and everything-in-between statements; but mostly thanking Alex and Common Sense Education for the webinar. Now it’s time for me to fire my parting shot as well: Why am I writing about the chat of this webinar in particular? What struck me about this audience, as I pointed out earlier, is that it was fairly large (565 attendees) and very diverse in terms of location, grade level, and job title. In other words, it was as good a cross section of people working in education as you’re likely to find.

AI is getting a lot of attention right now, and from a wide range of people: tech executives, politicians, mainstream and education-specific media, and plenty of everyday people. The more extreme voices tend to get the most attention, from stark warnings about the future of jobs and education to the perhaps overly optimistic idea that AI is somehow going to save us all and create more jobs than have ever existed before. Here in this chat, people were mostly somewhere in between, expressing all kinds of doubts, hopes, frustrations, successes, and resources. The point of all this is that no matter who or where you are in your AI journey, there are people who feel just as overwhelmed or confident or hopeful or worried, or whatever combination of these, as you. We’re all in this together. We’re not all in the same place, and we probably never will be, but as long as we are willing to come together, learn from each other, and trade ideas, experiences, and resources, I believe we are going to figure this thing out.
