A Usable Tool

In this project I took on the role of designing the tool itself, and I enjoyed the task overall. It was fun to build something I might demonstrate to my coworkers and have my MET peers test to see whether it actually worked. I also liked the process of iterating, adjusting, and seeing the tool take shape. Because we finished fairly early, we had the chance to go back, strengthen what we had, and think more critically about why we chose the format we did. This resulted in a few changes along the way and a bit of productive group tension, which is nothing new in the realm of collaborative work.

My first attempt was to create a manual of sorts. I started by searching for existing instructions on the topic, and I even panicked at one point when I came across really great instructions on Reddit, wondering if we were just reinventing the wheel. A possible connection here is Woolgar’s (1990) point about usability being tied to assumptions about who the “user” is. Our version wasn’t going to be a set of generic instructions; it was about creating something tailored to our specific user. From there, I wrote an offline, step-by-step set of directions with screenshots that guided a user through the flow of creating a custom GPT. That manual helped me test a basic layer of usability: could the instructions be followed, and where was the friction between the tool and the user?

Next, I used that file to train the custom GPT. This raised a new usability challenge I hadn’t considered before: how do I design something that adapts to the user while also making sure it stays within the boundaries of its own instructions? The inception-like task of creating a GPT that could teach someone how to create a GPT became its own lesson for me in prompting. I went through multiple iterations of my prompt design, adjusting the tone, the instructions, and the flow each time. To connect to our scholarship, each round felt like a feedback loop in Issa and Isaias’s (2015) sense: usability emerging through cycles of testing and refinement rather than from a single design choice.

We really enjoyed this version of the tool. Another positive usability feature of the custom GPT was that it adapted to the user; if the user strayed from the flow of instructions, the GPT would always bring them back. However, one of our group members could only get about three prompts deep before she hit a paywall. That experience brought us full circle: we needed to develop something more accessible. I took it upon myself to use our custom GPT to develop version 2 of our tool, a static clickable flow. It was a smoother experience, more visual, easier to follow than the manual, and open to everyone.

Looking back, I see two main takeaways. First, static tools (e.g., manuals and clickable flows) are helpful, but I think the future of technology usability in education lies in adaptive tools that can respond to the learner while still staying intuitive. Second, these kinds of tools can shift where support happens. With thoughtful, pointed integration, I believe certain adaptive tools (e.g., NotebookLM) can shoulder the burden of routine guidance. In a world of increasing class sizes, this might free teachers to spend more time supporting the students who need it most.
