Colleen Carmean

When Everything Changes: Hello OpenAI



Here at Strategic Initiatives, we’ve spent considerable time in the last few months considering OpenAI’s November release of ChatGPT. Don released his reflection a few days ago; Tim and Linda are busily writing, and here’s my own awed reaction.


Often in the last 25 years, technology has changed our lives. Many of us needed to be convinced, and to figure out for ourselves what would change with adoption. Laptops, smartphones, wearables, 3D printing, WiFi, Alexa, Siri, blockchain, ebooks, and streaming music services come to mind. As Amber Case told us so long ago, “We are all cyborgs now,” and we have already merged with our machines.


When OpenAI released ChatGPT to all, even in beta form, my world changed. I gasped and instantly recognized that we weren’t going to be able to put this innovation back in the box or ignore it. If you haven’t experienced the "chat generative pre-trained transformer," visit the site (https://chat.openai.com/chat). In moments, it will answer your queries with an essay, poem, song, or term paper. It can write in the voice of Virginia Woolf, Bob Dylan, or a third grader. It remembers your prompts and revises its responses. With citations. In well-crafted sentences and paragraphs. It solves complicated math problems and generates computer code.


The implications hurt my head to imagine, and most of us gasped at first sight. For many, it was a gasp of awe and admiration; for others, fear and loathing. No one knows how our lives will change, but the reactions, in all their diversity, came fast and furious.


For the education sector, The Atlantic quickly declared the college essay dead (Marche, S., Dec. 6, 2022). Inside Higher Ed took a bit more time to collect a few reflections on how to move ahead without panicking. Colleagues on my campus spread the URL during winter break and collectively declared that the barbarians were at the gate. A few researchers have already begun using the tool to draft articles, proposals, and projects. Scholarly publications are gasping in response.


Once in a while, in a moment like this, we realize that our world, our assumptions, jobs, and identities have changed. Usually, we are not happy when that change happens. Change is hard; transitions can be brutal. And a 22-year-old who spends a weekend becoming famous for creating an anti-AI algorithm that looks for similar sentence lengths won’t stop students from using ChatGPT. My students already know how to pad a few essay sentences to meet sentence-variation requirements and do so as easily as they change margin size and font.
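To see why that kind of detector is so easy to evade, here is a minimal sketch in Python (my own illustration, not the actual detection algorithm) of a sentence-length "burstiness" check: pad or split a handful of sentences and the score moves.

    import re
    import statistics

    def sentence_length_variance(text: str) -> float:
        """Crude 'burstiness' score: variance of sentence lengths, in words."""
        # Split on sentence-ending punctuation; rough, but enough for a sketch.
        sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

    flat = "The cat sat on the mat. The dog ran in the park. The bird flew away fast."
    padded = "The cat sat. The dog, soaked and joyful, ran laps around the rain-slick park all afternoon. Birds flew."
    print(sentence_length_variance(flat), sentence_length_variance(padded))

Uniformly sized sentences score low and read as "machine-like"; the padded version scores high, which is exactly the trick my students already know.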


Much as calculators changed math classes in the 1970s and ’80s (after a few years of resistance and school policies banning their use), generative AI will change education, knowledge generation, and how we evidence learning outcomes. We should gasp, adapt, and then prepare our students for what’s ahead.


AI can now do work previously thought to require higher-level reasoning (crafting, coding, composing, writing, translating). OpenAI itself says, “However, it's important to note that AI is not expected to replace human workers entirely, but rather augment and assist them in their tasks.” Is this AI magical thinking, or are tech bros and billionaires luring us into false complacency, just the way they did with social media and crypto? Time will tell, but, regardless of intention, the nature of work will change, jobs will be lost, and new jobs will emerge.


Of great concern should be how this will happen. OpenAI is being funded with billions of dollars from founders and backers who should give us pause, including Elon Musk, Peter Thiel, Amazon, and Microsoft. They want their money back, with a big ROI, and it’s doubtful that the current test phase of ChatGPT and DALL-E (its image generator) will remain cost-free. "I'm sorry, Dave. I'm afraid I can't do that."


OpenAI is not being developed for social good, and its funders have a history of disregard for the well-being of our diverse populace. OpenAI’s work is brilliant, and it is built with the intent of making money for its investors. Based on history (including our government’s inaction in all things tech), we have reason for concern. Collectively, we should put our human heads together, ask more of our morally suspect tech bros, leverage knowledge-gathering AI, and embrace this transformational change.


Shape the change, create use cases, and prepare for the disruption. What other choice do we have?


