We’ve done our share to criticize the use (and abuse) of AI (as well as the assumptions baked into the term “artificial intelligence”). In many cases, the best choice for incorporating chatbot technology into your work and life is to avoid it altogether. Think about all the skills you will lose from using chatbots and virtual assistants to write your emails, draft your presentations, and write your notes. The little time you’ll gain from these uses won’t be worth the independence (or humanity) you’ll lose.
All the same, it’s ridiculous to say that these tools are useless. Machine learning is a powerful tool that can transform important fields, along with our personal lives. Saying that there’s no good to it at all is false. Of course, it’s just as false to say there are no downsides.
I fear the possible consequences of upcoming developments in these fields. But beneath these big societal changes, what should we do as individuals? There are many risks to using these technologies in daily life, but where should we fit ourselves in their development? What does a good future with AI look like, and is it possible to be a virtuous part of these changes?
When we’re thinking about future development of these fields, it’s important to understand what they’re promising in the first place. We can imagine a future where AI technology goes to good use. But when we’re thinking of these benefits, we have to think about the greater end as well. What does it mean for technology to be good? How can it serve us well?
What Good Is A Tool?
At heart, these technologies are just another tool. They are complicated tools with many different uses, but they are tools nonetheless. How should we understand the positives and negatives of a tool?
There’s one thing to keep in mind: a tool is neutral. A hammer can be used for good and evil, and it’s odd to call it either good or evil on its own. Responsible use falls to the user, not the object.
All the same, we need to recognize that our tools do change us. A car, on its own, can’t tell you what to do or do any sort of evil. But a car will encourage certain kinds of behavior. When you have a car, you’ll feel less of a need to shop locally—your grocery store doesn’t have to be a short walk away when you can hop on the highway and drive to one miles away. You can visit faraway friends and family, and perhaps you’ll feel more comfortable moving far away from them. The car didn’t force you to do anything. But it encouraged certain things and discouraged others in a very real way.
It’s easy to dismiss the bad consequences of a tool by saying they’re optional. If you buy a record player, that doesn’t mean you have to quit singing and playing music with people. But it certainly makes it easier. Image-generating programs don’t mean you have to quit drawing or taking photos, but they certainly encourage you to spend your time doing something else instead.
But it’s important to take a balanced perspective here. One side may present new technology as entirely inevitable and irresistible. There’s no way to stop progress, so why fight back? On the other hand, some may say that personal responsibility is enough to make any technology good. You’re free to use these tools any way you want. Maybe there are bad ways to use hammers, cars, or chatbots, but we’re always free to choose otherwise.
Where’s the balance between these two? There’s a way to direct just about every tool towards the good. Perhaps there are some tools that can hardly ever serve the good—maybe there’s no good way to use a guillotine. Still, there’s some way to make all these tools useful. At worst, maybe a guillotine serves as a good decoration or a fun party trick.
We can identify goods in just about any tool. It’s not hard to think of ways for these technologies to be used for good: advances in things like medicine and science are obvious positives. But many of the benefits cited are more intangible. Things like “economic benefits” and “convenience” might sound good, but it’s hard to nail down just what these mean. What makes these things worthwhile in the first place?
Efficiency for Efficiency’s Sake?
When we think about efficiency, profit, and other common economic goods, we always need to ask: what greater good do these serve? What’s the point of a profit that makes everyone more miserable? What’s the point of convenience if there’s nothing to be gained from the time saved?
Some thinkers would use the term “instrumental good” for this. Metaphorically, we could call them stepping-stone goals. We don’t choose these for their own sake. We choose them because they can help lead us to something greater. Things like money, efficiency, or convenience are good because they help achieve other meaningful goods. Companies like OpenAI promise huge economic growth. But we have to ask: what are the benefits of this growth? What will the average person gain from these? (Especially when automation threatens to end many jobs.)
I’m a big fan of board games. Recently, I’ve had the strange experience of players I didn’t know well joining my friends for a game and using ChatGPT to decide on their moves. What good is being done here? What’s the point of getting a machine to play your games for you?
It’s healthy to be a little skeptical of some of the grand claims about AI curing cancer or getting us to Mars. But this is just factual skepticism. It’s important to keep a sense of value skepticism as well. Ask: how will this make lives better? What’s the point of pursuing this? It’s certainly cool if we can generate images or funny poems on demand, but it’s hard to imagine that we’ll do anything better if we quit spending time drawing and writing.
We want to think about unconditional goods: the things that these instrumental goods aim for. In the end, all of these goods need to go back to human well-being. If they aren’t making people happier and more fulfilled, there’s no real reason to go after them.
A quick look at some of the proposed goods shows that many of the benefits of machine-learning technology do meet this standard. Medicine obviously serves the human good. The pursuit of truth in the sciences is worthwhile on its own. On the other hand, it’s unclear how many of the “conveniences” afforded by AI tech do anything to help the human good. They don’t get rid of real “wasted” time—if reading a book or playing a board game is “wasted” time, then what are we doing with our lives?
All of these improvements need to be understood next to human happiness, fulfillment, and flourishing. Progress for progress’s sake is pointless. Genuine progress comes when we get everyone closer to answering, “What is my life for?”
This has been quite the week for me. On September 27th, I got married and moved in with my wife, throwing everything out of order in the process (we’re postponing our honeymoon until my wife’s semester ends)—I’d consider that quite a bit of progress on the question, “What is my life for?”
I have spent the week dealing with reorganizing, unpacking, and a landscaper breaking my internet connection with a weedwhacker, all while finalizing details on the cover and layout for our upcoming book, From Work to Vocation, and applying for a scholarship to continue The Vocation Project’s research in grad school. This post, meanwhile, has grown in scale from a quick 1,000-word article on how a good AI developer would live to the first part of a series of posts on AI, flourishing, and the workplace. In summary, this has been a disorganized time, but I hope this article offers some organized thoughts.
With that in mind, I’d also like to ask for your support in continuing this work. If you’ve liked our writing and want to see more like it, consider upgrading to a paid subscription (we’ll send you a free copy of our book for subscribing), buying our book when it comes out, or just sharing and commenting to help boost this post.