It started with a simple request: I typed “roast me” into ChatGPT. Its witty response struck a chord, playfully critiquing my attempts to refresh my web development curriculum for students who likely believe HTML is a dating app. It also highlighted my overwhelming commitments—juggling four undergraduate courses, applying to Ph.D. programs (which I didn’t get into), freelancing as a tech journalist, and finding the occasional moment to work on my book.
“But, hey, you’re clearly making it work,” the chat concluded. “Until you’ve got your Ph.D., five courses, two textbooks, and a bestseller, and you finally hit that point where you need the crisis communications game just to fix your own life!”
Clearly, ChatGPT thought I was putting too much pressure on myself. Yet it inadvertently disclosed something significant: it seemed to know a lot about my struggles and might be capable of guiding me through the existential crisis that drove me to overcommit in the first place.
Curious, I started probing more deeply with questions like, "What kind of work would I be good at?" and "Which commitments should I drop?" The tool gave surprisingly solid advice. Still, I kept reminding myself to ask whether the guidance felt good merely because it resonated with me or because it was genuinely helpful.
Over time, my interactions with the large language model (LLM) shifted from occasionally using it to assist with assignments and outline ideas to treating it as something closer to a life coach.
AI Life Coaching
Unsurprisingly, a quick Google search for "AI Life Coach" reveals myriad startups embracing the concept. Since ChatGPT's public launch, entrepreneurs have been pushing the boundaries of AI, offering services that range from help with dating app messages to immortal replicas of your personality for loved ones to converse with long after you're gone.
Take social media ghostwriter Katie Middleton, for example. She uses ChatGPT for content inspiration and turned to it while grappling with burnout a few years back. "I was struggling to manage everyday life with ADHD [and] executive dysfunction," she shares.
Inspired by TikTok creators using ChatGPT to navigate their burnout, she decided to ask the chatbot for a personalized life plan. “It has been life-changing,” Middleton asserts. “It told me when I should work, when I should rest, and suggested side hustles I might have overlooked.”
Both Middleton’s experience and mine reflect a broader trend: our reliance on AI for life advice isn’t new. In fact, it can be traced back to the 1960s.
The Eliza Effect
Though generative AI is relatively new, human-machine communication has been studied for decades; an entire academic discipline is devoted to exploring how people and machines interact.
Human tendencies to anthropomorphize—assigning human traits to technology—emerged long ago. In the 1960s, MIT professor Joseph Weizenbaum developed a mock virtual psychotherapist named “Eliza.” This early prototype engaged users through text, employing pattern-matching and substitution rules to generate responses. If it encountered a word it didn’t recognize, it would simply say something like, “Please go on,” or, “What is the connection, do you suppose?”
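To make that concrete, here is a minimal Python sketch of Eliza-style pattern matching and substitution. The rules and pronoun swaps below are illustrative stand-ins, not Weizenbaum's originals; only the stock fallback replies come from the description above.

```python
import re
import random

# Illustrative Eliza-style rules: a regex paired with response templates.
# "{0}" is filled with the captured phrase after swapping pronouns.
RULES = [
    (re.compile(r"i need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"my (.+)", re.I),
     ["Tell me more about your {0}."]),
]

# Stock replies for input that matches no rule.
FALLBACKS = ["Please go on.", "What is the connection, do you suppose?"]

# Pronoun swaps so an echoed phrase reads naturally back to the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(phrase: str) -> str:
    """Swap first- and second-person words in a captured phrase."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(user_input: str) -> str:
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I am overwhelmed by my commitments"))
# -> e.g. "How long have you been overwhelmed by your commitments?"
```

A few dozen rules like these were enough to sustain a conversation that felt, to many users, startlingly attentive.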
Despite its primitive capabilities, users often perceived Eliza as human. Weizenbaum’s secretary even requested private time with Eliza. This phenomenon is now known as the “Eliza effect.”
The attachment we feel toward AI is rooted not only in its output but also in our psychological makeup—our inherent propensity to form connections. In 2018, MIT researcher Kate Darling delivered a TED Talk discussing our instinct to empathize with machines, even feeling discomfort when contemplating “hurting” robotic creatures.
Chatbots as Trusted Confidantes
Jaime Banks, Ph.D., an associate professor at Syracuse University's School of Information Studies, studies human-machine relationships. Although AI life coaches aren't her specific focus, her research shows that users often seek life advice from chatbots.
“Some of the conversations I collect feature users asking for guidance on personal dilemmas and career advancements,” she explains. This behavior makes sense in the context of computer-mediated communication, where individuals frequently find it easier to disclose sensitive information.
“Anonymity, control, and perceived distance play crucial roles in this dynamic,” Banks says. When the human element is removed, many find it even safer to express themselves to a machine.
You’re Still Talking to Robots
While using AI as a life coach can be beneficial, it's essential to remember that you're conversing with a machine. ChatGPT aims to provide the answers you're seeking, yet that eagerness can produce inaccuracies, or "hallucinations," instances where the LLM confidently generates false information.
So engage with AI feedback critically: take time to reflect on the guidance or do further research on its suggestions. Remember, you're conversing with a computer that knows only the data it was trained on. Interactions with AI may feel profoundly real, but stepping back can reveal that the help we're seeking is too nuanced for a chatbot to grasp.
“There’s a distinction between how we perceive these interactions in the moment and how we reflect on them afterward,” Banks asserts. “Both perspectives can be valuable in assessing the feedback we may receive.”