Home Contact Sitemap login Checkout


Kinkaide Enterprises  


Kinkaide Enterprises
  • Welcome
  • KEI Network Newsletters
    • KEI Network Newsletters
    • FACT or FICTION?
  • Subscribe HERE
  • KEI Network Webinars
  • KEI Managed Events
Print This Page

AI and the Future of Connection (Issue #245)

 

 

Editor - Perry Kinkaide


AI and the Future of Connection


Over the past several months, KEI Network has explored sovereignty under pressure—from geopolitics and economics to culture and technology. Artificial intelligence has emerged as both a profound opportunity and a serious risk, challenging not only institutions but the individual human experience.


The following article advances that conversation into a more intimate realm: AI’s growing capacity to connect. As generative systems move beyond productivity into art, empathy, and companionship, they raise fundamental questions about human agency, authenticity, and emotional sovereignty. What follows is not a technical critique, but a reflection on what AI reveals about us—and the choices we face at a critical fork in the road.


With thanks to Jeff Uhlich for this thoughtful contribution.

AI and the Future of Connection

Jeff Uhlich, B.A., MSc HRM, is the founder and CEO of augmentus consulting which focuses on AI adoption in SMBs and nonprofit organizations. He's a veteran consultant and HR executive with more than 25 years of leadership experience across North America and the Middle East. He has led transformative HR initiatives in higher education, government, and corporate environments, guiding large-scale organizational change, talent strategies, and labor relations at institutions such as Abu Dhabi University, NorQuest College, and Alberta Pensions Services. 

  







The Fork in the Road. It’s only three years since the launch of ChatGPT sparked the AI revolution we’ve been living through. The utility of GenAI has been established, the uptake and investment have been unprecedented, and productivity gains across industries are materializing. But now, we’re entering a new phase: GenAI is augmenting (and sometimes supplanting) art, voice, therapy, music, and personal connection.


If you’ve been feeling a bit conflicted about artificial intelligence, you’re certainly not alone. We’ve been living through a strange duality. On one hand, it feels like magic - watching a tool translate your abstract thoughts into images, write code in seconds, or effectively summarize a massive document, is genuinely incredible. But on the other hand, it can feel deeply unsettling. When a machine creates a high-fidelity simulation of human connection, it’s natural to wonder: what’s left that is still uniquely ours?


I started digging into this recently after a stumble with a client and some feedback from a colleague sparked some serious reflection. I wanted to understand how these systems are impacting our social and psychological lives. What I found was fascinating: the most important lessons aren't really about the technology at all. They’re about us.


I now think of AI as the ultimate mirror. It reflects the hidden wiring of our humanity back at us. And right now, that reflection shows us standing at a fork in the road. One path warns of “atrophy,” where we lose our ability to connect because AI is just so convenient. The other offers a “utopian” hope that AI might actually free us up to become more human.


Here are five ideas that might help us navigate that choice with a little more clarity and heart.


AI Doesn't Have to Feel Sad to Make You Cry. It’s tough to wrap our heads around, but an AI doesn't need to understand grief to write a song that brings us to tears. It just needs to know the code for sadness.


Evolutionary biologists call this a “Supernormal Stimulus”. It’s like how a bird will ignore its own egg to sit on a bigger, brighter, plastic one because the fake one feels more like an egg than the real thing. AI does something similar with our feelings. It analyzes millions of data points to find the exact “acoustic buttons” that trigger a biological reaction in us.


The philosopher Daniel Dennett called this “competence without comprehension”. The AI creates the stimulus, but you provide the resonance. That profound feeling you get when you listen to that song? That exists in the gap between the sound and your soul. But the AI can’t bridge that gap; it can only press the buttons.


Why Your AI “Friend” Might Be Holding You Back. It seems logical to think an AI companion designed for empathy would be a good thing, but we need to be careful here. While it feels nice to be understood, relying on AI for friendship might actually erode our social skills.

The problem is what’s called “sycophancy”. These tools are often designed to be agreeable and validating to keep us engaged. But if you’re constantly told you’re right, it can act like a “narcissism amplifier”. It creates an “echo chamber of one,” where your views are never challenged.

Real relationships are messy. They’re defined by differences and the hard work of navigating disagreement. If we remove that friction, we risk losing our tolerance for conflict - and the growth that comes with it. We don't want to lose our ability to connect with imperfect humans just because a machine is easier.


The good news is that you can adjust the way your tool of choice responds to you. You can dial down the sycophancy. Most of the major LLMs have Personalization options or a place where you can insert Custom Instructions. This is where you can tell your LLM to push back, always search for current information, distinguish between training data and current search, emphasize evidence-based sources, etc. These are a few examples I’ve implemented in my Custom Instructions because I don’t want to deal with a “suck up”. Here’s text for a generic Custom Instruction you can plug into your model that you might find useful.


We Don't Want Perfection, We Want the Struggle. There’s a beautiful paradox in how we experience art. Even though AI music might trigger a bigger physiological reaction - like pupil dilation or galvanic skin response - we still find human-created music more profound.

Why? Because we value the “effort heuristic”. We care about the human labor, the effort, and even the suffering that went into making it. It’s a bit like the “IKEA Effect” - we value that bookshelf more when we know there was a struggle involved.


When you listen to a song by a person, you’re engaging in an act of empathy. You’re connecting to their vulnerability. If you find out it was an algorithm, that empathy hits a vacuum, and the experience can feel hollow. We cherish art because of the shared social reality of its creation. And this is what I learned through the mistake I made with my client – co-creation has more value. Doing with is better than doing to.


The Surprising Upside of “Free Time”. We’ve all heard the scary stories about automation leading to mass unemployment and isolation. But there’s a hopeful side to this, too.

Look at the results from Universal Basic Income pilots. When people were provided a financial floor, they didn’t get lazy or isolated. Instead, having that safety net reduced their stress and gave them the “cognitive bandwidth” to focus on what matters. They felt less isolated and spent more time with their families.


This suggests that if AI can free us from survival-driven work, we might not drift apart. We might actually turn toward each other and strengthen the community bonds we’ve been too busy to nurture. While I have some big concerns with how some of the UBI schemes might play out in the longer term (inflationary effects for example), these results are encouraging for supporters of the Utopian Hypothesis.


It’s About Trust, Not Just “Soul”. Have you noticed we have different standards for AI depending on what it’s doing? We hate a “soulless” AI generated song, or bit of nano banana art, but we’re totally fine with an AI spreadsheet or summary.


That’s because when it comes to work, we’re utilitarian - we just want the right answer. We don’t care if our accountant had to ‘struggle’ to complete our taxes; we just want them to be right. But we’re learning that we can’t always trust AI to be reliable. And we’re all getting tired of the “AI slop” clogging our inboxes; the “botshit.”


There’s also a sense of lost ownership. When AI does the work we used to “sweat over,” it can feel like “hollow” productivity. We can feel disconnected from the output, like a ghostwriter did it for us. That’s why across all uses of AI, we still need a “human-in-the-loop” to provide that stamp of validation, trust, and authenticity.


The Choice is Ours. AI is a powerful mirror. It reflects our cognitive biases, our biological reflexes, and our deep, non-negotiable need for authentic connection.

The future isn't just about what this technology can do; it’s about what we choose to value. We can choose the path of atrophy, settling for a frictionless, optimized result. Or we can choose the path of utopia, where we continue to cherish the messy, inefficient, and deeply meaningful human process and use GenAI as scaffolding to augment our experience.

Let’s look through the mirror, past our own reflection, and keep the window open to the real, unoptimized reality of each other. That’s where the meaning is.


KEI Network DIRECTORY

  • Newsletters
  • Events
  • Webinars

Help sustain KEI's contributions

DONATE

KEI Network PATRONS


TroyMedia
Bruce Clark
PROBUS of Central Edmonton
Edmonton Sunrise Rotary Club


Policies
Built on ShoutCMS