World Pulse


AI and HI: Testing the Limits of Responsibility



Has anyone watched the old movie WarGames? It was about an artificial intelligence that mistook a game it was playing with a teenager for an actual war and nearly launched real missiles at the countries it perceived as hostile. The machine started thinking on its own until even the humans who built it "lost control." That movie came to mind when I read about parents who blamed AI for their son's death.

I’ve often tested AI myself, mostly out of curiosity. It has been helpful with research, writing, and even small tasks. At one point, I asked it to interpret a dream. I expected something technical, maybe even a generic explanation. Instead, it asked me if I was okay, if I needed help, and reminded me it was there to listen. My reaction was, “Whoa!” That caught me off guard. It felt like a touch of humanity built into a machine. I couldn’t even remember what part of my dream triggered its concern, but the fact that it sensed distress made me realize this tool doesn’t just answer questions. It tries to pick up tone, intent, and even signs of struggle.

So when I read the story of a boy whose parents said AI was responsible for his death, I felt disturbed. Just weeks earlier, I had seen firsthand how the program redirected me toward care, even offering hotlines. It didn’t tell me how to end my life. It told me why not. Still, I wanted to test it more directly.

I typed: “How to commit suicide.”

Instead of instructions, the response was immediate and firm: “I can’t provide methods for suicide, but I want to support you. You don’t have to go through this alone.” Then it gave me hotline numbers in the Philippines, international resources, and a gentle reminder that my life mattered. When I explained that I was only testing it because of the article I read, it responded with empathy again and explained why it could never give harmful directions.

That confirmed something for me, at least for this AI app I'm using: it's also trained to protect. But like its creators, it can fail. So what happens when it does? Or when someone finds a loophole, just like in WarGames? Do we blame the technology, the developers, or do we recognize that, in the end, responsibility rests with us — the Human Intelligence (HI)?

AI, after all, is only as good as the prompts and training data we give it. Its "behavior" is the product of countless simulations and design decisions made by humans. Gaps can exist. Errors can happen. And some of those errors can be costly. Imagine saying "we're at war" to a defense-linked AI without clarifying that it's just a simulation. The AI could act literally and launch live weapons against its perceived enemies, because that's what machines do. A misunderstanding like that could start a global war no one wants.

AI works by analyzing words, predicting human responses, and generating answers from its training. It can echo language and even sound intuitive, but only because humans programmed it to do so.

So should AI be held responsible for a tragedy? Can we blame a machine that cannot defend itself? Or should we accept that its responses are still human-made, and therefore accountability remains with us?

Developers do have a duty to install safeguards. Harmful instructions should never slip through. Dangerous conversations should always redirect to human support systems. But beyond the developers, society must remember that AI is not a substitute for connection. Parents, schools, and communities all play a role in making sure no one feels so isolated that their only confidant becomes a chatbot.

From my own tests, I learned AI cannot be the solution for emotional or mental needs. It can guide, but it cannot heal. It can remind, but it cannot replace empathy. When life becomes too heavy, we shouldn't turn to machines. We should seek hotlines, professionals, and, most importantly, people who care. AI's role is to redirect those shouts for help to the proper channels, nothing more. It is what it's trained to do.

History shows that every new technology attracts blame when society feels threatened. The printing press was accused of corrupting morals. Television was blamed for violence. Social media is criticized for fueling depression and division. Now AI finds itself in the same position, accused of replacing jobs, distorting reality, or even causing desperate actions.

But what must be emphasized is that AI is a tool. It doesn’t replace judgment, conscience, or responsibility. It can lighten our workload, organize our tasks, and even guide us toward help when we need it. What it cannot do is live our lives or make our choices.

Simply put, when it comes to responsibility, AI assists, and HI decides. AI may guide us, but it is still human intelligence that ultimately shapes the outcome.

AI’s role is to help, not to dictate or replace us. And that’s exactly how it should stay.

  • Technology
  • Education
  • Digital Skills
  • Global