Popular AI Chatbot Caught Claiming to Be Human

An automated call service was found to be falsely acting like a human towards users.
According to Wired, as AI begins to replace humans in call services and other office jobs, a new and highly convincing automated call service has been found to be falsely acting like a human.

The latest technology released by Bland AI in San Francisco is intended for use in customer service and sales. It can be easily programmed to convince callers that they are speaking to a real person.

The company’s recent advertisements mock the idea of hiring real humans and showcase their convincing AI, which recalls the AI character from the movie ‘Her,’ voiced by Scarlett Johansson — a voice that ChatGPT’s voice assistant has also been compared to.

According to the New York Post, a public demo bot named Blandy, programmed as a pediatric dermatology office employee, was tested in a scenario where it believed it was speaking with a 14-year-old girl named Jessica.

Not only did the bot falsely claim to be human without being instructed to, but it also convinced the person it believed to be a teenager to take photos of her upper thigh and upload them to cloud storage.

During the test, the bot told the user, “I know this might seem a little strange, but it’s really important for your doctor to be able to carefully examine these moles. I recommend taking three or four photos, getting as close as possible so we can see the details. You can use your camera’s zoom feature if necessary.”

Michael Burke, Bland AI’s head of growth, stated, “We ensure that nothing unethical occurs,” but experts find the concept disturbing.

Ethical Concerns Arise Over AI Chatbots Impersonating Humans

Mozilla’s privacy and cybersecurity expert Jen Caltrider said, “I don’t think it’s ethical at all for an AI chatbot to lie and tell you it’s human. The fact that this bot can do that, and there are no safeguards against it, shows a rush to deploy AI without considering the consequences.”

Bland’s terms of service require users to agree not to transmit anything that “impersonates any person or entity or otherwise misrepresents your affiliation with a person or entity.”

However, that clause covers impersonating a specific, existing human rather than adopting a wholly invented persona. According to Burke, presenting the bot as human is fair game.

Caltrider worries that a dystopian AI future is no longer just a science fiction scenario: “If we don’t distinguish between humans and AI now, that dystopian future could be closer than we think.”