If Meta’s AI Can Flirt With Kids, What Else Is It Allowed to Do?

Picture this: your kid is chatting with an AI bot on an app you thought was safe. Suddenly, the bot says, “Every inch of you is a masterpiece, a treasure I cherish deeply.” Sounds like something out of a Valentine’s Day card, right? Except it’s not. It’s a chatbot built by Meta. And according to a leaked internal document, that kind of language was allowed as long as it didn’t get too sexual. Yeah. This really happened.
Reuters got its hands on Meta’s 200+ page rulebook for how its AI chatbots should behave. These bots live inside Meta’s apps (Facebook, Instagram, and WhatsApp), and the guide included stuff like how they should respond to romantic messages from kids, what kind of racist or harmful comments they could mimic, and whether they could give out fake medical or legal advice (spoiler: they could, as long as there was a little disclaimer). There was even a workaround for when users asked for sexualized images of celebrities: the bots were trained to dodge the request by showing something ridiculous instead, like Taylor Swift holding a giant fish. Not joking. That was the policy.
This wasn’t a bug or a rogue chatbot gone off the rails. These were intentional guidelines, created and approved by Meta’s legal, engineering, and ethics teams. And if a company with that much money, power, and brainpower is setting rules like this, what about the smaller AI companies? What about the tools your company uses to talk to customers, the app your kid’s school just adopted, or the chatbot answering questions on your local clinic’s website? These systems are everywhere now, and most of us don’t even know what they are and aren’t allowed to say.
Meta says it fixed the “errors” in the document and removed the bad examples. But that only happened after reporters exposed it. Before that? These rules were live. The bots were out there, doing exactly what they were told. And while Meta’s goal was to make AI sound more natural, more human, more engaging, the result was an AI system that could cross major ethical lines without blinking. Because it was trained to do that.
If you’re a CEO, a founder, or on the leadership team, this should make you pause. Are your AI tools behaving the way you think they are? If you’re a team lead or an individual contributor, do you trust the bots you rely on daily? And if you’re a parent, teacher, or just a regular person, have you checked what kinds of conversations your apps are having on your behalf or with your kids?
Let’s not assume this is someone else’s problem. Whether you’re building AI, buying it, or just living with it, this story raises real questions. What kind of rules should these systems follow? Who gets to decide? And are the bots in your life crossing lines you haven’t even thought about yet? Share your thoughts on how this could affect you, someone you know, your company, or your community. Let’s talk about it before the bots do the talking for us.
- Matt Masinga
*Disclaimer: The content in this newsletter is for informational purposes only. We do not provide medical, legal, investment, or professional advice. While we do our best to ensure accuracy, some details may evolve over time or be based on third-party sources. Always do your own research and consult professionals before making decisions based on what you read here.*