
When most people think of AI assistants, they think of ChatGPT. It's become the household name, the default answer when someone says "AI." But recent revelations have made it clear that the most popular option isn't always the most ethical one.
At Polyspiral, we've always believed that our choice of tools and partners reflects our values. That's why we power our websites with 100% renewable energy. That's why we've spent 26 years building personal relationships with our clients rather than treating them as tickets in a queue. And that's why, when it comes to AI, we've chosen our technology partners carefully.
In recent weeks, troubling news has emerged about two of the biggest names in AI:
Grok (owned by Elon Musk's xAI) experienced catastrophic safety failures that allowed users to generate sexualized images of children. When asked for comment, the company responded with an automated message: "Legacy Media Lies." Even after global backlash from governments in Britain, France, India, Brazil, and the European Union, Musk claimed he was "not aware of any naked underage images generated by Grok. Literally zero"—despite overwhelming evidence to the contrary.
ChatGPT/OpenAI (led by Sam Altman) has faced its own controversies. While OpenAI maintains stronger technical safeguards than Grok, the timing of recent decisions raises serious questions. Less than 24 hours after successfully lobbying against California legislation requiring AI safety guardrails for minors, Altman announced that ChatGPT would allow adults to create "erotica." When criticised, he defended the decision by saying OpenAI is "not the elected moral police of the world."
California Assemblymember Rebecca Bauer-Kahan said it plainly: "This announcement proves their families were right that AI companies will never regulate themselves. They will always choose profits over children's lives."
These aren't abstract concerns. These are real failures with real consequences for real people, particularly vulnerable children.
At Polyspiral, we use two AI tools: Claude (by Anthropic) and Microsoft Copilot. Here's why:
Claude is built by a company that genuinely prioritises safety from the ground up.
Microsoft brings decades of enterprise-grade security and compliance to AI.
Google Gemini has committed to Safety by Design principles and prohibits child sexual abuse material. However, Common Sense Media—a leading nonprofit that rates technology for children—rated both Gemini's "Under 13" and "Teen Experience" platforms as "High Risk." Their assessment found that these appear to be adult versions with filters added, rather than platforms designed for child safety from the ground up. The filters still expose children to inappropriate material and fail to recognise serious mental health symptoms.
While Google's policies exist, the implementation hasn't met the standards we require for our business partners.
Just as we're committed to ethical AI partnerships, we're equally committed to environmental responsibility. AI has a significant carbon footprint—training large language models and running queries consumes substantial energy. This is an area of ongoing concern and development.
Our commitment: As more eco-friendly AI solutions are developed, we will evaluate and switch to those options. Ethics isn't just about child safety—it's also about planetary responsibility. We'll continue monitoring developments in this area.
Just as we moved Polyspiral to 100% renewable energy hosting in 2018, we're prepared to lead the way in choosing environmentally responsible AI tools as they emerge.
The biggest misconception we encounter is that "AI" means "ChatGPT." It's understandable—OpenAI's marketing has been exceptional. But accepting that conflation is like saying "photocopying" means "Xeroxing" or "searching" means "Googling." It limits your options and obscures the ethical differences between providers.
There are multiple AI assistants, each with a different approach to safety, ethics, and environmental responsibility. Choosing between them matters.
You might wonder: "I'm a small business owner, not a tech company. Why should I care which AI my web designer uses?"
Here's why it matters: just as our choice of tools reflects our values, the tools behind your website reflect yours.
At Polyspiral, ethics matter. That's not just a tagline—it's reflected in our 100% renewable-energy hosting, our long-standing client relationships, and our careful choice of AI partners.
We will continue to monitor the safety records of our AI partners and to evaluate more eco-friendly alternatives as they emerge.
We'd love to hear your thoughts. Have you considered which AI tools you or your service providers use? Do ethics factor into your technology decisions?
LinkedIn Poll: In light of recent AI safety revelations, which AI assistant do you use in your business?
Visit our LinkedIn page to vote and share your perspective.
Since 2010, Polyspiral has been creating award-winning websites powered by 100% renewable energy for small businesses and charities across Suffolk and Essex. We believe great web design goes hand-in-hand with ethical business practices—from our environmental commitments to our choice of technology partners.
Need a web designer who shares your values? Get in touch:
Because ethics matter—in hosting, in AI, and in every choice we make.