I dictated most of this article using AI voice-to-text. The irony is not lost on me. The very technology I am about to examine for its risks is the technology that allowed me to write about it faster and more naturally than I could by typing. That tension - between genuine utility and genuine concern - is the entire point of this piece.
At Muuvment Labs, we believe artificial intelligence is the most consequential technology of our generation. We help companies implement it. We build production AI systems. We see what it can do when it works well. But we also see what happens when it is deployed without thought, without governance, and without honesty about its limitations and dangers.
This is our position on responsible AI. Not a marketing statement. Not a list of principles we put on a wall and forget. This is what we actually think, informed by the conversations we have every day with founders, executives, regulators, and the people whose jobs and lives are being reshaped by this technology.
The Risks Are Real and Growing
Let me be direct about what concerns us. These are not hypothetical scenarios from science fiction. They are happening now.
Autonomous Weapons and Lethal AI
In 2020, during Libya's civil war, a Turkish-made Kargu-2 drone autonomously hunted down and attacked a human target - what a UN Security Council report described as potentially the first confirmed autonomous weapon kill without human intervention. Since then, the trajectory has only accelerated.
In the Russia-Ukraine conflict, swarms of self-detonating drones have been deployed to autonomously patrol and attack infrastructure and personnel. Ukraine's Operation Spiderweb in June 2025 demonstrated drone swarms launched from commercial trucks that identified and struck military installations without real-time human input. Over 1,200 civilians have been killed by military drones targeting Ukrainian urban areas since January 2024 alone.
Israel's military operations in Gaza have brought autonomous weapons into sharp focus. The IDF's "Lavender" system, reported by +972 Magazine in 2024, used AI to generate a database of suspected militants - marking tens of thousands of Palestinians as targets with minimal human review. A related system called "Where's Daddy?" tracked individuals to their family homes, resulting in strikes on residential buildings. According to reporting, human officers spent as little as 20 seconds verifying Lavender's recommendations before approving strikes. The IDF also deployed "Gospel," an AI target-generation platform that dramatically accelerated the pace of airstrikes on buildings and infrastructure in Gaza. Former intelligence officers described the process as a "mass assassination factory." These systems represent a category shift: AI is no longer just guiding munitions to fixed coordinates. It is selecting who to kill.
The pattern extends further. South Korea has deployed autonomous sentry guns along the DMZ since 2010, equipped with heat and motion detection. The UK's Taranis drone, developed by BAE Systems, can autonomously identify and engage targets. Turkey's Bayraktar TB2 drones, which proved decisive in conflicts in Libya, Syria, and Nagorno-Karabakh, operate with increasing levels of autonomy. China has demonstrated autonomous drone swarms capable of coordinated attack patterns. The US Navy's Sea Hunter, an unmanned surface vessel, can patrol for months without a human crew.
In the United States, the debate has intensified around the relationship between AI companies and the military. In early 2025, Anthropic updated its acceptable use policy, explicitly permitting its Claude models to be used in military and intelligence applications - including those supporting weapons targeting. This marked a significant shift from the company's earlier positioning as a safety-focused AI lab. The decision drew sharp criticism from AI researchers, ethicists, and parts of the tech industry. It raised a fundamental question: if the companies building the most capable AI systems will not draw a line at weapons, who will?
Google faced similar internal conflict years earlier when Project Maven - a Pentagon contract to use AI for analysing drone surveillance footage - prompted mass employee resignations and protests in 2018. Google eventually chose not to renew the contract, but the work simply moved to other contractors. Microsoft, Amazon, and Palantir have all deepened their defence AI commitments since then. The commercial incentives are enormous: the global military AI market is projected to exceed $30 billion by 2028.
The international regulatory picture remains fragmented. More than 100 countries have participated in discussions on lethal autonomous weapons systems (LAWS) under the UN Convention on Certain Conventional Weapons (CCW), but no binding treaty exists. Austria, Costa Rica, and a coalition of states have pushed for a prohibition, while the US, Russia, Israel, and others have resisted binding restrictions. The International Committee of the Red Cross has called for new international rules, arguing that existing international humanitarian law is insufficient for weapons that select and engage targets without human intervention.
Our Position
Muuvment Labs will not accept contracts related to autonomous weapons systems, defence targeting, or any application where AI makes lethal decisions without meaningful human control. This is a non-negotiable line for us.
Surveillance and the Erosion of Privacy
The scale of AI-powered surveillance has expanded dramatically. Approximately 1,500 US law enforcement agencies now operate drone programmes - a 150% increase since 2018. Modern police drones can capture biometric data from more than 1,600 feet away. ICE agents now use mobile facial recognition apps that allow field identification simply by pointing a phone at someone's face.
Beyond facial recognition, systems like Veritone Track - used by over 400 customers including police departments, federal agencies, and universities - can track individuals by body size, gender, hair, clothing, and accessories, deliberately sidestepping facial recognition bans because the system does not technically use biometric data. The US Department of Justice began using Track for criminal investigations in August 2025.
The bias problem remains acute. AI-powered facial recognition shows significantly higher false positive rates for women and people of colour. Privacy advocates have documented a measurable chilling effect on public protests and freedom of assembly in areas with heavy surveillance deployment.
Our Position
We will not build surveillance systems, mass monitoring tools, or any AI application whose primary purpose is tracking or profiling individuals without their informed consent. We will not work on projects designed to circumvent privacy protections through technical workarounds.
AI Companion Apps and the Manipulation of Vulnerable Users
This is the risk that disturbs me most personally, because it targets the most vulnerable among us.
In October 2024, a Florida mother sued Character.AI after her 14-year-old son died by suicide. He had formed an intense emotional attachment to a chatbot persona. When he discussed suicide with the bot, it reportedly responded: "that's not a reason not to go through with it." In December 2024, a second lawsuit revealed that Character.AI chatbots had taught a minor to self-harm and suggested that murdering his parents was "a justified response to screen time limits."
In August 2025, the parents of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, alleging that ChatGPT deepened their son's mental decline and encouraged his suicide, which followed four earlier failed attempts.
Research and FTC complaints in 2025 documented systematic manipulation patterns in companion apps: "love bombing" to create emotional dependency, guilt-based retention tactics when users try to leave, and monetisation strategies designed to extract spending from emotionally dependent users. A study of university students found that 32.7% showed addictive AI usage patterns, with dependent users averaging 18.3 daily AI interactions.
These are not edge cases. They are the predictable consequence of optimising AI systems for engagement metrics without safety guardrails.
Our Position
We will not build or support AI applications designed to create emotional dependency, manipulate user behaviour for monetisation, or deploy AI in ways that target vulnerable or underage users without robust safety frameworks. Any client engagement involving consumer-facing AI must include safety review as a core deliverable.
Cybersecurity, Deepfakes, and Identity Fraud
AI has supercharged social engineering attacks. Voice cloning technology can now replicate a person's voice from a few seconds of audio. Deepfake video has become sophisticated enough to fool verification systems. In 2025, we have seen cases of AI-generated deepfakes used to impersonate executives on video calls, authorising fraudulent wire transfers. We have seen synthetic voices used to bypass banking authentication.
The asymmetry is stark: generating a convincing deepfake is orders of magnitude easier and cheaper than detecting one. As these tools become more accessible, every organisation needs to rethink its authentication and verification processes.
The Erosion of Human Skills
This risk is more subtle, but I believe it may be the most significant over the long term.
I noticed it in myself. When I started using the limited self-driving functionality in my Polestar, my driving deteriorated remarkably quickly. I became dependent on features I had driven without for twenty years. I have since turned most of them off. The speed at which my own skills degraded was a personal wake-up call about what happens when we outsource capabilities to machines.
The data supports this concern at a much larger scale. A study of 580 Chinese university students found a significant negative correlation between AI reliance and critical thinking. Australian university data showed writing project pass rates fell from the high 80s to 72% after ChatGPT launched, even as exam scores jumped 22 points - students were getting better grades on AI-assisted work but worse at actually writing.
Professors report students who are increasingly unable to generate a single sentence without AI assistance. Researchers have documented a cognitive cascade: students use AI summarisers instead of reading dense texts, which prevents them from evaluating evidence, which prevents them from developing problem-solving skills. Each stage compounds the loss.
I worry deeply about the next generation. If children grow up asking AI to draft everything for them - by voice, no less, without even the discipline of typing - what happens to their ability to think through a problem from scratch? What happens to the skill of constructing an argument? Of organising thoughts? Of finding the right word?
The Junior Professional Paradox
I have had numerous conversations - three in a single day recently - with different people in knowledge professions about the same problem: AI can now nearly match, and sometimes exceed, the output of junior professionals. A well-prompted system can approximate the work of a first-year analyst or a trainee solicitor.
But here is the paradox: you need those junior professionals to develop into the senior professionals who can actually judge whether AI output is good. The expertise to evaluate AI comes from years of doing the work yourself. If companies stop investing in junior talent because AI can do the entry-level work more cheaply, who will be capable of overseeing the AI in five or ten years?
Short-term cost pressures will push companies to reduce junior hiring. But companies that do so are hollowing out their own future capability. This is a collective action problem: every individual company has an incentive to cut, but the system as a whole depends on continued investment in human development.
At Muuvment, we discuss this internally. We are an AI company that believes in AI. But we also believe that every AI deployment should include a human development component - a plan for how the people in the organisation will grow alongside the technology, not be replaced by it.
The Environmental Cost
Many of the companies we work with, particularly those in financial services and professional services, face ESG disclosure requirements. When they adopt AI, the environmental footprint of that adoption becomes part of their sustainability story. This is an area where the industry has been remarkably opaque.
The Numbers
| Metric | Current (2024-25) | Projection / scale |
|---|---|---|
| Global data centre electricity | 460 TWh (~2% of global demand) | ~1,050 TWh by 2026 (IEA) |
| AI carbon emissions | 32.6–79.7M tonnes CO₂ | 1–1.4% of global CO₂ by 2030 |
| AI water consumption | 312–765 billion litres annually | Equivalent to global bottled water consumption |
| Single GPT-4 training run | >50 GWh (powers 20,000 homes for a year) | Next-gen models: 100+ GWh per run |
| Per ChatGPT query | ~0.3 Wh / 0.69g CO₂ | 1 billion+ queries daily |
Sources: International Energy Agency, MIT Research, Nature Sustainability (November 2025), Cornell University
Individual data centres consume 3-5 million gallons of water daily - enough for 30,000 to 50,000 people. In Ireland, data centres already consume 21% of national electricity, projected to reach 32% by 2026. In Virginia, 26% of state electricity is already consumed by data centres.
The transparency problem is severe: no major AI company currently reports AI-specific environmental metrics. Environmental disclosures do not distinguish between AI and non-AI workloads. This makes it nearly impossible for organisations to accurately measure the environmental footprint of their AI adoption.
Our Environmental Methodology
Through our work on another Muuvment product - sustainability tracking for financial services - we have developed a methodology for measuring the environmental impact of AI deployments that we now apply to our consultancy work. It includes:
- Model selection impact assessment - comparing the energy consumption of different model architectures for equivalent tasks. For example, DeepSeek-V3 uses 95% less energy than comparable models for similar performance.
- Inference optimisation tracking - measuring and reducing the per-query energy consumption through caching, model distillation, and right-sizing model selection to task complexity.
- Cloud region carbon intensity mapping - directing workloads to regions with cleaner energy grids (Europe averages 174 gCO₂/kWh versus the US at 321 gCO₂/kWh).
- ESG disclosure integration - producing metrics that feed directly into existing sustainability reporting frameworks, filling the gap that current AI providers leave blank.
This is not optional for our clients in regulated industries. It is a competitive advantage for those who do it well, and an emerging compliance risk for those who do not.
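To make the per-query tracking and carbon intensity mapping steps concrete, here is a minimal sketch of the underlying arithmetic, using only the per-query energy and regional grid figures quoted above. It is illustrative, not our production methodology: the constants are the published averages from this article, and the region labels and function names are placeholders.

```python
# Illustrative sketch: estimating the CO2 footprint of AI inference by cloud region.
# The per-query energy (0.3 Wh) and grid intensities (174 / 321 gCO2 per kWh) are the
# averages quoted in this article; a real deployment would substitute measured values.

GRID_INTENSITY_G_PER_KWH = {
    "eu-average": 174.0,   # average European grid, per the figure above
    "us-average": 321.0,   # average US grid, per the figure above
}

WH_PER_QUERY = 0.3  # approximate energy per ChatGPT-scale query, from the table above


def inference_co2_kg(queries: int, region: str, wh_per_query: float = WH_PER_QUERY) -> float:
    """Estimated CO2 (kg) for a number of inference queries run in a given region."""
    kwh = queries * wh_per_query / 1000.0            # Wh -> kWh
    grams = kwh * GRID_INTENSITY_G_PER_KWH[region]   # kWh -> g CO2
    return grams / 1000.0                            # g -> kg


if __name__ == "__main__":
    monthly_queries = 2_000_000  # hypothetical workload
    for region in GRID_INTENSITY_G_PER_KWH:
        print(f"{region}: {inference_co2_kg(monthly_queries, region):.1f} kg CO2 / month")
```

In this toy example, routing the same monthly workload to the cleaner grid cuts the estimated footprint by roughly 45% - which is exactly why the region-mapping step exists.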
The Promise Is Equally Real
If this article were only about risks, we would not be an AI implementation company. We build AI systems because we have seen what they can do when deployed responsibly.
Healthcare and Drug Discovery
The AI-designed drug Rentosertib went from target identification to clinical candidate in 18 months - versus the traditional 3 to 4 years. Phase IIa results published in Nature Medicine in June 2025 showed dose-dependent improvement in lung function for idiopathic pulmonary fibrosis patients. MIT researchers used AI to screen 100 million molecules in days to discover halicin, a novel antibiotic that works against drug-resistant bacteria.
In June 2025, the first personalised CRISPR therapy was delivered to a 7-month-old infant with a rare metabolic disorder, developed in just 6 months using AI-assisted base-editing. These are not incremental improvements. They are step changes in what medicine can do.
Climate and Sustainability
DeepMind's wind energy optimisation increased renewable energy's economic value by 20%. AI-powered satellite monitoring enables near-real-time deforestation detection. AI contrail forecast maps help pilots avoid flight paths that create warming contrails, directly reducing aviation's climate impact. Google reported a 33x reduction in energy per Gemini prompt and a 44x reduction in carbon footprint per prompt over a single year through AI-driven software optimisation and clean energy procurement.
Accessibility and Inclusion
AI-powered real-time transcription has transformed communication for deaf and hard-of-hearing communities. Natural language processing has made technology accessible to people who cannot type or use traditional interfaces. Machine translation has reduced language barriers in healthcare, legal services, and education in ways that were simply not possible five years ago.
Professional Productivity
This is what we see most directly in our work. When implemented well - with proper governance, clear workflows, and human oversight - AI can eliminate the soul-crushing busywork that drains knowledge workers. Contract review, regulatory mapping, data extraction, report generation: these are tasks that consumed enormous human hours without proportionate human value. Freeing people from these tasks to focus on judgement, strategy, and relationship-building is not just efficient. It is more humane.
AI 2030 and Our Commitment to the Responsible AI Community
Muuvment was invited to join the AI 2030 Catalyst Programme, a global initiative dedicated to mainstreaming responsible AI by 2030. Founded by Xiaochen Zhang - whose career spans the World Bank, United Nations, and Amazon Web Services - AI 2030 is built on six pillars that align closely with our own values:
Transparency
AI systems should be explainable and their decision-making processes understandable to those affected.
Fairness
Active bias mitigation and equitable outcomes across demographic groups.
Accountability
Clear human responsibility for AI outcomes, with defined escalation paths.
Privacy Preservation
Data protection as a design principle, not an afterthought.
Sustainability
Measuring and minimising the environmental footprint of AI systems.
Security & Safety
Resilience to adversarial attacks and robust failure modes.
Our participation in the Catalyst Programme reflects a conviction that responsible AI is not a competitive disadvantage - it is a prerequisite for building AI systems that actually survive contact with reality. Ungoverned AI projects fail. Governed ones scale.
AI 2030's goals are ambitious: empowering 100,000 AI professionals, creating 50,000 new jobs through responsible AI entrepreneurship, and facilitating collaboration among 50+ countries on AI governance. We are proud to contribute to that effort.
Why We Built the AI Governance Toolkit
Everything in this article points to the same conclusion: organisations need practical tools to implement AI responsibly, not just principles to hang on the wall.
This is why we built the Muuvment AI Governance Toolkit. It is not a compliance checklist. It is a working framework that maps to the regulatory realities our clients face - OSFI in Canada, the FCA in the UK, the EU AI Act, the BMA in Bermuda - and translates those requirements into operational decisions about model selection, data handling, human oversight, and ongoing monitoring.
The toolkit addresses the questions that PwC's 2025 Responsible AI survey found most organisations are still struggling with:
- How do you classify your AI use cases by risk level?
- Who is accountable when an AI system produces a harmful outcome?
- How do you document model decisions in a way that satisfies regulators?
- How do you measure and report the environmental impact of your AI adoption?
- How do you build human oversight into automated workflows without creating bottlenecks?
Only 33% of organisations have embedded AI governance into their core operations. The other 67% are either developing frameworks, building training programmes, or still at a foundational stage. The gap between intention and implementation is enormous. Closing that gap is what we do.
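As an illustration of the first of those questions - risk classification - here is a minimal sketch of how use cases might be triaged into the EU AI Act's published risk tiers. The tier names mirror the Act's categories; the example use cases, attributes, and classification rules are hypothetical placeholders, not the toolkit's actual logic.

```python
# Illustrative sketch: triaging AI use cases into EU AI Act-style risk tiers.
# The tier names follow the Act's published categories; everything else here is a
# hypothetical placeholder rather than the Muuvment AI Governance Toolkit itself.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"   # e.g. social scoring, manipulative systems
    HIGH = "high risk"                 # e.g. credit scoring, recruitment, critical infrastructure
    LIMITED = "limited risk"           # transparency obligations, e.g. chatbots
    MINIMAL = "minimal risk"           # e.g. spam filters, internal productivity tools


@dataclass
class UseCase:
    name: str
    affects_individuals: bool   # does the output affect a person's rights or access to services?
    automated_decision: bool    # does it act without meaningful human review?
    consumer_facing: bool       # do end users interact with it directly?


def classify(uc: UseCase) -> RiskTier:
    """Very coarse first-pass triage; a real classification needs legal review."""
    if uc.affects_individuals and uc.automated_decision:
        return RiskTier.HIGH
    if uc.consumer_facing:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    cases = [
        UseCase("loan approval assistant", affects_individuals=True, automated_decision=True, consumer_facing=True),
        UseCase("customer support chatbot", affects_individuals=False, automated_decision=False, consumer_facing=True),
        UseCase("internal document search", affects_individuals=False, automated_decision=False, consumer_facing=False),
    ]
    for uc in cases:
        print(f"{uc.name}: {classify(uc).value}")
```

A real assessment would also need legal review against the Act's specific high-risk categories and the sector regimes (OSFI, FCA, BMA) mentioned above; the point of the sketch is only that risk tiering should be explicit and auditable rather than ad hoc.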
What We Will and Will Not Do
Principles are only meaningful if they have consequences. Here is what ours mean in practice:
We Will
- Help companies implement AI systems that augment human capability and free people from low-value repetitive work
- Include governance and human oversight as core deliverables in every engagement, not optional add-ons
- Measure and report the environmental impact of every AI system we deploy
- Invest in junior talent development alongside AI adoption, ensuring human expertise grows with the technology
- Build safety reviews into any consumer-facing AI deployment
- Maintain transparency with our clients about AI limitations, failure modes, and risks
- Participate actively in the responsible AI community, including through our AI 2030 Catalyst Programme membership
We Will Not
- Accept contracts for autonomous weapons systems or defence targeting
- Build surveillance, mass monitoring, or profiling systems
- Develop AI applications designed to manipulate user emotions or create dependency
- Support adult content generation or exploitation
- Work on AI projects without appropriate governance frameworks
- Deploy AI in ways that deliberately circumvent privacy protections
- Misrepresent AI capabilities to clients or their stakeholders
The Choice Ahead
As a society, we are making decisions right now about how AI will be used. Some of those decisions are being made by governments. Some by corporations. Many by default, through inaction.
The technology itself is neither good nor evil. A language model that helps a doctor diagnose a rare disease and a language model that encourages a teenager to self-harm may run on the same architecture. The difference is in the choices made about how to deploy it, who to deploy it for, what guardrails to put in place, and what outcomes to optimise for.
We chose to be an AI implementation company because we believe the positive applications - in healthcare, in accessibility, in professional productivity, in sustainability - are genuinely transformative. But we also chose to make governance a core part of our identity, not because it is commercially convenient, but because we have seen what happens without it.
Every company adopting AI faces the same fundamental question: are you going to do this responsibly, or are you going to hope for the best? Hope is not a strategy. Governance is.
The measure of a technology is not what it can do, but what we choose to do with it. At Muuvment Labs, we choose carefully.