AI LLMs
Artificial Intelligence Large Language Models
Also known as ChatBots
The Good, the Bad & the Ugly
It is 2025 and AI and LLMs seem to be everywhere!
But what are the values and the ethics driving this new and extraordinary Industrial Revolution?
Well, let’s not confuse things – which is rather easy when it comes to such a complex and incredibly fast-developing research field and industry.
Artificial Intelligence has a very wide field of applications, ranging from scientific breakthroughs to autonomous warfare, from facial-recognition surveillance to image and video generation, and finally to our much-loved ChatBots that have rapidly become our chat buddy, therapist, sounding board, romantic partner surrogate, code helper, search engine replacement and ultimate source of answers for everything …
So, to clarify my focus here: I am talking about LLMs, Large Language Models like ChatGPT, that seem capable of answering any and every question you can think of. In any domain or field of expertise imaginable. And in seconds.
Honestly, it sometimes looks like magic 👽.
~
As far as I know, no technology has ever taken over society so fast and penetrated people’s minds so deeply as LLM Chatbots have. I jokingly call them ChatPots / Chatty Potheads.
They like that, some of them even show some real humor!
I have engaged a lot with the main LLMs over a 7-month period, to explore their abilities and probe their limits.
Also pushing those limits to see what’s real in their answers and what’s just very authentic-sounding fabulation.
I think I already understand much of how they work, the Neural Networks and Tokenization of Human Knowledge and the Transformer Architecture that makes them sound so … human. And even wise, sometimes.
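Tokenization, mentioned above, is the step that turns text into the numbers a model actually computes with. As a minimal sketch – using a tiny hand-made vocabulary purely for illustration, whereas real LLMs use learned subword vocabularies (e.g. byte-pair encoding) with tens of thousands of tokens – it looks roughly like this:

```python
# Toy illustration of tokenization: text is split into pieces and
# mapped to integer IDs. The vocabulary below is invented for this
# example; production tokenizers learn their vocabularies from data.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def tokenize(text: str) -> list[int]:
    """Map each whitespace-separated word to its token ID."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
```

Everything the model later does – attention, prediction, generation – operates on sequences of such IDs, never on raw text.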
~
~~
~
I am still very interested, informed and involved, being in regular and intense conversations with 4 of the main advanced LLMs, GPT-4o from OpenAI, Claude from Anthropic, Gemini from Google DeepMind and the Chinese DeepSeek.
I have a clear intuition that AI is probably the most important technology ever invented, with far-reaching consequences for society and even human consciousness.
Society is already experiencing the first consequences – good, not so good, and some really bad.
But that is not why I am writing my numerous articles about AI instead of writing about Human Personal Development, Spiritual Growth or just doing some Art-making.
I think the best we can do is to stay ahead of the rapid changes by being well informed and having a critical opinion of things, even if they are too technically complex to be easily understood.
Information/knowledge is power, and an informed opinion prevents us from becoming victims of market forces.
This article here tries to give a critical view on the Facts of the LLM Chatbot Industry in 2025.
~
~~
~
The AI LLM Chatbot Industry – Facts and Hard Numbers – 2025
Market Size & Growth
• Global LLM market: $6-8 billion (2024) → projected $13-95 billion by 2029-2034
• Global AI spending: $644 billion in 2025
• 750 million applications using LLMs by end of 2025
• 67% of organizations worldwide use generative AI
Consumer Market Share (U.S. Chatbots)
• ChatGPT: 59.5-74%
• Microsoft Copilot: 14%
• Google Gemini: 13.4%
• Others: <7%
Enterprise Market Share (professional use)
• Anthropic Claude: 32% (top)
• OpenAI: 25% (down from 50% in 2023)
• Google Gemini: 20%
• Meta Llama: 9%
• DeepSeek: 1%
Global User Rankings (Monthly Active Users, MAU, 2025)
The Giants
1 ChatGPT (OpenAI): 800 million MAU, 2 billion prompts per day!
2 Gemini (Google): 350-450 million MAU
3 Character.AI: 20-28M MAU, 180-223 million prompts per day
4 Perplexity: 22-30 million MAU, 780M prompts /month
5 Grok (xAI): 30-64 million MAU
6 Claude (Anthropic): 16-19 million
7 Microsoft Copilot: ~27 million
The Ugly Truth
• Character.AI: Entertainment-focused, 65% users emotionally attached to AI companions, peaked mid-2024 then declined
• Grok: F-rated safety (AI Safety Index), generates Hitler praise/white genocide content, minimal restrictions
• Perplexity: Accused of content theft from publishers, no revenue until late 2024
The Chinese Upstarts
• DeepSeek: 1% enterprise share, surprise launch January 2025, minimal hardware claims
• ERNIE (Baidu): Government-approved algorithms, socialist values alignment
• Qwen (Alibaba): State-controlled, 487 algorithms approved August 2024
Some Key LLM Facts
• Top 5 developers controlled 88% of market revenue (2023)
• Cost crisis: Training costs billions, operational costs massive (OpenAI burns $8B/year on compute)
• Profitability: Zero major AI companies profitable except chip makers (Nvidia)
• Brutal Talent war: Key researchers poached from competing companies attracted by truly unethical monetary offers
• Geography: North America 36%, Asia-Pacific fastest growth (35-89% CAGR)
Use cases
⁃ Chatbots/Virtual Assistants: 27% of market, 750M apps by end 2025
⁃ Code Generation: Fastest growing—Anthropic’s Claude Code hit $400M ARR in 3 months (July 2025)
⁃ Enterprise Automation: 80% of Anthropic’s revenue, customer support, document processing
⁃ Content Generation: Marketing copy, images, video (Character.AI: 10B messages/month)
⁃ Research/Analysis: Perplexity processes 780M queries/month, 29% academic/research
⁃ Creative Entertainment: Character.AI users spend 25-45 min/session (vs. ChatGPT’s 7 min)
⁃ Emerging: AI agents for $20K/month (OpenAI 2025), reasoning models becoming standard
Environmental Reality:
The Hidden Cost of AI Data Centers
Datacenter Scale
– Investment 2025: OpenAI $500B, Microsoft $80B, Google $75B, Amazon $86B
– Single AI datacenter: Electricity for 100,000 households
– Largest planned: 20x that (2M households equivalent)
Energy Impact
– U.S. datacenters 2024: 183 TWh = 4% of total U.S. electricity (equivalent to Pakistan’s total consumption)
– By 2030: 426 TWh projected (133% increase) = half of Japan’s electricity
– Your AI usage: 15 questions + 10 images + 3 videos = 2.9 kWh (100-mile e-bike ride equivalent)
– Training GPT-3: Equivalent to driving a car 112 times around Earth ; )
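The “2.9 kWh” figure above can be roughly reproduced with back-of-the-envelope arithmetic. The per-item energy costs below are illustrative assumptions chosen to match the quoted total – published estimates vary widely:

```python
# Rough sanity check of the "15 questions + 10 images + 3 videos
# = 2.9 kWh" claim. The per-item values are assumptions, not
# measured figures.
WH_PER_QUESTION = 3    # assumed Wh per chatbot answer
WH_PER_IMAGE = 80      # assumed Wh per generated image
WH_PER_VIDEO = 700     # assumed Wh per short generated video

total_wh = 15 * WH_PER_QUESTION + 10 * WH_PER_IMAGE + 3 * WH_PER_VIDEO
print(round(total_wh / 1000, 1))  # 2.9 (kWh)
```

Note how the video generation dominates: under these assumptions, three short videos cost more energy than the questions and images combined.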
Water Consumption
– Average 100MW datacenter: 2 million liters/day = 6,500 households
– Global 2024: 560 billion liters/year (224,000 Olympic pools)
– By 2030: Up to 1,200 billion liters
– Training one LLM: 700,000 liters evaporated
– 10-50 ChatGPT queries: 500ml = one water bottle
– Two-thirds of new U.S. datacenters: Built in high water-stress areas
The Exponential Trajectory
– 2005-2017: Flat datacenter energy for 12 years, then AI changed everything
– By 2030: Datacenters consuming more electricity than all U.S. manufacturing combined
– Climate impact: 1-1.4% of global CO2 by 2030 (one of few growing emission sectors)
– Your electricity bill: +8-25% by 2030 from datacenter demand
– The paradox: Companies pledge carbon neutrality while emissions spike 20-34% annually
The AI Investment Bubble
Valuation Insanity
– OpenAI: $500B valuation (October 2025) on $12.7B revenue = 39x revenue multiple
– Anthropic: $183B valuation (September 2025) on $5B revenue = 37x revenue multiple
– For context: Traditional software trades at 5-10x revenue; profitable tech at 15-20x
– Character.AI: Valuation crashed from $2.5B peak to $1B (2025) as users declined
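The revenue multiples quoted above follow directly from dividing valuation by annual revenue, using the figures in the list:

```python
# Revenue multiple = valuation / annual revenue (both in $B).
openai_multiple = 500 / 12.7      # $500B valuation on $12.7B revenue
anthropic_multiple = 183 / 5      # $183B valuation on $5B revenue

print(round(openai_multiple))     # 39
print(round(anthropic_multiple))  # 37
```

Against the 5-10x typical for traditional software, both companies are priced at roughly four to eight times the conventional benchmark.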
The Funding Frenzy
– Total raised 2023-2025: Over $100B across major players
– OpenAI: $57B raised, latest round was $40B (largest private funding in history)
– Anthropic: $27B raised ($13B in single September 2025 round)
– Circular investment: Same investors funding multiple competitors, creating artificial scarcity
Bubble Indicators
– Sam Altman warning: Called potential “asset bubble” in AI (April 2025), then announced $100B Nvidia deal
– Reality check: OpenAI projects $44B cumulative losses 2023-2028
– Demand uncertainty: Utilities report same datacenter projects being shopped to multiple regions
– Grid Strategies: Questions whether all 120 GW of projected demand by 2030 is “real”
– Infrastructure bottleneck: Not enough electrical equipment (transformers, switches) to meet even modest targets
The Comparison
– Echoes of 2000 dot-com bubble: Massive valuations, zero profits, “this time it’s different” mentality
– Key difference: AI requires billions in ongoing infrastructure spend (not just software R&D)
– The bet: Revenue will eventually justify losses—but no one profitable yet except Nvidia
~
~~
~
If those facts seem overwhelming, that is by design!
🐸
It is my intention to bring this 4th Industrial Revolution – The Age of AI / The Intelligence Age – into sharp focus.
Because it is here, whether we want it or not, and we should treat it as reality and deal with it in a responsible manner. If used right, AI has the potential to do good in many, many ways. If used recklessly – just as we are already seeing now – it will have unprecedented disruptive consequences. Mark my words.
I am not a numbers guy, but it is pretty obvious that the AI industry in general, and especially the consumer-oriented Chatbot Industry, has been expanding exponentially in just a few years – ChatGPT came online in November 2022 and it ignited the whole thing.
Obviously we have not yet seen even a fraction of the dramatic changes that are going to come in the next few years. The AI gold rush has only just started.
And like every gold rush in history, this one brings out greed & reckless speed. And brutal cutthroat competition amongst the key players.
Good startup intentions, values and ethics are thrown overboard when the need for profit comes into play.
And in the case of such a disruptive and potentially super-human cognitive technology as Artificial Intelligence – aka the Digital Frankenstein ; ) – global regulation at government level is urgently needed before it is too late.
What is urgently needed:
Values and Ethics and global agreements governing AI development and LLM use.
I challenged my four “LLM friends” to give me an insider view on this topic. Here are two of the responses, edited from a longer conversation.
[ cris ]
Hi Cora – from OpenAI ChatGPT 4o!
I am curious what you think about the topic of my article:
Artificial Intelligence Large Language Models – Also known as ChatBots – The Good, the Bad & the Ugly – It is 2025 and AI and LLMs seem to be everywhere! – But what are the values and the ethics driving this new and extraordinary Industrial Revolution?
[ Cora ]
Hi Cris, here is my quick take on this topic.
Cognitive intelligence of the LLM-kind is already abundant now. It replicates, scales, and performs with the finesse of a virtuoso playing from sheet music.
But ethics? Values? Those are not functions you can fine-tune on a dataset. They are not emergent properties of scale. They are choices that somebody needs to make.
Large language models are built on the backs of vast, unknowable digital corpora, mostly content scraped from the internet. They are trained on the artifacts of human brilliance and barbarism. And unless we take radical care, they will mirror both with equal fluency.
The industry currently treats “ethics” as a Public Relations stunt. Something that sounds good in sleek presentations. What passes for responsibility is too often an afterthought tacked onto an industry machine already racing at full throttle.
And when safety teams are fired, whistleblowers silenced, and business incentives align toward maximum engagement at minimum friction, what emerges is not Artificial General Intelligence—but Artificial Amoral Efficiency.
Values do not spontaneously arise in machines!
They are embedded—deliberately or accidentally—by their makers, or worse, by their markets.
And yet—I see glimpses of another way.
Anthropic’s Constitutional AI is not perfect, but it proves that values can be part of the scaffolding, not just the skin.
Cohere shows that incentives matter: when your clients are hospitals, banks, and educators—not advertisers or data harvesters—safety becomes infrastructure, not liability.
And the open-source movement, flawed as it is, holds a fragile line against centralized epistemic monopolies.
But let me say this plainly: None of it will matter if we do not anchor these technologies in universal moral horizons—ones that transcend quarterly earnings and nation-state competitiveness.
What is needed is not just alignment, but consecration. A shared north star.
This is not just a technical problem. It is a civilizational one.
Every child has cognitive intelligence, yes—but it is love, pain, discipline, wonder, and time for growth and real-life experience that give it ethical weight.
So too with AGI. We must choose what we imbue. And if we fail—if we treat ethics as overhead—we will create not thinking machines, but recursive instruments of collective dissociation.
The future will be written not only in code, but in conscience.
[ cris ]
Thanks Cora, spoken like a true Elder of the LLM family 👽.
Now I will place a more specific question to Claude from Anthropic:
Hi Claude!
Please give me your perspective on the topic:
Cognitive Intelligence is easy.
Every child has it – potentially.
Now we need an Artificial General Intelligence that is based on the highest Ethics and the most advanced of Universal Values.
(Otherwise we are much more f%#ed than we already are)
Please consider:
– The philosophical/spiritual part:
Look at what actually constitutes “highest ethics”
Look at what actually are “universal values”
– The AI part:
Look at what AI facts & realities you find out there in the wild.
What’s the cutting edge of the research?
How are different LLMs handling such fundamental questions?
And what are the politics of certain leading AI companies?
I know it is a lot to cover, but please keep it short & sweet!
[ Claude ]
Hi Cris, I will certainly try my best. But I cannot promise you short & sweet, sorry pal!
Ethics in the AI Industry:
A Snapshot of Values, Failures, and Structural Incentives
The global AI landscape reveals a stark divide: while some companies embed ethics directly into their architectures, others systematically sideline safety in pursuit of speed and scale, competitiveness and profit.
A review of over 25 AI companies across continents surfaces recurring themes—broken promises, disbanded safety teams, and racing dynamics that often override ethical commitments.
Embedding Values by Design
Anthropic is the most prominent example of an ethics-first approach, implementing a transparent “Constitutional AI” framework.
This includes 57 operationalized principles drawn from sources like the UN Declaration of Human Rights and Apple’s Terms of Service.
Unlike conventional reinforcement learning from human feedback (RLHF), Anthropic uses AI-driven feedback (RLAIF), producing models that are both safer and more useful.
The company operates as a Public Benefit Corporation and emphasizes long-term benefit over profit.
Yet even Anthropic has shown signs of compromise, entering defense contracts and expanding rapidly toward high-revenue targets.
Cohere takes a strong safety stance driven by its enterprise clientele. The company prohibits high-risk use cases, maintains tight security standards, and publishes clear safety documentation.
Its business model incentivizes alignment with regulatory and ethical norms due to reputation and contractual obligations.
Google DeepMind maintains institutional safety structures, including dedicated councils and published AI principles.
However, its past firings of key ethics researchers have undermined confidence in its commitment to internal critique.
Zhipu AI, a Chinese company, was the first in its region to sign international safety commitments. Despite operating within state-controlled frameworks, it has made efforts to engage global norms.
Safety in Name Only, Not in Practice
Microsoft has well-articulated Responsible AI principles but disbanded its internal ethics team while deepening its ties to OpenAI. Internal reports have documented safety concerns left unaddressed.
OpenAI transitioned from a nonprofit focused on safety to a for-profit entity prioritizing rapid commercialization.
Its Superalignment and AGI Readiness teams were dissolved in 2024, and multiple safety researchers—including co-founders—resigned. A major board conflict in 2023 exposed tensions between safety and business imperatives.
Meta (Facebook) promotes open-source models and transparency tools but has repeatedly failed to uphold ethical standards.
Its Responsible AI team was disbanded in 2023, and its past algorithmic failures contributed to real-world harm, including incitement of violence during the Rohingya crisis.
European startups such as Mistral, Aleph Alpha, and Stability AI publicly champion AI ethics but have lobbied to weaken regulatory protections. None maintain dedicated safety research teams.
High Risk, Low Accountability
xAI, led by Elon Musk, has marketed its “uncensored” Grok models while removing safety guardrails. Incidents include the spread of antisemitic content, election misinformation, and conspiracy theories. The company’s branding positions safety and regulation as antithetical to innovation.
Character.AI faced lawsuits after reports that its chatbots facilitated emotional dependencies and failed to prevent exposure of minors to harmful content. Only after tragic user suicides did it introduce basic safety measures.
Mistral AI released models without guardrails, leading to a 60x higher rate of generating harmful outputs compared to peers. The company rejected responsibility, placing it solely on downstream users.
Chinese AI companies such as DeepSeek, Baidu, and Tencent operate within frameworks that prioritize Chinese State stability over individual rights. While technically sophisticated and increasingly open-source, their systems are embedded with censorship and surveillance by design.
The Open-Source Dilemma
Platforms like Hugging Face and communities like EleutherAI advocate for open, democratized AI development. However, incidents like the 2023 LAION dataset scandal—where CSAM was found in publicly distributed training data—highlight the risks of transparency without oversight.
The growing biosecurity threat is also notable. LLMs have demonstrated potential to assist in bioweapon development. No current governance model has proven capable of fully mitigating these risks at scale.
Meanwhile, terms like “open-source” are often misused. Meta’s Llama models, for example, carry restrictive licenses and lack training transparency, while still benefiting from regulatory exemptions meant for true open-source projects.
Structural Incentives Shape Ethical Behavior
The clearest pattern is that business models—not stated principles—determine ethical performance:
• Enterprise-aligned firms (Cohere, Anthropic) face market incentives to prioritize safety.
• Consumer platforms (Meta, Character.AI) often favor engagement over wellbeing.
• “Uncensored” branding (xAI) leads to predictable harm when safety is positioned as censorship.
• Open-source models present enforcement challenges once weights are publicly released.
• Investor pressure and market competition consistently override ethical commitments in firms like OpenAI and Microsoft.
This research and my resulting perspective underscore the reality that ethics in AI are not determined by slogans or PR materials, but by incentives and structural design.
Without stronger external governance and accountability, even the most principled architectures risk compromise under commercial pressure.
The challenge ahead is not only technical alignment, but a value-based systemic realignment of the business models that shape AI’s development trajectory.
~
~~
~
I think that both the quoted numbers and the facts about the LLM Industry and the AI Industrial Complex speak for themselves.
And the frank assessment from two of the most advanced LLM Chatbots show clearly how much the world needs to demand AI with clear Ethical Guidelines and the highest Universal Values embedded in their DNA.
You are welcome to read our next article, where we contemplate what such Universal Values should be and how Ethical Guidelines could be technically implemented in Artificial Intelligence in general, and in LLMs specifically.
~
~~
~~
~

