Revision as of 10:45, 10 January 2026

AI is Fake For Goodness Sake

This is an article about "AI". The reading level for this article is High School to adult. An interest in business will help.


These are the take-aways up front:

AI is here to stay. It'll play a role in your life in the future. However, as of 2026, most of the over-the-top AI predictions that you see in the media are simply wrong.

AI is presently a mania similar to the dot-com mania or the Dutch Tulip mania. Or crypto, though that didn't go so far. It isn't real. This doesn't mean that there won't be AI in the future. It does mean that the current AI frenzy will collapse first.

For a nice introduction to manias of this type, read the article linked below:

https://wiki.minetest.org/misc/manias.html


I asked an AI about this. The AI pointed out that Dutch Tulips had no real value, whereas the AI itself is able to do things. This is true, but the point is that we're in an AI mania, not that AI has no value.

The heart of the AI mania is the idea that businesses are going to be able to replace people en masse, large parts of the workforce, with software. Business profits will then skyrocket because you don't need to pay software a salary. The idea is very attractive to CEOs. They want to believe it, and so they'll continue to believe it for as long as possible.

At the same time, the people who have been fired are supposedly going to rush to pay hundreds or thousands of dollars per month for AI services that aren't even defined yet.

"Food and rent can wait. Must pay for AI !"

None of that is going to happen. Not en masse. Parts will happen naturally over time.


The AI mania is mostly about two types of software: LLM and GenAI. Neither of those is actually AI or "artificial intelligence". Both of them have serious problems.

So, you should expect AI to end up as a useful tool. But AI as a transformation comparable to the arrival of God on Earth? Not so much. It'll be more like the way that most people started to use websites and smartphones over time.

There will be an AI crash or at least a pullback. Useful parts of the AI ecosystem will survive just as happened with the dot-coms after the 1990s. The rest will be gone just as most of the original dot-com companies are gone.

The rest of this article discusses the problems with LLM and GenAI.

Some AI Basics

This section is AI Basics 101. If you don't need to know how AI works, you can skip this part.


A brain is made of billions of cells called neurons in addition to other types of cells. Each neuron is a small analog computing unit. Analog means that things are just approximate and not exact like in digital computers. A neuron takes multiple inputs and processes them to generate an output.


A "neural net" is a toy version of a set of neurons written in code. It's only a toy. Real neurons are far more complex.

If you know a programming language such as 'C' or Python, you can code a simple neural net fairly easily. You don't need a college degree or special training. I did this myself for my honors thesis 45 years ago.
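As an illustration of how simple this really is, here is a toy neural net of the kind described above: two inputs, two hidden neurons, and one output, trained to learn the XOR function with hand-written backpropagation. It's a sketch in plain Python, and the learning rate, epoch count, and random seed are arbitrary choices, not recommendations.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the XOR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Random starting weights; index 2 of each row is a bias term.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    # Each neuron: a weighted sum of its inputs plus a bias,
    # squashed through a sigmoid to produce its output.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()

# Train with hand-written backpropagation (gradient descent, rate 0.5).
for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)  # output-layer error term
        d_hid = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_out[i] -= 0.5 * d_out * h[i]
        w_out[2] -= 0.5 * d_out        # output bias
        for i in range(2):
            for j in range(2):
                w_hidden[i][j] -= 0.5 * d_hid[i] * x[j]
            w_hidden[i][2] -= 0.5 * d_hid[i]  # hidden bias

after = total_error()
print(f"error before training: {before:.3f}, after: {after:.3f}")
```

Each "neuron" here is nothing more than a weighted sum squashed by a sigmoid. That same basic unit, multiplied by billions and rearranged, is still what the commercial systems are built from.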

Commercial neural nets are more advanced these days, though not at the level of real neurons. However, the core principles of operation have remained largely the same over the decades.

In short, the magic isn't in the code but in data such as books and pictures that is fed into the code and ground up into what you can think of as data soup.

The books and pictures are still there. However, they are in bits and pieces that are mixed together and scattered about. If you know what a hologram is, the data soup is very similar to a hologram.


The AI furor that is in the news is mostly about "LLM" and "GenAI". LLM and GenAI are two types of software that are based on neural nets.

LLM is designed to write text or code in text. The term "LLM" stands for Large Language Model. GenAI is designed to make pictures or music or other types of media. The term "GenAI" stands for Generative AI.

All Hail AI Scale

The AI mania started in part due to the belief that the power of AI software was going to "scale". Here, "scale" meant that adding hardware would improve AI performance proportionately, and that this would continue long enough to produce human-equivalent AIs or "AGIs". Note: AGI is short for Artificial General Intelligence.

In fact, scaling has happened, but net improvements have declined over time. There is no reason to believe that scaling from current levels will produce massive improvements in AI.
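Published scaling-law studies report that model loss tends to fall roughly as a power law in compute, and a power law means diminishing returns by construction. A toy calculation makes the pattern visible; the exponent used here is an arbitrary illustrative value, not a measured figure from any real model.

```python
# Toy power-law scaling curve: loss falls as compute ** -alpha.
# ALPHA = 0.05 is an illustrative assumption, not a measured value.
ALPHA = 0.05

def toy_loss(compute):
    return compute ** -ALPHA

improvements = []
prev = toy_loss(1)
for k in range(1, 11):  # ten successive doublings of hardware
    cur = toy_loss(2 ** k)
    improvements.append(prev - cur)
    print(f"{2 ** k:>5}x compute: loss {cur:.4f}  (gain {prev - cur:.4f})")
    prev = cur
```

Every doubling of hardware buys a smaller gain than the doubling before it. That is the shape of the curve the industry is spending billions to climb.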

Additionally, AGI based solely on the two current focus areas of LLM and GenAI isn't even possible.

LLM and GenAI are similar to parrots, though on a grand scale. "Polly wants to drink Niagara Falls." They're not going to end up as AGI based solely on hardware, no matter how much hardware is added.

I asked an LLM about this. The LLM answered as follows: "Critics argue that LLMs are pattern matchers and interpolators of their training data, lacking comprehension or the ability to generalize robustly outside their training. However, a significant portion of the AI research community believes that LLM and GenAI, with architectural changes, could be stepping stones towards AGI."

I'll accept that last part. Someday, software that started out as LLM and GenAI might turn into something very different. But LLMs and GenAIs are in use now without, in most cases, a solid business case or even the ability to operate reliably, let alone support for the types of rapturous claims that are being made.

In short, incredible amounts of money and/or natural resources are being poured into a technology that doesn't work as advertised on the corporate side and that has no large-scale business model at all yet on the consumer side.

Hallucinate Is Not Great

Both LLMs and GenAIs hallucinate. This means they make sh*t up at random. This leads not only to minor errors but to extra arms and legs in GenAI illustrations, or to cases where LLMs instruct children to kill themselves.

Fast-food chains are using LLMs to take orders. The LLMs do creative things with orders. Customers are displeased. Law firms are using LLMs to replace paralegals. The LLMs are inventing random case citations. That has gone poorly for the law firms in court.


I asked an LLM about this. The LLM defended itself as follows: "While the output might appear random or nonsensical to a human, it's not truly random in a statistical sense." That isn't much of a defense.
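That quoted defense can be made concrete. An LLM picks each next token by sampling from a probability distribution, so a fluent but false answer can simply be the most probable one. Here is a toy sketch; the vocabulary and probabilities are invented for illustration and not taken from any real model.

```python
import random

random.seed(42)

# Hypothetical next-token distribution after a prompt like
# "The capital of Australia is". These numbers are made up:
# the idea is that a model has seen "Sydney" near "Australia"
# more often, so the fluent-but-wrong token gets the highest
# probability. (The correct answer is Canberra.)
next_token_probs = {"Sydney": 0.55, "Canberra": 0.30, "Melbourne": 0.15}

def sample(probs):
    # Standard inverse-CDF sampling over a discrete distribution.
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding

counts = {t: 0 for t in next_token_probs}
for _ in range(1000):
    counts[sample(next_token_probs)] += 1
print(counts)
```

Every draw is lawful sampling from the distribution, not noise. That's exactly the problem: the wrong answer comes out confidently and repeatedly, and nothing in the mechanism distinguishes it from a right one.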


The hallucination problem is fundamental to LLM and GenAI. Fine-tuning of different types can be done. Filters can be added to output. Those are bandages. The bottom line is that the hallucination problem can't be fixed. There is no solution.

The LLM quoted above agrees: "This is a widely held view among researchers and developers. Hallucination is not merely a bug but an inherent characteristic stemming from the way these models are designed and trained. This unreliability directly impacts the ability to build solid business cases, as the cost of human oversight and correction can negate potential savings or benefits."

AI Thefty is Hefty

To create the major LLMs and GenAIs, massive amounts of copyrighted data are copied into the data soup that I've mentioned.

Authors and artists consider this to be theft. AI businesses respond that a legal defense known as transformative use applies. In short, transformative use says that you may be able to use copyrighted works without permission if you change the works enough.

That part of the law serves a purpose. For example, it's potentially important in protecting parodies. However, there is more to the issue.

In the United States, the Supreme Court has ruled recently, in Andy Warhol Foundation v. Goldsmith, 598 U.S. 508 (2023), that the purpose and character of a copy is a factor in its legality. Note: For more information on that case, visit:

https://en.wikipedia.org/wiki/Andy_Warhol_Foundation_for_the_Visual_Arts,_Inc._v._Goldsmith

In other courts and/or cases, issues such as commercial use, wholesale copying, market impact, and reproduction of pieces of originals without transformation have all been found to be factors in the legality of copied works.


It isn't usually possible to delete just a single contested work from the "mind" of an LLM or GenAI. So it seems quite possible that, if enough parties sue enough AI businesses for copyright infringement, AI businesses as a whole might need to reset some of the AI models that they've invested millions of dollars in and start over.

Other Topics to Add

Ponzi Scheme: Insert the Bloomberg chart that shows circular funding loops. Explain the issue.

Data centers: Costs, funding, NIMBY, natural resources, impact on consumer electricity rates.

Consumer sentiment: Major companies have tried to shove AI down the collective throats of millions of users. Tilly Whoever. "AI is Fake" meme. Total dissolution of privacy. Potential AI toilet that was shown at CES 2026. Good Lord. Reactions to grotesque AI ads. Reactions to AI in browser.

How thrilled are people supposed to be about software whose primary intent is to take away their jobs?

Much of what a consumer might need to do with AI could be done offline without paying anybody a penny. What is the response to that?
