Revision as of 02:06, 10 January 2026
AI is Fake For Goodness Sake
This will be an article about the LLM and GenAI types of "AI". These are the takeaways up front:
AI is here to stay. It'll play a role in your life in the future. However, as of 2026, much of the AI information that's being promoted or pushed by businesses is nonsense.
The heart of the current AI mania is the idea that business profits are going to skyrocket because businesses are going to be able to replace people with software. This is largely about two types of software: LLM and GenAI. Neither is actually AI or "artificial intelligence". Both have serious limitations.
All Hail AI Scale
The AI mania was born in part from the belief that the power of AI software was going to "scale". "Scale" meant that adding hardware would improve AI performance proportionately, and that this would continue far enough to produce human-equivalent AIs, or "AGIs". Note: AGI is short for Artificial General Intelligence.
In fact, though scaling worked early on, the returns have declined over time. There is no reason to believe that scaling from current levels will produce massive improvements in AI. Further, AGI based solely on the two current focus areas, LLM and GenAI, isn't even *possible*. LLM and GenAI are quite similar to parrots, though on a grand scale. They're not going to end up as AGI no matter how much hardware is added.
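The diminishing returns can be sketched with a toy power-law curve. This is only an illustration: the form loss = a * compute^-b echoes the shape reported in scaling-law papers, but the constants a and b below are invented, not fitted to any real model.

```python
# Toy illustration of diminishing returns under a power-law scaling curve.
# The constants a and b are made up for illustration; they are NOT
# measured values from any real model family.
def loss(compute, a=10.0, b=0.05):
    """Hypothetical power-law: loss falls as compute ** -b."""
    return a * compute ** -b

prev = None
for exp in range(18, 27, 2):  # compute from 1e18 to 1e26 FLOPs (hypothetical)
    c = 10.0 ** exp
    current = loss(c)
    if prev is not None:
        # Each 100x step in compute buys a smaller absolute improvement.
        print(f"1e{exp} FLOPs: loss {current:.3f} (improvement {prev - current:.3f})")
    prev = current
```

Each 100x jump in compute shrinks the loss by a constant *ratio*, so the absolute improvement per jump keeps getting smaller: spending ever more to get ever less.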
I asked an LLM about this. The LLM responded as follows: "Critics argue that LLMs are primarily pattern matchers and interpolators of their training data, lacking comprehension or the ability to generalize robustly outside their training. However, a significant portion of the AI research community believes that LLM and GenAI, with architectural changes, could be stepping stones towards AGI."
I'll accept that last part. Someday, software that started out as LLM and GenAI might turn into something very different. But LLMs and GenAIs are in use now without, in most cases, a solid business case or even the ability to operate reliably, let alone support for the kinds of rapturous claims that are being made.
In short, incredible amounts of money and/or natural resources are being poured into a technology that doesn't work as advertised on the corporate side and that has no business model yet on the consumer side.
Hallucinate Is Not Great
Both LLMs and GenAIs hallucinate. This means they make sh*t up at random. This leads not only to minor errors but to extra arms and legs in GenAI illustrations, or to cases where LLMs have instructed children to kill themselves.
Fast-food chains that have used LLMs to supplement cashiers have found that the LLMs do creative things with orders. Customers are not pleased. Law firms that have used LLMs to replace paralegals have found that the LLMs invent case citations. That has gone poorly for the law firms in court.
I asked an LLM about this. The LLM defended itself as follows: "While the output might appear random or nonsensical to a human, it's not truly random in a statistical sense." That isn't really much of a defense.
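The LLM's "not truly random" defense can be made concrete with a toy next-token distribution. Everything here is invented for illustration; real models sample from distributions over tens of thousands of tokens, but the failure mode is the same: any wrong continuation that carries nonzero probability will eventually be drawn.

```python
import random

# Toy sketch of why "not truly random" is cold comfort. The model samples
# from a probability distribution over next tokens, so a plausible-looking
# but false continuation gets emitted whenever it carries nonzero mass.
# The tokens and probabilities here are invented for illustration.
next_token_probs = {
    "1973": 0.70,  # the (hypothetically) correct answer
    "1974": 0.20,  # plausible, wrong
    "1962": 0.10,  # plausible, wrong
}

random.seed(0)  # fixed seed so the run is repeatable
tokens = list(next_token_probs)
weights = list(next_token_probs.values())
draws = [random.choices(tokens, weights=weights)[0] for _ in range(1000)]

wrong = sum(1 for d in draws if d != "1973")
print(f"wrong answers in 1000 samples: {wrong}")  # roughly 300, a 30% error rate
```

The output is fully determined by the seed and the weights, so it is indeed "not random in a statistical sense". The customer who got the wrong answer 30% of the time is unlikely to find that distinction comforting.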
The hallucination problem is fundamental to LLM and GenAI. Fine-tuning of various kinds can be done. Filters can be added to output. Those are bandages. The bottom line is that the hallucination problem can't be fixed. There is no solution.
The LLM quoted above agrees: "This is a widely held view among researchers and developers. Hallucination is not merely a bug but an inherent characteristic stemming from the way these models are designed and trained. This unreliability directly impacts the ability to build solid business cases, as the cost of human oversight and correction can negate potential savings or benefits."
Other Topics to Add
Ponzi Scheme: Insert the Bloomberg chart that shows circular funding loops. Explain the issue.
Data centers: Costs, funding, NIMBY, natural resources, impact on consumer electricity rates.
Intellectual property: Rebuttal to the transformative work argument.
Consumer sentiment: Majors have tried to shove AI down the collective throats of millions. Tilly Whoever. "AI is Fake" meme. Total dissolution of privacy. Potential AI toilet that was shown at CES 2026. Good Lord. Reactions to grotesque AI ads. Reactions to AI in browsers.
How thrilled are people supposed to be about software whose primary intent is to take away their jobs?
Much of what a consumer might need to do with AI could be done offline without paying anybody a penny. What is the response to that?

