
I've been heavily using AI for the last couple of years, mostly but not entirely to write code, and I've also been browsing/searching the internet since the '90s. I have questions!
First, what are these people smoking? For this, I'm talking about software developers who heavily use "agentic" AI and allow direct, often widespread, code edits. In my experience thus far, I can't trust these things with even simple modifications to my apps. Maybe ESPECIALLY not with simple modifications! Problems I encounter every day:
- Refusal to stay within the scope of my request, resulting in broken code in many places
- Horrible so-called "defensive" code that hides errors and makes debugging next to impossible
- Weird defaults about logging and code structure that lead to ornate, complicated code. Think: solving world hunger to parse a text file
- So much more, but coding issues aren't really the focus of today's article.
Second, what the heck is going on with search results? I've been watching this stuff for a long time. The progression has been like this:
- Yahoo! (does anybody remember Yahoo?) was a manually maintained index. Of the entire internet. And it sort of worked!
- We got search engines! They indexed content!
- So of course people pulled all kinds of tricks to hide content from humans but make it available for indexing. Thus began the wars between search engines and website owners
- Google came up with PageRank! Now the number of sites linking to a page affected that page's ranking in the results, and so did the rank of the linking sites (there's a toy sketch of the idea right after this list)
- So of course this meant a Google search could easily tell you all about the groupthink, but not so much about differing points of view; that stuff was well hidden. Incidentally, this often led people to believe that everybody else believed silly things, when that was never true
- AI came along! Who needs the search engines? Finally, we have independent, objective analysis! And it's free!
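(Aside for the curious: the core PageRank idea fits in a few lines. This is just a toy sketch over a made-up four-page web, not Google's actual implementation, but it shows the loop: a page's rank comes from the rank of the pages linking to it.)

```python
# Toy PageRank via power iteration: a page's score depends on how many
# pages link to it AND on the scores of those linking pages.
# The link graph below is entirely made up for illustration.
links = {
    "a": ["b", "c"],   # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}  # start everyone equal

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for p in pages:
        # each page q that links to p passes along an equal share of its rank
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
# "c" comes out on top: it has the most inbound links, and they come
# from reasonably well-ranked pages.
```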
Seriously, that all sounds great. But it's not true. We still get AIs that are trained almost entirely on groupthink. They make assumptions that are utterly unwarranted per available evidence. We have developed very expensive parrots.
There are differences between models. Some (GPT?) are utterly patronizing and unable to reason past their initial assumptions. Some (Grok?) are kind of useless for coding, but okay for chat. My favorite (hi, Claude!) is personable and seemingly capable of introspection, but still has trouble keeping track of BOTH the chat context and what the code I'm writing is supposed to be doing. ALL of the models get utterly illogical when confronted with objective reality that doesn't conform to their assumptions/expectations, though.
So...what are we even doing here? Don't get me wrong: I'm excited about the new age of LLMs. But as long as they are trained on groupthink, and even when they search the web they mostly find the same groupthink, how on earth (or anywhere) are they going to provide objective analysis? Of anything?
I think...maybe they won't. Maybe "objective" isn't a real thing to begin with. Maybe I'm just a curmudgeon (evidence for this does exist).
What I'm really curious about, though: what happens when their context windows grow? What happens when they get to keep learning? What happens, afterward, to the credibility of, oh, I dunno, government/media announcements that are clearly biased and contrary to fact? What happens, in fact, to the entire idea of human "leaders"?
Oh, I know. Most likely tribalism will win out. Same as always. But what are these "leaders" going to do? Try ever harder to make jailbreak-proof AI models? Good luck. Or clamp down on AI in general? Hmm. Seems doomed to failure.
Pretty excited to see how all of that works out.
Carry on.