
[–]Canbot 1 insightful - 1 fun (3 children)

Building a search engine is not as simple as pinging every IP and cataloging the results. The software and algorithms that make it work need a developer, and that is exactly what AI does best. Local search would require a massive database.

An open source LLM can already ping random IPs, analyze each page in a very sophisticated way, and act on the results however you like. Anyone can set it up with minimal understanding of computers or software and run it on easily affordable hardware. That has never been remotely possible before.
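To make the idea concrete, here is a minimal sketch of that "LLM crawler": fetch a page, then hand the text to a locally hosted model for analysis. The Ollama endpoint and the "llama3" model name are my assumptions for the example, not something the comment specifies.

```python
# Sketch: download a page and ask a local LLM to analyze it.
# Assumes an Ollama server on localhost:11434 with a model named
# "llama3" already pulled -- both are illustrative assumptions.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local server

def build_payload(page_text: str, model: str = "llama3") -> bytes:
    """JSON body asking the local model to analyze one page."""
    return json.dumps({
        "model": model,
        "prompt": "Summarize this page in one sentence:\n" + page_text[:4000],
        "stream": False,
    }).encode()

def analyze_page(url: str) -> str:
    """Download a page and return the local LLM's one-line summary."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        page_text = resp.read().decode("utf-8", errors="replace")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(page_text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The point is how little glue code the setup takes, not this particular stack; any local inference server with an HTTP API would do.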

The vast majority of people are indeed too lazy and stupid to ever bother doing it when they don't even understand that Google is mind fucking them by controlling everything they get to see. But the tools will be built by the exceptions, then get talked about on tech blogs and spread by influencers. It will be so easy to use even the morons will pick it up if only to feel like they are part of the smart crowd.

[–]binaryblob 1 insightful - 1 fun (2 children)

I am looking for an answer based on the economics of the operation. You are not making any sense. An LLM would still need an index for high performance, because otherwise you would have a search engine that is always behind.
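The index point is easy to illustrate: a search engine precomputes a mapping from terms to documents so a query is a dictionary lookup, not a re-crawl of the web. A toy inverted index (document names and contents invented for the example):

```python
# Toy inverted index: term -> set of documents containing it.
# Built once at crawl time; queries are then cheap lookups.
from collections import defaultdict

docs = {
    "page1": "open source search engine",
    "page2": "local llm search agent",
    "page3": "distributed index node",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(term: str) -> set:
    """Constant-time lookup against the prebuilt index."""
    return index.get(term, set())

print(sorted(search("search")))  # → ['page1', 'page2']
```

Without the prebuilt `index`, answering the same query means re-reading every document on every search, which is exactly the cost problem at web scale.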

If I stood on the street offering a free computer with a Google-like interface built in (but not depending on Google), people still would not want it, because "they can just use Google".

It was already possible at least a decade ago to run a distributed search engine node locally and even share resources for free. I know, because I did. I would say the key enabling technology for large, fast indices is large, fast SSDs.

The thing is that even if it is relatively cheap, hardware utilization would be so low that it would be an economic waste: for the 99.5% of the time you aren't using your search engine, the hardware sits idle, while Google just serves another customer.

I think you are severely overestimating human intelligence; unless it's built into Windows, it's not going to happen at mass scale (and I say that as a Linux user). Sure, perhaps a million people will do it, but there are billions of people.

If there is a killer app, then perhaps it will happen.

[–]Canbot 1 insightful - 1 fun (1 child)

I think the big disconnect is that you think an AI search engine will work like a traditional search engine. Forget that completely. Imagine you have an assistant and you ask them to research something for you. They are not going to reach into their briefcase and pull out the research; they are going to go do the research.

An AI search engine will be an agent optimized for searching the web: overcoming censorship, evading subversion and manipulation, and curating the results. Instead of getting 50 garbage results from Google that you have to parse yourself, and never getting what you really want because it is wrongthink, you get 5 results of exactly what you want, plus a professional-quality summary of it all.
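The agent described above reduces to a three-stage pipeline: gather candidates, curate, summarize. A hedged sketch, where every function body is a stub and the source list, the cutoff of 5, and the scoring are illustrative assumptions standing in for LLM calls:

```python
# Stub pipeline for the research-agent idea: gather -> curate -> summarize.
# Real implementations would replace each stub with crawling and LLM calls.

def gather_candidates(query: str) -> list[str]:
    """Pull raw results from multiple independent sources (own crawl,
    peers, public indexes) so no single engine controls the ranking."""
    return [f"result about {query} #{i}" for i in range(50)]

def curate(results: list[str], keep: int = 5) -> list[str]:
    """Stand-in for LLM relevance scoring: keep only the top few."""
    return results[:keep]

def summarize(results: list[str]) -> str:
    """Stand-in for an LLM-written digest of the kept results."""
    return f"Summary of {len(results)} curated results."

def research(query: str) -> str:
    """The whole agent: 50 raw candidates in, 5 curated + a digest out."""
    return summarize(curate(gather_candidates(query)))
```

The shape is the argument: the user sees only the last stage, while the gathering and curating happen on their behalf.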

[–]binaryblob 1 insightful - 1 fun (0 children)

Google has used neural networks in its ranking function for a long time. You can't go and search the entire Internet for every single query, because it would cost $10000/query (or whatever big number it is). So you still need an index somewhere. An agent would still take the action "find in some index", and that index needs to be created and paid for by someone. In your world, either everyone has their own index or there is some shared index (which could work, but is very much not popular). There are all kinds of reasons why this is not going to work, but I asked you how it is going to work and you have not said anything specific. It's almost as if you are an LLM.