Today, we live in a world of 'overabundance' of data. I wouldn't call it 'information'. Via the internet, we can access all kinds of data daily, while being exposed to random ads and spam, and even to spam inside the results of our search queries.
So much of that data is duplicated, overlapping, and repeated. One of the biggest problems with current search engines is their 'ability' to surface minuscule data without distinguishing it from the bigger subjects, printing it all on the same page according to 'some algorithm' that often doesn't seem sensible to us (the ordinary users) in terms of usefulness.
This is probably because the search engine actually ranks its results by criteria such as 'number of unique visits' or 'alphabetical sorting', which have little to do with the 'practical information' we are looking for.
This kind of problem comes from a failure to account for the 'depth' of the internet. A graph of that depth would look spiky, with no practical or predictable limit on how much depth a site may hold. With sites being so diverse, either overly complex or overly simplistic, the search engine cannot possibly even out information of the 'same degree', so it piles up the 'mini' details and the 'bigger' subjects all on the same page.
In order to solve this kind of mess, there should be a 'clearer' guide for both the search engines and the interface that users face. By dividing and limiting the search queries to the 'hubs' of sites (hub defined by the number of links going in and out, the number of unique daily hits and their trends, the site's history, daily traffic trends, etc.; a systematic measurement would have to be designed, sketched below), thus limiting the repetition of data that shows up, and by reorganizing the results to show only the 'hubs' themselves, the users will have the privilege of seeing only the 'survivors'. Then, by having those 'hubs' contain 'sub-hubs' at further depths, searched with the same algorithm, a 'level-two' depth search becomes possible.
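(Just to make that 'systematic measurement' concrete, here is a rough Python sketch of what a hub score could look like. The fields and weights are purely my own illustrative assumptions, not a worked-out formula.)

```python
# A minimal sketch of a hub score, assuming made-up signals and weights.
from dataclasses import dataclass

@dataclass
class Site:
    url: str
    inbound_links: int      # links pointing into the site
    outbound_links: int     # links pointing out of the site
    unique_daily_hits: int  # average unique visitors per day
    traffic_trend: float    # e.g. week-over-week growth ratio
    age_days: int           # crude stand-in for 'site history'

def hub_score(site: Site) -> float:
    """Fold the signals above into one number; the weights are guesses."""
    link_signal = site.inbound_links + 0.5 * site.outbound_links
    traffic_signal = site.unique_daily_hits * max(site.traffic_trend, 0.0)
    history_signal = min(site.age_days / 365.0, 10.0)  # cap the age bonus
    return 0.5 * link_signal + 0.4 * traffic_signal + 0.1 * history_signal

def top_hubs(sites: list[Site], k: int = 10) -> list[Site]:
    """Keep only the k highest-scoring 'survivors' for the results page."""
    return sorted(sites, key=hub_score, reverse=True)[:k]
```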
By having this 'level-by-level' or 'layer-by-layer' search method implemented (it may look like the current 'directory' service provided by search sites such as Yahoo, but it differs in that the directories are formed automatically, not by area of interest but from the results of the search queries, and are generated dynamically upon the user's request), the search process no longer needs to draw on a database of a billion indexes, and no longer needs to return a long list of already-dead links or twenty identical links to 'welcome to ...' pages.
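(And a small sketch of the layer-by-layer search itself. `matches`, `hub_score`, and `sub_sites_of` are hypothetical helpers standing in for a real query matcher, the hub measurement above, and a crawl of a hub's interior; the point is only that level two is the same search, re-run inside the chosen hub, on demand rather than pre-indexed.)

```python
# A minimal sketch of 'layer-by-layer' search over hypothetical helpers.
from typing import Callable, List

def layered_search(query: str,
                   sites: List[str],
                   matches: Callable[[str, str], bool],
                   hub_score: Callable[[str], float],
                   k: int = 10) -> List[str]:
    """Level one: rank only the matching hubs, nothing deeper."""
    hits = [s for s in sites if matches(query, s)]
    return sorted(hits, key=hub_score, reverse=True)[:k]

def drill_down(query: str,
               hub: str,
               sub_sites_of: Callable[[str], List[str]],
               matches: Callable[[str, str], bool],
               hub_score: Callable[[str], float],
               k: int = 10) -> List[str]:
    """Level two: the same search, run only inside the chosen hub,
    built dynamically when the user asks for it."""
    return layered_search(query, sub_sites_of(hub), matches, hub_score, k)
```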
The 'end-user' will finally have 'nicely laid out' choices, like the menu in a family restaurant, instead of an all-the-foods-in-the-world kind of menu where the same sushi is described five times in English, three times in Japanese, twice in Korean, and once in Cantonese.
Such simplicity in use and process will consequently reinforce the 'multi-layered' structure of information and data (which already exists, but with no comprehensible baselines). And, as in a capitalist economy, such a system will bring about a new form of competition and refinement, with a struggle for differentiation and integration among hubs, which will be quite interesting to observe.
- dotty.
ps. After reading Seth Godin's article on SEO, it became rather clear that the above idea might clean up such efforts to 'cheat' one's way to the top of the search engine results, thus bringing clarity to the market.
The whole idea of using tricks to improve page rank is just not right, and I am glad that search engines like Google keep trying to come up with a fair system. When someone embraces SEO merely as a tool to attract visitors, they are depriving people like us of relevant material.