by Julius Wiedemann | Feb 02, 2021
There are five ways to organise information. The man who formulated the best-known theory on the subject is probably Richard Saul Wurman, the inventor of the TED conferences, which he sold almost two decades after their inception. The theory he developed was named LATCH, standing for Location, Alphabetical, Time, Category, and Hierarchy. Everything organised that we look at is displayed in one of these ways, or in a combination of them. Otherwise, it is random, and probably not thought through. The work of cataloguing materials, publications, and millions of other things is not new. By 1550, thanks to Gutenberg’s revolution, some three million books had been printed in Western Europe, more than the entire number of manuscripts produced in the 14th century. The total number of manuscripts produced up to the 15th century is estimated at about five million. From the early 16th century to today, we went from nothing to about 2,500 different book titles per million inhabitants in the region. Here, we are talking only about books. But today, most of this material is available online, one way or another, in different databases, some of them consolidated, and a lot of it for free. It is easy to underestimate the value and applicability of all this, having taken it for granted after just a couple of decades. But it is certainly one of the biggest revolutions in the history of our species.
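The five LATCH principles can be made concrete with a small sketch: the same records, sorted five different ways. The dataset and field names below are invented purely for illustration, not taken from Wurman's work.

```python
# Illustrative sketch: organising the same records by each LATCH principle.
# The book records and field names here are made up for demonstration.

books = [
    {"title": "Atlas of Type", "city": "Berlin", "year": 1998, "genre": "design", "rating": 4},
    {"title": "Cartography 101", "city": "Amsterdam", "year": 2005, "genre": "maps", "rating": 5},
    {"title": "Binding Crafts", "city": "Berlin", "year": 1987, "genre": "craft", "rating": 3},
]

by_location = sorted(books, key=lambda b: b["city"])                   # L: Location
by_alphabet = sorted(books, key=lambda b: b["title"])                  # A: Alphabetical
by_time = sorted(books, key=lambda b: b["year"])                       # T: Time
by_category = sorted(books, key=lambda b: b["genre"])                  # C: Category
by_hierarchy = sorted(books, key=lambda b: b["rating"], reverse=True)  # H: Hierarchy, best first

print([b["title"] for b in by_time])
```

The point of the sketch is that the data never changes; only the sort key does, which is exactly Wurman's claim that any organised display is one of these five views, or a combination of them.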
What is new about this avalanche of personal and public information is that finding most of it has been democratised. Nearly everyone is now able to access nearly all the knowledge humankind has created so far. Try to imagine the scale of it if you add accounting and fiscal information, addresses, films, photos, sounds, geological data, academic papers, newspapers, magazines, medical records, space observations by research institutions, letters, emails, text messages, and so on. To make sense of it, we need to be able to connect some dots, and in order to make connections, we need to search for the right pieces of information to find patterns and coherence.
The first ideas for search engines date back to 1945, expressed in Vannevar Bush’s article “As We May Think”, published in The Atlantic Monthly. It took a few decades for Archie, arguably the first search engine, to be launched. It was created by Alan Emtage, Bill Heelan, and J. Peter Deutsch at McGill University in Canada. Then came other attempts to instrumentalise search, with Yahoo, Google, and more recently Microsoft’s Bing among the most successful and enduring ones. Google opted first for relevance, which for this purpose is another name for hierarchy. Today, Google seems to prioritise ads at the top, which means the incorporation of a category-first strategy. Monetisation comes with a price. This way, the company can keep tweaking its relevance algorithms whilst maintaining its position in advertising sales. Google currently performs about two trillion searches a year, roughly 250 searches per person.
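Google’s actual ranking is proprietary and vastly more sophisticated, but the underlying idea of relevance as a hierarchy can be sketched with a toy inverted index that scores documents by how often the query terms appear. The documents and scoring rule below are invented for illustration only.

```python
# A minimal sketch of relevance-based search: an inverted index plus a
# simple term-frequency score. Real engines use far richer signals;
# this only shows the principle of ranking results into a hierarchy.
from collections import defaultdict

docs = {
    "doc1": "search engines rank pages by relevance",
    "doc2": "relevance is one name for hierarchy in search",
    "doc3": "ads appear above organic search results",
}

# Build the inverted index: term -> {doc_id: occurrence count}
index = defaultdict(lambda: defaultdict(int))
for doc_id, text in docs.items():
    for term in text.split():
        index[term][doc_id] += 1

def search(query):
    """Score each matching document by summed term counts, highest first."""
    scores = defaultdict(int)
    for term in query.split():
        for doc_id, count in index[term].items():
            scores[doc_id] += count
    return sorted(scores, key=scores.get, reverse=True)

print(search("search relevance"))
```

Placing paid results above this ranked list, as the article describes, amounts to prepending a separate category of results before the relevance hierarchy begins.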
We do not have search engines in that sense anymore. We have “become search machines” and we “think search”. For this revolution to happen, we needed just a few years of the World Wide Web and its colossal amount of information, which makes us feel powerless to retain so much of it. With ‘distributed cognition’, our brains have outsourced the work of finding information to artificial, or rather externalised, search mechanisms. These range from documents in the cloud to words in documents on a hard disk, from places we want to go to restaurant reservations, from names in a WhatsApp feed to old emails, and more recently, automated systems that anticipate needs by searching inventories, unfinished tasks, and more. This scope of searchability sounds like an exaggeration, but just by stumbling over things we are reminded of how little we know, and we have been obliged to be very humble about it.
The world of Siri and Alexa is not as new as search itself. It is just an added feature that performs searches in a more amicable way. The next, more amicable step might be the incorporation of semantic search. It will come for more complex issues, where the number of variables is larger and more nuanced. The learning capabilities of these mechanisms are key to their development, meaning artificial intelligence will need to be employed for better performance. The algorithms of the future will predict far better and far deeper. Whether we want that remains to be seen, because anonymity is becoming one of the most valuable luxuries.
We should be arriving at a point in time when we could be discussing only ideas, because facts are found in abundance. Reality, however, is proving more challenging, with too many sources acquiring similar status and credibility while offering completely different outcomes. But we have definitely changed the dynamics of many debates. Search, as an integral part of what we do and how we navigate the world, can create a positive impact on how we understand our limitations. No search is perfect, and confirmation bias is going to be embedded one way or another. But if we know it exists, we can even add serendipity when trying to find new information.
Read more from the series Digital Legacies, where our columnist Julius Wiedemann investigates the many aspects of digital life.