A person who possessed all knowledge would have to know what a terabyte is, how to wire a plug, how snails reproduce and, for example, how to write Lenin's name in Chinese. That is a lot to know.
We would have to start by quantifying how much human knowledge there is. Some say that Aristotle was the last living person to hold all of it, while others maintain that in the two million years of human evolution no one has ever been able to know everything, not even at the dawn of our species.
Chris Stringer, a paleoanthropologist at the Natural History Museum in London, puts it this way: “Given the diverse environments in which humans lived even before they left Africa, I doubt that one human being could have maintained all the information necessary to survive across the entire human range.”
As our body of knowledge grew, it eventually outgrew a brain’s capacity to hold it.
Since ancient times there have been attempts to compile all human knowledge in great universal libraries such as the Library of Alexandria, and the encyclopedias of the 17th and 18th centuries were the first real attempts to organize and collect it all.
The 15th edition of the Encyclopaedia Britannica (the Britannica) made it its goal to systematize all human knowledge
The English-language Britannica is the oldest encyclopedia still in production (although it is no longer printed on paper), and its size has remained roughly constant over the last seventy years, at about forty million words on half a million topics. In 1974, the 15th edition adopted a third goal: to systematize all human knowledge.
There was a time when reading the Britannica in its entirety was a challenge for the world’s cultural elite, and it was estimated that doing so took between three and twenty-two years. When Fat’h Ali became Shah of Persia in 1797, he was given a complete set of the third edition of the Britannica, which he read in three years.
The writer George Bernard Shaw claimed to have read the entire ninth edition, except for the scientific articles. Explorers made use of it too: Philip Beaver, a Royal Navy officer in the 18th century, read it during a sea expedition, and the American admiral Richard Evelyn Byrd took the Britannica as reading material during his five months at the South Pole in 1934.
Reading the Britannica carried enormous prestige, because it summed up all human knowledge. No one today would consider reading all the information on the Internet, however many years they spent at the Pole.
KNOWLEDGE GROWS AT AN UNATTAINABLE RATE
Buckminster Fuller, the eccentric American designer and futurist, created the “Knowledge Doubling Curve”, which is useful for this game.
Fuller said that if we take as a baseline the knowledge accumulated up to the year 1 AD, it took humanity 1,500 years to double it (to go from 1 to 2); the next doubling (from 2 to 4) took 250 years; and so on. By 1900 humanity had produced 8 times the knowledge of the year 1; by 1945 knowledge was doubling every 25 years; by 1975, every 12 years; and it is currently estimated to double every 2 years.
This suggests that although we know a little more with each passing day, in relative terms we know less and less of the total knowledge that exists.
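Fuller’s curve can be sketched numerically. Here is a minimal illustration in Python, using only the intervals quoted above (the figures are the article’s, not measured data):

```python
# Fuller's "Knowledge Doubling Curve": after n doublings, humanity holds
# 2**n times the knowledge of the year 1 AD. Milestones per the text above.
milestones = [("year 1", 0), ("1500", 1), ("1750", 2), ("1900", 3)]
for label, n in milestones:
    print(f"{label}: {2 ** n}x the knowledge of year 1")
```

Each milestone is one more doubling, which is why the curve reaches only 8x by 1900 yet explodes once the doubling period shrinks to a couple of years.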
According to IBM, the construction of the “Internet of things” will lead to a doubling of knowledge every 12 hours
Today things are not so simple. According to experts, different types of knowledge have different growth rates. For example, nanotechnology knowledge doubles every two years and clinical knowledge doubles every 18 months. But, on average, human knowledge doubles every 13 months. According to IBM, building the “Internet of Things” will lead to a doubling of knowledge every 12 hours.
In 2003, Peter Lyman and Hal R. Varian of UC Berkeley conducted a fascinating study, How Much Information, to examine how recorded information grows. They estimated that it grew by 30% each year from 1999 to 2002, and that between one and two exabytes of new information (an exabyte is a billion gigabytes, or 10^18 bytes) were produced each year, 60% of it digital.
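Thirty percent annual growth compounds quickly. A quick Python check of what that rate implies over the study’s three-year window (illustrative only):

```python
# 30% annual growth from 1999 to 2002 means three compounding steps,
# so yearly information production more than doubled over the period.
growth = 1.30
factor = growth ** 3
print(f"Information produced per year grew roughly {factor:.2f}x from 1999 to 2002")
```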
As a curiosity, bearing in mind that the study dates from 2003, Lyman and Varian found that the surface Web contained around 170 terabytes of information; by volume, seventeen times the size of the print collections of the US Library of Congress.
In 2019, there were 40 times more bytes of data on the internet than there are stars in the observable universe
Every year, the cloud software firm DOMO publishes a report on the amount of data generated every minute. The 2019 report, ‘Data Never Sleeps 7.0’, claims that that year there were 40 times more bytes of data than stars in the observable universe.
In its 2018 chart, DOMO indicated that “more than 2.5 quintillion bytes of data are created every day” and estimated that each person on Earth generates 1.7 MB of data per second. That is 7.75 billion people generating tons of data every second. The Internet is currently estimated to hold 5 million terabytes (TB) of information, of which Google has indexed approximately 200 TB; that is, only 0.004% of the information on the Internet can be found through Google.
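The 0.004% figure follows directly from those two estimates. A quick check in Python (both numbers are the article’s estimates, not measurements):

```python
internet_tb = 5_000_000   # estimated total information on the internet, in TB
indexed_tb = 200          # portion Google has indexed, per the text
pct = indexed_tb / internet_tb * 100
print(f"Google can find about {pct:.3f}% of the internet")
```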
HOW MUCH CAN THE HUMAN BRAIN HOLD?
Our brain has about 100 billion neurons, which establish at least one hundred trillion connections between them. These connections are the substrate used to store information, from how snails reproduce to Lenin’s name in Chinese. From these figures, experts in computational neuroscience calculate that the maximum information storage capacity of our thinking organ is between ten and one hundred terabytes.
The human brain would have the capacity to hold 0.002% of all knowledge
We can do the math with those 100 terabytes of memory, or, equivalently, 100,000 gigabytes of memories, experiences, knowledge and so on.
If we take the figure of 5 million terabytes of information on the internet, the human brain would have the capacity to hold 0.002% of total knowledge, assuming that everything is on the internet.
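That ratio is easy to verify: 100 TB out of 5 million TB works out to 0.002% when expressed as a percentage. A sketch using the article’s own figures:

```python
brain_tb = 100            # upper estimate of brain storage capacity, per the text
internet_tb = 5_000_000   # estimated information on the internet, in TB
pct = brain_tb / internet_tb * 100
print(f"The brain could hold about {pct:.3f}% of the internet's knowledge")
```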