The Evolution of Big Data Over the Years

In 2005, Roger Mougalas of O’Reilly Media gave the technology its name, “big data.” According to Mougalas, big data is a collection of data sets so large that they were almost impossible to manage and process with the business intelligence tools of the time. In the same year, Hadoop, a platform that could handle big data, was launched. Hadoop was created based on Google’s MapReduce.

For a more in-depth understanding of the beginnings of big data, here is a comprehensive timeline of its history:

In fact, the earliest record of using data to track and oversee business operations dates back about 7,000 years, to Mesopotamia, where accounting was first introduced to record the growth of crops and herds. The body of accounting knowledge has been expanding ever since.

The history of big data begins in 1663, when John Graunt collected and analyzed all of the available data on the causes of death in London. Graunt’s goals were to understand the bubonic plague and to build an early warning system for it while the epidemic was still active.

His book, Natural and Political Observations Made upon the Bills of Mortality, compiles the results of the first recorded statistical analysis of data and offers a great deal of insight into the causes of death in seventeenth-century London. For these contributions, John Graunt is often called the “father of statistics.”

Era of 1887

In 1887, Herman Hollerith devised a tabulating machine that could read holes punched in paper cards in order to organize census data.

The Beginnings of Big Data in the Year 1937

The first significant data collection effort in the United States began in 1937, under the administration of Franklin D. Roosevelt. Under the Social Security Act, the government was required to keep track of contributions made by 26 million U.S. citizens and more than 3 million employers. IBM was given the responsibility of completing this massive project by developing a punched-card machine.

Era of 1943

1943 saw the debut of the first data processing machine, built by the British to decipher messages sent by Nazi forces during World War II. The machine, codenamed “Colossus,” was designed to search for patterns in intercepted messages. It could scan 5,000 characters per second, reducing processing work that had previously taken weeks to a matter of hours.

Era of 1952

The National Security Agency (NSA) of the United States was established in 1952, and over the following decade it engaged the services of 12,000 cryptographers. This was because the NSA had to deal with massive amounts of data during the Cold War.

Era of 1965

In 1965, the United States federal government decided to build the world’s first data center to store more than 742 million tax returns and 175 million sets of fingerprints, transferring all of these records onto magnetic computer tape kept in a single location. Although the project was soon scrapped, it is generally regarded as the beginning of the era of electronic data storage.

Era of 1989

1989 was a pivotal year in the history of big data: it was the year the World Wide Web was invented by a British computer scientist named Tim Berners-Lee. Using a technique known as hypertext, he intended to streamline the process of exchanging information.

Era of 1995

By 1995, the world was producing enormous amounts of data as a result of the growing number of devices connected to the internet, from personal computers to early connected devices. In the same year, a supercomputer was unveiled that could perform more work in one second than a single person operating a calculator could accomplish in 30,000 years.

Era of 2005

The term “Big Data” was first used in 2005 by Roger Mougalas of O’Reilly Media. Big data refers to huge data collections that are practically impossible to manage and process using typical business intelligence tools.

2005 was also the year Yahoo! developed Hadoop, built on top of Google’s MapReduce. Its original goal was to index the entire World Wide Web; today, the open-source Hadoop framework is used by many businesses for large-scale data processing.
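The MapReduce model that Hadoop popularized can be illustrated with the canonical word-count example. The sketch below is plain Python, not Hadoop’s actual API; it simply mimics the three phases (map, shuffle, reduce) that the framework distributes across a cluster of machines:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by their key (the word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data is big", "data is everywhere"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```

In a real Hadoop job the map and reduce functions run in parallel on different nodes, and the shuffle moves data between them over the network; that division is what lets the same simple pattern scale to web-sized data sets.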

Era of 2009

The creation of the world’s largest biometric database was a watershed moment in the history of big data. In 2009, the government of India decided to collect iris scans, fingerprints, and photographs of all 1.2 billion of its citizens. All of this information is kept in the most extensive biometric database in the world.

2010 Era Developments

At the Techonomy conference held at Lake Tahoe, California, in 2010, Google CEO Eric Schmidt stated: “There were 5 exabytes of information created by the entire world between the birth of civilization and 2003.”

Developments in 2011

A 2011 McKinsey report, “Big data: The next frontier for innovation, competition, and productivity,” predicted that by 2018 the United States would face a shortage of between 140,000 and 190,000 data scientists, as well as 1.5 million data managers.

That same year, Facebook launched the Open Compute Project to share specifications for more energy-efficient data centers.

Big data platform development in 2013

Docker was initially released as open-source software for operating-system-level containers.

Data center development for Big Data in 2015

In 2015, Google and Microsoft led a massive wave of data center construction.

2017 was a turning point in China’s growth of cloud storage.

Alibaba built data centers in China, joined by Huawei and Tencent.

2018 Developments

The data center operator leading the world in market share employed a network capable of transferring data at up to 400 gigabits per second.

The year 2020 marked the beginning of edge computing.

The concept of edge computing was only beginning to take shape, and it will fundamentally alter the role of the “cloud” in the most important parts of the economy.

Era of 2021

Transfer speeds became lightning fast: data centers adopted 1000G networks capable of transferring data at up to 1,000 gigabits per second.

Edge Computing in 2025

The number of data centers is expected to rise, and they will be moved closer to the devices they serve in order to meet the demands of edge computing.