Petroleum of the 21st Century: How to Preserve Data?

The security of data storage is a true stumbling block of today’s digital century. Fortunately, Casper API and a number of other startups propose solutions.

At the beginning of the 20th century, mineral resources began to play a crucial role in the history of mankind: when the properties of petroleum were discovered, the course of social and economic development of the whole world changed forever. In the 21st century, data plays the same role the ‘black gold’ played in the early 20th century.

Data, not fuel, is the foundation of a huge number of digital products and services, from the information collected by search engines that feeds the whole industry of digital marketing to the data from weather sensors that makes it possible to continually update tomorrow’s weather forecast. This is why a huge share of the new economy – the digital economy – is structured around processing data and solving the problem of analyzing and preserving it.

Dangers Data Faces in the 21st Century

Because data is of commercial or reputational interest, it often becomes the target of theft through cyberattacks or gets compromised in other ways. In a number of well-known cases, attackers who had stolen personal data were able to access victims’ bank accounts, as well as private information and documents containing commercial secrets.

It is commonly believed that data safety is up to the individual user or company to solve – and that the issue boils down to creating a secure password. Indeed, the digital hygiene practices of our day suggest that a password should be created in all seriousness: built on personal associations and then changed every month. But is a weak password really the only thing that enables attackers to seize other people’s data?

Quite often, unauthorized use of somebody else’s data becomes possible due to a number of vulnerabilities that plague the places where data is stored. Such ‘weak links’ include server failures, flaws in the code of data security programmes, and the very fact that all the data is stored in one place. How can we store data in the 21st century without worrying about its safety?

From Server Cables to Decentralized Communities

Currently, there are three main approaches to storing large volumes of digital data: local servers, cloud servers and, finally, decentralized repositories.

Traditionally, information has been preserved by end users on local facilities: personal computers, external hard drives and flash drives in the case of individual users; corporate servers and data processing centres in the case of large companies. This approach began to grow obsolete as data increased in quantity: even ordinary users have to buy additional ‘memory’ for their computers as the number of vacation photos grows every year.

It turned out to be cheaper and more convenient to rent storage capacity from companies specializing in data storage: renting space for data costs less than buying the physical capacity needed to store the same volume, and the stored data can be accessed from any device anywhere in the world using only a login and a password. This type of data storage became known as cloud storage. The top five providers of cloud storage in 2017 included services run by IT giants: Apple’s iCloud, Microsoft’s OneDrive, Google Drive, and Amazon Drive.

However, this method of storage turned out to be far from the safest. 1.9 billion records were compromised during the first half of 2017 alone – more than during the whole of 2016 (1.37 billion). That means approximately 10.4 million records leak onto the Internet daily. To name just one resounding example of data theft from a cloud storage: in 2014, intimate photos of many Hollywood actresses, including Jennifer Lawrence and Kirsten Dunst, leaked from iCloud onto the World Wide Web.

Cloud storage is unsafe due to a combination of factors. Firstly, data is stored on the providers’ servers and can therefore be compromised or simply lost through internal failures whose consequences users of the service cannot hope to remedy, because they have to rely entirely on the provider company.

Secondly, these apprehensions are bolstered by the fact that the servers used by cloud repositories, even if geographically distributed, are still controlled by a single decision-making centre that can unexpectedly change its practices or hand user data over to government agencies on request. Finally, the very existence of a single centre that can be hit in order to wreck a great number of servers attracts malefactors intent on compromising the data of particular users.

Services of a new type, known as decentralized data repositories, plan to strengthen the security of data storage by abandoning the centralized pattern. Their solutions rely on a network of independent providers of computing capacity linked together only by the platform, which monitors the location of its customers’ data but has no physical access either to this data or to the servers that host it. The architecture of such services is in most cases based on the blockchain – a technology for recording and storing data in a virtual ledger that is simultaneously stored on the computers of all network participants.
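The split of responsibilities described above – a coordinating platform that records only where data lives, while independent nodes alone hold the bytes – can be sketched in a few lines of Python. The class and method names here are illustrative assumptions, not Casper API’s actual interfaces; content addressing by SHA-256 hash stands in for whatever identifier scheme a real platform uses:

```python
import hashlib

class StorageNode:
    """An independent capacity provider: it alone holds the raw bytes."""
    def __init__(self, node_id: str):
        self.node_id = node_id
        self._blobs = {}

    def put(self, blob: bytes) -> str:
        # Content addressing: the data's own hash is its identifier.
        digest = hashlib.sha256(blob).hexdigest()
        self._blobs[digest] = blob
        return digest

    def get(self, digest: str) -> bytes:
        return self._blobs[digest]

class Ledger:
    """The coordinating platform: it records only *where* each content
    hash lives and never touches the data itself."""
    def __init__(self):
        self._index = {}  # content hash -> node id

    def record(self, digest: str, node_id: str) -> None:
        self._index[digest] = node_id

    def locate(self, digest: str) -> str:
        return self._index[digest]

# A client stores data on a node and registers its location in the ledger.
node = StorageNode("node-1")
ledger = Ledger()
digest = node.put(b"vacation photo bytes")
ledger.record(digest, node.node_id)

# Later, any client can find the data by hash and verify its integrity:
# a node that tampered with the bytes would no longer match the hash.
assert ledger.locate(digest) == "node-1"
assert hashlib.sha256(node.get(digest)).hexdigest() == digest
```

A side effect of this design is that a misbehaving node cannot silently corrupt data: since the identifier is the hash of the content, any tampering is detectable by the client on retrieval.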

Such services are gradually starting to make themselves known. In 2017, the teams behind the Storj, Filecoin and Sia projects announced plans to create decentralized data repositories. The market showed its interest: during its initial coin offering, the Filecoin project managed to raise more than $257 million.

What Has Decentralization Brought to the World of Data Security?

As new projects emerge, solutions for safe data storage under the decentralized pattern keep improving. For instance, Casper API, a project with Russian roots, offers an interesting solution. Like other decentralized repositories, Casper API forms a distributed network of suppliers of unused capacity, which can include both individual users with powerful hardware and data centres that would like to use their computing capacity more efficiently. The two main requirements for those who work with Casper API are to be online at least 95% of the time and to have a connection speed of at least 5 Mbps.

To ensure safe storage, Casper API first encrypts the information received for storage, then copies it and stores it in four replicas. In addition, the data is fragmented, so no single node of the network has access to the entirety of the information: even a node that managed to steal and decipher its fragment would not be able to make full use of it.
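The pipeline described above – encrypt first, then fragment, then replicate each fragment across distinct nodes – can be sketched as follows. This is a simplified illustration under stated assumptions, not Casper API’s actual code: the XOR keystream stands in for a real authenticated cipher such as AES-GCM, and the placement scheme is a naive round-robin chosen only to demonstrate that no single node ends up holding every fragment:

```python
import hashlib

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher (XOR with a SHA-256-derived keystream), used
    here only to keep the example self-contained; a production system
    would use an authenticated cipher such as AES-GCM. XOR is its own
    inverse, so the same function also decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def fragment(data: bytes, parts: int) -> list:
    """Split the ciphertext into `parts` roughly equal fragments."""
    size = -(-len(data) // parts)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(parts)]

def place(fragments: int, replicas: int, nodes: list) -> dict:
    """Assign each fragment to `replicas` nodes, spreading copies so
    that (given enough nodes) no node holds the whole file."""
    placement = {node: set() for node in nodes}
    for idx in range(fragments):
        for r in range(replicas):
            node = nodes[(idx * replicas + r) % len(nodes)]
            placement[node].add(idx)
    # Sanity check: no single node sees every fragment.
    assert all(len(held) < fragments for held in placement.values())
    return placement

key = b"user-secret-key"
plaintext = b"quarterly report, commercially sensitive"
ciphertext = xor_encrypt(plaintext, key)
parts = fragment(ciphertext, 4)                      # 4 fragments
layout = place(len(parts), 4,                        # 4 replicas each
               [f"node-{i}" for i in range(8)])      # over 8 nodes

# Round trip: reassembling all fragments and decrypting restores the
# original, which only the key holder can do.
assert xor_encrypt(b"".join(parts), key) == plaintext
```

Because encryption happens before fragmentation, a node that steals its share holds only a slice of ciphertext; the `place` assertion makes the “no node holds everything” property explicit for this configuration.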

The target audience of the project is decentralized services and applications relying on blockchain platforms, whose development is often hampered by the limited scalability of the blockchain systems they are based on. The Casper API team is developing a single decentralized cloud storage infrastructure that could be used by all existing DApps, and by those yet to be created, regardless of the platform they rely on (Ethereum, EOS, Graphene, Waves or any other).

This means that blockchain-based decentralized storage services have placed their bets on a market niche that could receive a boost over the next few years as the crypto economy grows. This niche should not be overlooked: decentralized repositories have enough potential to become the main force behind the growth of the crypto economy, and the base for virtual storage of other assets as well.
