With a strong interest in blockchain, cryptocurrencies, and IoT, Tatsiana Yablonskaya has gained a deep understanding of these emerging technologies and believes in their potential to shape the future.
BigchainDB positions itself as a decentralised, scalable database capable of one million writes per second.
“BigchainDB fills a gap in the decentralization ecosystem: a decentralized database, at scale. It points to performance of 1 million writes per second throughput, storing petabytes of data, and sub-second latency,” says the official website. Let’s take a closer look.
BigchainDB took a big data distributed database and added blockchain characteristics: decentralised control, immutability, and the creation and movement of digital assets. The result is a decentralised database capable of one million writes per second with petabytes of storage capacity.
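To illustrate the immutability characteristic mentioned above, here is a minimal sketch (not BigchainDB's actual implementation) of hash-chained records, where each record's hash covers the previous one, so tampering with any earlier entry invalidates everything after it:

```python
import hashlib
import json

def make_record(prev_hash, payload):
    """Create a record whose hash covers the previous record's hash,
    so altering any earlier record breaks the whole chain."""
    body = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

def verify_chain(records):
    """Re-derive each hash and check the prev-links; True if intact."""
    prev = None
    for rec in records:
        body = {"prev": rec["prev"], "payload": rec["payload"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != digest or rec["prev"] != prev:
            return False
        prev = rec["hash"]
    return True

# Build a small chain of hypothetical asset records.
chain = []
prev = None
for payload in ["CREATE asset-1", "TRANSFER asset-1 to Alice"]:
    rec = make_record(prev, payload)
    chain.append(rec)
    prev = rec["hash"]

intact = verify_chain(chain)            # chain verifies as written
chain[0]["payload"] = "CREATE asset-2"  # tamper with history
broken = verify_chain(chain)            # verification now fails
```

The record names and the CREATE/TRANSFER payloads are illustrative assumptions, but the hash-chaining idea is the standard mechanism behind blockchain-style immutability.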
According to the BigchainDB white paper, “its permissioning system enables configurations ranging from private enterprise blockchain databases to open, public blockchain databases. BigchainDB is complementary to decentralised processing platforms like Ethereum, and decentralised file systems like InterPlanetary File System (IPFS)”.
The paper mentions the technology perspectives that contributed to the BigchainDB design: traditional blockchains, distributed databases, and a case study of the domain name system (DNS). Adding blockchain-like characteristics to a distributed database became possible thanks to the concept of blockchain pipelining, which is the key to its scalability.
BigchainDB grew out of the digital art ownership platform Ascribe, where co-founders Bruce Pon, Trent McConaghy and Masha McConaghy began to encounter scalability problems. McConaghy comes from a machine learning background, and the team also drew on experience with big data databases and protocol engineering to create BigchainDB.
The team started with a single distributed database, RethinkDB, one of the most popular databases that nobody has heard of, but extremely powerful. BigchainDB’s creators predict that multiple blockchain databases will appear in the next five years. Since the release of the white paper, BigchainDB has already received hundreds of enquiries.
“BigchainDB obviates the need to have data stored within Ethereum, because it’s inherently inefficient and requires additional code to make the data queryable. The ideal stack is to have an Ethereum smart contract layer running in its Virtual Machine, with BigchainDB as the blockchain database that can house tokens for ether, tickets, serials of physical goods or casino chips. Whatever you want to track in your system in a decentralised way.”
Bruce Pon, co-founder of BigchainDB, told IBTimes: “On top of that we built a federation where each node votes on every transaction, which is a layer on top of a distributed database, so that every transaction has to have a certain quorum of votes for a transaction to pass. Such a federated model could generally have a minimum of between five up to 60 or so validating nodes.”
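The quorum voting Pon describes can be sketched in a few lines. This is a toy model, not BigchainDB's actual voting protocol: the node names, the validity rule, and the majority quorum are all assumptions for illustration.

```python
# Hypothetical federation: the article mentions roughly 5 to 60 validating nodes.
NODES = [f"node-{i}" for i in range(5)]

def vote(node, tx):
    """Stand-in for a node's validation logic: here a transaction is
    valid if its amount is positive. Real nodes would check signatures,
    double spends, schema, and so on."""
    return tx["amount"] > 0

def federation_accepts(tx, nodes):
    """A transaction passes once a majority quorum of nodes votes valid."""
    quorum = len(nodes) // 2 + 1
    valid_votes = sum(1 for node in nodes if vote(node, tx))
    return valid_votes >= quorum

accepted = federation_accepts({"amount": 10}, NODES)   # all 5 nodes vote valid
rejected = federation_accepts({"amount": -1}, NODES)   # no node votes valid
```

The point of the layer is that no single node decides: a transaction is committed only when enough independent validators agree.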
“The second innovation is what we call pipelining, where we laid out all the transactions in a row and then we validate them after. So you could write as many transactions as you want and then mere milliseconds afterwards it gets validated. That allows you to write as fast as you can and validate afterwards.”
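The write-then-validate pattern Pon describes can be sketched as follows. This is a minimal single-process model, not BigchainDB's implementation: the class name, the queue-based design, and the toy validity rule are assumptions made for illustration.

```python
from collections import deque

class PipelinedLedger:
    """Sketch of write-then-validate pipelining: writes are appended
    immediately to an ordered log, and a separate validation pass
    catches up afterwards, so writers never wait for validation."""

    def __init__(self):
        self.log = []            # ordered, append-only transaction log
        self.pending = deque()   # log indices not yet validated
        self.status = {}         # log index -> "valid" / "invalid"

    def write(self, tx):
        """Fast path: just append; validation happens later."""
        self.log.append(tx)
        self.pending.append(len(self.log) - 1)

    def validate_pending(self):
        """Slow path: walk the queue and mark each transaction."""
        while self.pending:
            i = self.pending.popleft()
            ok = self.log[i].get("amount", 0) > 0  # toy validity rule
            self.status[i] = "valid" if ok else "invalid"

ledger = PipelinedLedger()
for amount in (5, -3, 7):      # a burst of writes, no validation yet
    ledger.write({"amount": amount})
ledger.validate_pending()      # validation catches up afterwards
```

Decoupling the append from the validity check is what lets write throughput scale independently of validation cost, which is the essence of the pipelining claim.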