China has an interesting relationship with blockchains and cryptocurrencies. China banned cryptocurrencies and ICOs, which caused quite a stir around the world. Having said that, some of the contributions it has made to the space simply can't be ignored. In fact, the five projects that we have chosen here will show you the true potential of the Chinese blockchain space. Presenting to you, the top 5 Chinese projects of all time (in no particular order).
When you think of “blockchain” and “China”, the first thing that pops into your head is Neo. For a long time, it has been called “China’s Ethereum” and even the “Ethereum Killer.” Having said that, let’s look deeper into the project and see if it is worth all the hype.
Neo, formerly known as Antshares, is a “non-profit community-based blockchain project that utilizes blockchain technology and digital identity to digitize assets, to automate the management of digital assets using smart contracts, and to realize a “smart economy” with a distributed network.”
Shanghai-based blockchain R&D company OnChain is the force behind Neo. OnChain and Neo are both helmed by CEO Da Hongfei and CTO Erik Zhang. Research on Neo started around 2014. In 2016, OnChain was listed among the Top 50 Fintech Companies in China by KPMG.
So, what Neo plans to do is to usher in the era of Smart Economy. Well, according to the NEO whitepaper, Smart Economy has 3 parts to it:
- Digital Assets.
- Digital Identity.
- Smart Contract.
So, what is a digital asset?
A digital asset is anything that exists in binary format and comes with the right to use it. This "right" is important for it to qualify as a digital asset. Digital assets have so far existed in centralized environments, which can be really risky. With blockchain technology, it becomes far easier and safer to own digital assets.
The blockchain technology uses cryptoeconomic features to completely eliminate the need for a third party and to own assets in a decentralized, safe, and trustworthy manner. NEO utilizes two forms of digital assets:
- Global Assets.
- Contract Assets.
Global assets can be identified by all smart contracts and clients and are recognized by the whole system.
Contract assets can only be recognized in specific contracts and cannot be used by anyone else. Think of local tokens like GNT which can be recognized in Golem but not in any other contract like Ontology.
In order for digitization of assets to work, it is necessary to digitize identities as well in an efficient manner.
This is how Wikipedia defines Digital Identity:
A digital identity is information on an entity used by computer systems to represent an external agent. That agent may be a person, organization, application, or device. ISO/IEC 24760-1 defines identity as “set of attributes related to an entity.”
The most widely accepted digital certificate issuance model is the X.509 digital identity standard, based on Public Key Infrastructure, which is exactly what the NEO platform uses. Along with this, the Web of Trust point-to-point certificate issuance mode is supported as well.
The following will be used for Identity verification in NEO:
- Use of facial features.
- Other multi-factor methods.
Smart contracts are automated and self-enforcing contracts which allow you to exchange value in a transparent and credible way while avoiding third parties. These transactions can be traced and are irreversible. Nick Szabo, in 1996, described a smart contract as "a set of promises, specified in digital form, including protocols within which the parties perform on these promises…"
According to Szabo, the first known form of the smart contract was the vending machine.
Let’s look at how the vending machine works and then we can draw our parallels with the smart contracts.
- First, you (the customer) put in the required amount of money needed to buy a product from the vending machine.
- Then you choose a product whose price is less than or equal to the amount of money that you have put in.
- The machine gives you that product.
Now, if you go through the steps you will notice two things in particular which display the very essence of smart contracts:
- Each and every step needs to be accounted for. You can’t jump on to the next step without completing the preceding step.
- Secondly, you (the buyer) are interacting directly with the seller (the vending machine) without any middleman in between.
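The vending machine's step-by-step, no-middleman logic can be sketched in a few lines of Python. The class and product names here are purely illustrative, not any real contract's code:

```python
class VendingMachine:
    """Toy 'smart contract': every step must complete before the
    next, and the buyer interacts with the machine directly."""

    def __init__(self, prices):
        self.prices = prices   # product -> price
        self.inserted = 0

    def insert(self, amount):
        # Step 1: the customer puts in money.
        self.inserted += amount

    def choose(self, product):
        # Step 2: can't skip step 1; funds must cover the price.
        price = self.prices[product]
        if price > self.inserted:
            raise ValueError("insufficient funds")
        # Step 3: dispense the product and return any change.
        change = self.inserted - price
        self.inserted = 0
        return product, change
```

Trying to call `choose` before inserting enough money fails, which mirrors the "each step must be accounted for" property.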
Smart contracts are needed to code Dapps. However, before we go any further, we must know what Dapps require for high functionality.
What do DAPPs require?
Or, to frame it more specifically, what does a DAPP require to be successful and a hit with the mainstream audience? What are its absolute minimum requirements?
A Dapp should have the mechanism needed to scale up enough to millions of users. This is extremely useful if you want the Dapp to go mainstream. A smart contract platform will need to have scalability support.
A Dapp developer needs to create dapps which will be free for users to use. In essence, no user should have to pay in order to gain the benefits of the Dapp.
If a smart contract platform freezes up during upgradation, then that could be extremely problematic. The platform should allow the developers to upgrade without causing all the applications to freeze up.
A low-latency network is one that is optimized to process a very high volume of data messages with minimal delay (latency). Such networks are designed to support operations that need near real-time access to rapidly changing data. A Dapp must be able to run smoothly with the lowest possible latency.
The blockchain must always perform at its highest possible capabilities, but for that to happen it must be extremely versatile. There are several tasks a blockchain does that can be parallelized and executed at once.
Digital signature verification is a good example of a "parallelizable" task. All that you need for signature verification is the key, the transaction, and the signature. With just these three pieces of data, you can conduct verifications in a parallelized manner.
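To illustrate why signature checks parallelize so well, here is a toy sketch in Python. The "signature" here is just a hash of key plus transaction, purely for illustration; real blockchains use schemes like ECDSA:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def sign(key: str, tx: str) -> str:
    # Toy scheme: signature = sha256(key + tx). Illustrative only.
    return hashlib.sha256((key + tx).encode()).hexdigest()

def verify(key: str, tx: str, signature: str) -> bool:
    # Each check needs only (key, tx, signature): no shared state.
    return sign(key, tx) == signature

txs = [("key%d" % i, "tx%d" % i) for i in range(4)]
sigs = [sign(k, t) for k, t in txs]

# Because each verification is independent, they can run in parallel.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda args: verify(*args),
                            [(k, t, s) for (k, t), s in zip(txs, sigs)]))
```

Since no verification depends on any other, the work divides cleanly across workers.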
Having said that though, not all tasks on the blockchain can be parallelized. There are some tasks that absolutely need to be executed in a sequence. Think of transaction execution itself. Multiple transactions can't be executed in parallel; they need to be done one at a time to avoid errors like double spends.
Note: Double spending basically means spending the exact same coin on more than one transaction at the same time. This problem is circumvented thanks to miners: in a blockchain, transactions happen only when miners put them in the blocks that they have mined.
So, now that we know what smart contracts are and what is required by Dapps to execute at maximum efficiency, let’s look at the three properties that are absolutely critical for smart contracts to have:
We have already talked about determinism before (while discussing hash functions). A program is deemed deterministic if it gives the same output for the same input every single time. Having said that, there are moments when a program can act in a non-deterministic manner:
- If the program calls a non-deterministic function in the middle of execution.
- If the data source that the program uses is non-deterministic in nature.
- When a program calls another program, aka dynamic calling.
Smart contracts must be able to terminate execution within a given time limit. In other words, there must be a way to externally "kill" the contract when necessary. The steps that can be taken to ensure this are:
- Turing Incompleteness: A Turing incomplete contract is incapable of making jumps and/or loops. This ensures that the contract can’t enter an endless loop.
- Step/Fee meter: A contract can keep track of the number of steps that it has taken to make sure they don’t exceed a particular step limit. Or, they can also use a fee meter. In a fee meter, a prepaid fee is paid to execute the contract, and each step of the program takes a particular amount of fee to execute. Once the fee has been utilized, the contract stops executing.
- Timer: The contract has a pre-determined timer and it executes for the duration of the timer. Once the time-limit exceeds, the contract stops executing.
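The step/fee-meter idea can be sketched in Python. This is a hypothetical meter, not how any particular platform implements it: each step burns some prepaid fee, and execution halts the moment the fee runs out:

```python
class OutOfGas(Exception):
    """Raised when the prepaid fee is exhausted."""
    pass

class FeeMeter:
    def __init__(self, prepaid_gas: int):
        self.gas = prepaid_gas

    def step(self, cost: int = 1):
        # Every step of the program costs fee; halting is guaranteed
        # because the fee is finite.
        if self.gas < cost:
            raise OutOfGas("execution terminated")
        self.gas -= cost

def run_contract(meter: FeeMeter, step_costs) -> int:
    """Execute steps until done or until the meter runs dry."""
    executed = 0
    for cost in step_costs:
        meter.step(cost)
        executed += 1
    return executed
```

A contract funded with enough gas runs to completion; an underfunded one is forcibly stopped partway through, which is exactly the terminability guarantee.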
Anyone and everyone can access the blockchain and upload a smart contract. However, a lot of risks come with this level of freedom: anyone can, knowingly or unknowingly, code smart contracts containing viruses and bugs.
This is why smart contract platforms must have the isolation property, so that a faulty contract can't hamper the whole ecosystem. Hence, smart contracts must be isolated in a sandbox so that they can execute or upgrade safely.
Now that we have seen these features, it is important to know how they are executed. Usually, the smart contracts are run using one of the two systems:
- Virtual Machines: Platforms like Ethereum and Neo use this.
- Docker: Made famous by Fabric.
Let’s compare these two and determine which makes for a better ecosystem. For simplicity’s sake, we are going to compare Ethereum (Virtual Machine) to Fabric (Docker). Let’s compare the three properties:
Virtual Machines: Smart contracts can't call non-deterministic functions, and the data is limited to on-chain information only. The virtual machine may execute non-deterministic dynamic calls, but that's OK because the data accessed is deterministic in nature.
Docker: The system is designed to be more user-reliant. Meaning, the system is dependent on the user’s honesty and needs to trust them to do the right thing and code deterministic smart contracts.
Virtual Machines: Let's look at how the Ethereum Virtual Machine (EVM) uses the terminability property. Ethereum contracts need "gas" to execute. Each and every step in the contract requires a certain amount of gas. If the contract runs out of gas, then it terminates.
Docker: Fabric’s Docker uses an in-built timer. Basically, the contract lasts for a particular time-limit and then it goes off. The problem with this is that the timer can change from node to node and each node has its own computational power. This may cause a disparity and risk the overall consensus process.
Virtual Machines: Has good isolation properties by presenting a proper sandbox for the smart contracts to operate in.
Docker: Is namespace-reliant and not capable of proper isolation.
As is pretty clear so far, a virtual machine approach like Ethereum's is more desirable than a Docker-based one. Having said that, Docker does have one distinct advantage over its more illustrious peer: developers get complete code flexibility in Docker, while that is not really the case with virtual machines.
E.g. in Ethereum, one needs to learn Solidity in order to create smart contracts.
This is what Neo wanted to change with their project. What the Neo developers aimed to do was to create a Virtual Machine that could give all the advantages of a VM and also give the code-flexibility of a docker.
It is evident that Neo is inevitably going to be compared to Ethereum. So before we go any further, let’s check some of the similarities:
- Both of them provide a platform for Dapps and ICOs
- Everything on the blockchain happens as a result of asset/token exchange.
- A machine is deemed Turing-complete when, given enough resources, it has the capability to solve any computable problem. Both the Neo and Ethereum virtual machines (NeoVM and EVM) are Turing-complete.
Now that we know the similarities, let’s dive deeper into Neo’s unique properties.
NEO and GAS. The Two Tokens
Unlike other smart contract platforms, Neo utilizes two tokens: NEO (formerly known as Antshares, ANS) and GAS (formerly known as Antcoins, ANC).
NEO is the token that grants administrative rights within the Neo ecosystem. These "rights" include bookkeeping, NEO network parameter changes, etc. There will be a total of 100 million NEO tokens. These tokens can't be subdivided into decimals; the smallest possible unit is 1.
So, since there are 100 million NEO tokens in supply, how exactly is it divided? Turns out that token distribution is straight down the middle.
- The first part of 50 million NEO tokens was distributed during the ICO.
- The second part was locked up for a year, until October 16, 2017. This part is going to be used for the long-term development and benefit of the NEO project. It will be divided into four further portions of 10 million, 10 million, 15 million, and 15 million.
The plans for this second portion of NEO tokens are as follows:
- 10 million NEO to be used as motivation for the NEO council to do a good administrative job.
- 10 million NEO will be used to incentivize the developers within the Neo ecosystem to constantly upgrade the system.
- 15 million of the tokens will be used to invest in the other blockchain projects that are owned by the NEO council.
- 15 million NEO tokens will be used for contingency in case of emergency situations.
So, if the NEO tokens are simply meant for administrative and voting purposes, what actually powers the smart contracts?
Well, for this, we have the second token called “GAS.” GAS powers the smart contracts, it is used as currency within the ecosystem and will be used as economic incentive for the projects that are working inside NEO.
Unlike the NEO tokens, the GAS tokens are divisible and could go down to 0.00000001.
There is another interesting point of difference.
The NEO tokens, all 100 million of them, have been premined in the genesis block already. However, when it comes to GAS, all the tokens have not yet been generated. The idea is to generate these tokens in accordance with the NEO tokens via a decay algorithm. The NEO tokens and GAS tokens are in an extremely dependent relationship, in the sense that, if NEO tokens move from Address A to Address B, the corresponding GAS tokens will move as well.
The initial GAS generation was 8 GAS per block, and it will gradually reduce by 1 GAS per year, i.e. roughly every 2 million blocks. At the 44 millionth block, GAS tokens will stop generating.
Let’s look at how the GAS generation algorithm will work:
- 16% of the GAS will be created in the first year.
- 52% will be created in the first four years.
- 80% GAS will be created in the first 12 years.
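A minimal sketch of this decay schedule in Python, assuming the per-block rate starts at 8, drops by 1 every 2 million blocks (one "year"), and floors at 1 until the 44 millionth block, reproduces those percentages (16%, 52%, and 80% of the 100 million total):

```python
def gas_rate(block_height: int) -> int:
    # Starts at 8 GAS per block, drops by 1 every 2M blocks, floors at 1.
    return max(8 - block_height // 2_000_000, 1)

def total_gas(up_to_block: int) -> int:
    """Total GAS generated up to the given block height."""
    total = 0
    block = 0
    while block < up_to_block:
        # Sum rate over whole 2M-block "years" (rate is constant per year).
        span = min(2_000_000, up_to_block - block)
        total += gas_rate(block) * span
        block += span
    return total
```

Under these assumptions, year one yields 16 million GAS, years one through four yield 52 million, years one through twelve yield 80 million, and generation completes at 100 million GAS by block 44 million.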
Neo’s Consensus Mechanism
The consensus mechanism utilized by Neo is dBFT, or Delegated Byzantine Fault Tolerance. A mechanism is called Byzantine Fault Tolerant when it can successfully answer the Byzantine Generals' Problem.
Byzantine Generals' Problem
In order to get anything done in a peer-to-peer network, all the nodes should be able to come to a consensus. The thing is, though, for this system to work, it relies heavily on people acting in the best interest of the overall network. However, as we already know, people aren't really trustworthy when it comes to acting in an ethical manner. This is where the Byzantine Generals' Problem comes in.
Imagine this situation.
There is an army surrounding a well-fortified castle. The only way they can win is if they attack the castle together as a unit. However, they are facing a big problem: the army's divisions are far apart from each other, the generals can't directly communicate to coordinate the attack, and some of the generals are corrupt.
The only thing that they can do is to send a messenger from general to general. However, a lot of things could happen to the messenger. The corrupt generals can intercept the messenger and change the message. So, what can the generals do to make sure that they launch a coordinated attack without relying on the ethics of each individual general? How can they come to a consensus in a trustless way to do what needs to be done?
The delegated Byzantine Fault Tolerance or dBFT shows you a proper method by which you can answer that question. In order to understand how the dBFT system works, let’s consider a hypothetical political scenario.
Imagine that we have a democratic country with a certain number of citizens. As with any democracy, these citizens will elect delegates to represent them. Anyone can try to become a delegate, provided they meet certain conditions. The job of these delegates is pretty simple and straightforward: pass laws that will make the citizens happy. If the citizens are not happy, the delegate loses the job and a new delegate gets voted in.
So, how do these delegates pass votes?
- One of the delegates is randomly chosen as a speaker.
- The speaker looks at all the demands that the citizens have made and creates a law.
- A “satisfaction meter” of this law is calculated by the speaker and sent to the delegates.
- The delegates check the speaker’s calculation and see if it matches with theirs or not.
- If at least 66% of the delegates agree with the speaker's calculation, then the law passes. Otherwise, a new speaker is chosen and the process starts again.
So, we have seen how this works in a political scenario. However, how does it hold up in the context of the Neo blockchain?
- Citizens: Anyone who owns the NEO tokens.
- Delegates: The bookkeeping nodes. In order to qualify as one, you must satisfy certain conditions: you must have special equipment, a dedicated internet connection, and a certain amount of GAS.
- Demands of the citizens: Basically all the transactions.
- Law: The block
- Satisfaction Meter: The hash of the block.
The citizens are whoever owns NEO tokens aka ordinary nodes.
In order for a system to classify as Byzantine Fault Tolerant, it should function DESPITE having malicious elements. So, how does this system deal with malicious actors? There are two failure scenarios to consider:
Note: For this example, we are going to look at a scenario where there are 3 delegates and 1 speaker. So overall 4 participants.
First up we have the case of a malicious speaker.
Suppose the speaker has sent a wrong hash to two delegates and an accurate hash to one. This kind of situation can easily be mitigated thanks to two properties of cryptographic hash functions in particular:
- Firstly, hash functions are deterministic. No matter what happens, A upon hashing will ALWAYS give A’.
- Secondly, hash correctness can be easily checked. Meaning, given A and A', anyone can use the hash function to check whether A indeed hashes to A' or not.
So, the two delegates will simply check the wrong hash with the correct hash, and then return an error. Since the 2 out of 3 delegates are disapproving of the speaker’s choice, it won’t go through.
Now suppose the speaker sends out the correct hash to all the delegates, but one of the delegates turns out to be malicious. Two of the three delegates approve the message and send out a positive response, while the malicious one sends out a negative response. Since the 66% approval rate has already been reached, the malicious delegate doesn't matter.
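Both failure cases rest on hashes being deterministic and cheap to re-check. Here is a toy Python sketch with one speaker and three delegates; it is a heavy simplification of real dBFT message flow, for illustration only:

```python
import hashlib

def block_hash(block: str) -> str:
    # Determinism: the same block always yields the same hash.
    return hashlib.sha256(block.encode()).hexdigest()

def delegate_verifies(block: str, claimed_hash: str) -> bool:
    # Easy verification: just recompute the hash and compare.
    return block_hash(block) == claimed_hash

def consensus_reached(block, hashes_sent, honest=(True, True, True)):
    """Each delegate votes on the hash the speaker sent them.
    A malicious delegate lies (flips its honest vote)."""
    votes = []
    for claimed, is_honest in zip(hashes_sent, honest):
        ok = delegate_verifies(block, claimed)
        votes.append(ok if is_honest else not ok)
    # The proposal passes only with at least 2/3 approval.
    return sum(votes) >= (2 / 3) * len(votes)
```

A speaker sending bad hashes gets outvoted by the honest majority, while a single lying delegate can't block a correct proposal.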
The Smart Contract 2.0
Like we have told you before, Neo is bringing forth the Smart Contract 2.0. It has 3 parts to it:
Image Credit: Neo Whitepaper
As the Neo Whitepaper states, the NeoVM or Neo Virtual Machine is a lightweight, general-purpose VM whose architecture closely resembles JVM and .NET Runtime. The virtual machine acts like a virtual CPU which performs the following among others:
- Reads and executes the instructions in the smart contracts.
- Performs process control based on the functionality of the instruction operations.
The InteropService is used to load the:
- Blockchain ledger
- Digital assets
- Digital identity
- Persistent storage area
- Other underlying services.
The InteropService helps Neo achieve a sense of interoperability. After scaling, interoperability is often identified as the "next big issue" to solve in blockchain technology. The InteropService is sort of like a virtual machine for the virtual machine: it lets smart contracts access these services at runtime and achieve advanced functionality. It also helps NeoVM achieve interoperability, since NeoVM can be ported to any blockchain or even non-blockchain system.
Currently, the interoperable service layer provides some APIs for smart contracts to access on-chain data. The data that can be accessed includes:
- Block information.
- Transaction information.
- Contract information.
- Asset information.
Neo brings along three more interesting features that are worth looking into.
NeoX adds to the interoperability feature of Neo by using cross-chain interoperability. NeoX is divided into two parts:
- Cross-chain assets exchange protocol
- Cross-chain distributed transaction protocol
Cross-chain assets exchange
NeoX allows multiple participants to exchange assets across different chains and ensures that all the steps in the entire transaction process either succeed or fail together. To achieve this functionality, NeoX extends existing double-stranded atomic asset exchange protocols. The beauty of this is that as long as another blockchain can provide simple smart contract functionality, it can be compatible with NeoX.
Cross-chain distributed transaction
NeoX makes cross-chain smart contracts possible, where a smart contract can run different parts of itself on multiple chains. This allows a Neo smart contract to execute different steps of a transaction on different blockchains with uncompromised consistency, enabling intriguing cross-chain collaborations.
Neo utilizes a distributed hash table (DHT) to ensure efficient file storage. This system is called NeoFS and it indexes the data through the contents of the file (which is hashed) rather than the file path (URL) which could be difficult to maintain. NeoFS plans to resolve the balance between redundancy and reliability via cryptoeconomic incentives and establishing backbone nodes.
NeoFS will use the NeoContract system to contribute to the InteropService interoperability service. It will enable smart contracts to store large files on the blockchain and manage access to those files. It will also combine with digital identity, so that digital certificates used by digital identities can be assigned, sent, and revoked without a central server to manage them.
Quantum computing is a very valid fear for all cryptocurrencies. It is quite possible that quantum computing will break RSA- and ECC-based cryptographic mechanisms. Neo is introducing NeoQS, a lattice-based cryptographic mechanism. It is built around the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP), which are considered extremely hard to solve even for quantum computers.
We took some time to introduce you to Neo because, without any shadow of a doubt, it is the most successful Chinese blockchain project. Neo is backed by the Chinese government and several bigshots like WINGS and Alibaba.
Let's stay on the topic of scaling. The next project that we are going to introduce has utilized an ingenious technique to exponentially increase its overall throughput. The name of the project is Zilliqa, and the technique that it utilizes is sharding.
Why is Scalability Needed?
The first thing that you need to understand is that the crypto space is struggling with scalability issues. As of right now, Ethereum manages around 25 transactions per second, which is more than 3 times that of Bitcoin (7 transactions per second), but is still measly compared to other payment solutions. Scalability becomes all the more problematic when you consider that Ethereum is easily the most popular smart contract platform out there, mainly because of the ICO craze.
Ethereum, as of right now, has 17,788 nodes while Bitcoin has 9,927. Because of critical operations like ICOs, it is all the more important for Ethereum to act in an efficient manner. This issue was brought to the forefront during the Cryptokitties debacle.
Cryptokitties was one of the most well-known examples of Ethereum biting off more than it could chew. It was a simple game where users could trade and collect virtual kittens. It became extremely popular; so much so that, at one point in time, it was the third-highest gas-consuming smart contract. The demand for these kitties rose so high that they ended up clogging the Ethereum blockchain, and the number of unconfirmed transactions on Ethereum rose by a significant amount:
The worst consequence of this transaction delay was that the SophiaTX ICO was postponed by 48 hours.
The sheer popularity of the kitties raised some serious questions for cryptocurrencies: because of the increasing demand, the number of unconfirmed transactions on the blockchain increased exponentially.
Image Credit: Quartz
Zilliqa has two interesting features that it brings to the table:
- Sharding.
- A Proof-of-Work and BFT hybrid consensus mechanism.
Before we get into that, let's introduce ourselves to the Zilliqa team.
The Zilliqa Team
The CEO, Xinshu Dong, has a Ph.D. in Computer Science from the National University of Singapore. He’s a cybersecurity expert, responsible for several national security projects in Singapore. His research has also appeared at reputable conferences and in journals.
The Chief Scientific Advisor, Prateek Saxena, is a research professor in computer science at the National University of Singapore and has a Ph.D. in Computer Science from UC Berkeley. He works on blockchains and computer security. His research has influenced the design of browser platforms, web standards and app stores widely used today. He has received several premier awards such as the Top 10 Innovators under 35 (MIT TR35 Asia) in 2017.
Amrit Kumar is the project’s Crypto Lead. He’s a Research Fellow at NUS. He has a Ph.D. from Université Grenoble-Alpes, France and an Engineer’s diploma from Ecole Polytechnique, France.
Zilliqa's advisory board includes prominent figures in the blockchain space: Loi Luu, Co-founder of Kyber Network; Vincent Zhou, Founding Partner of FBG Capital; Nicolai Oster, Partner at Bitcoin Suisse AG; and Alexander Lipton, Founder and CEO of StrongHold Labs.
So what exactly does Sharding mean? Suppose there are three nodes A, B and C and they have to verify transaction T. Usually what happens in an ecosystem like Ethereum and Bitcoin is that all these nodes need to verify T at the same time, which is extremely inefficient and another reason why both of them are so slow.
Now, what if the data T was broken down into three shards: T1, T2, and T3, and A, B, and C were to verify these shards simultaneously? Can you imagine how much time the network would save? That is the power of sharding.
Sharding and Databases
Sharding was originally a term used in database systems. Suppose there is a huge and bulky database. Obviously, searching for a specific piece of data in that database is extremely slow. So how does sharding help in that case?
What if you do a horizontal partition on your data and turn them into smaller tables and store them on different database servers?
Now, you might be asking, why a horizontal partition and not a vertical partition?
Think of the way the tables are designed:
Now, if we were to partition this table vertically:
You see what happens? When you vertically partition a table they tend to become two completely different tables altogether.
However, if we were to partition them horizontally:
You see? It is the same table/database but with less data. These smaller databases are known as shards of the larger database. Each shard should be identical, with the same table structure.
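A quick Python sketch of a horizontal partition, using a made-up "users" table: rows are split across shards by key, while every shard keeps the identical column structure:

```python
# Toy "users" table; the column names are illustrative only.
rows = [
    {"id": 1, "name": "Alice"},
    {"id": 2, "name": "Bob"},
    {"id": 3, "name": "Carol"},
    {"id": 4, "name": "Dave"},
]

def horizontal_partition(table, n_shards):
    """Split rows across shards by a key; columns stay identical."""
    shards = [[] for _ in range(n_shards)]
    for row in table:
        # A simple modulo on the primary key decides the shard.
        shards[row["id"] % n_shards].append(row)
    return shards

shards = horizontal_partition(rows, 2)
```

Each resulting shard is a smaller table with the same schema, so a lookup only needs to touch the one shard its key maps to.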
Sharding in Blockchain
Ok, so now let's understand sharding with regard to the blockchain. As we have already discussed, the biggest reason why Bitcoin and Ethereum are so slow is that each and every node must be involved in the validation and verification process. So, let's look at how sharding is going to help us scale up exponentially here.
Before we do that, let’s acquaint ourselves with Merkle Trees.
E.g. in the diagram given above:
Hash 0-0 and Hash 0-1 are the children of the parent node Hash 0. Hash 0 contains the hash of both its children combined. The "Top Hash" in the diagram above is the root hash of this particular Merkle tree, and it can be used to trace down to all the individual hashes.
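Computing a Merkle root is straightforward to sketch in Python. This version duplicates the last hash when a level has an odd number of nodes, which is one common convention (Bitcoin, for instance, does this):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then pair-wise hash levels up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last hash when odd
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because every parent commits to both children, changing any leaf changes the root, which is what makes the root a compact fingerprint of all the data beneath it.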
Alright, now back to sharding.
Let's refer to the current state of the blockchain as the "Global State"; it is public to everyone. This state root is going to be broken up into shard roots, and each of these shard roots will have its own state, represented in the form of a Merkle tree.
So, what happens once sharding is activated? How does the internal mechanics work?
Firstly, as we have already said, the state splits into different shards. Each shard contains its own unique group of accounts, and accounts in different shards can't directly communicate with one another.
Vitalik Buterin uses an interesting analogy to show how sharding works: "Imagine that Ethereum has been split into thousands of islands. Each island can do its own thing. Each of the islands has its own unique features and everyone belonging to that island, i.e. the accounts, can interact with each other AND they can freely indulge in all its features. If they want to contact other islands, they will have to use some sort of protocol."
You are probably wondering, how is this going to change the blockchain?
Before we answer that question, think of what a normal block in the blockchain looks like. It has a header and a body, wherein the body contains all the transactions. This, in essence, creates a single layer of interaction with the transactions.
So, there is a block header and the body which contains all the transactions in the block. The Merkle root of all the transactions will be in the block header.
Sharding is going to change this into two levels of interaction.
Each and every shard has its own group of transactions, and the first level is the transaction group. This transaction group is divided into two:
- Transaction group header.
- Transaction group body.
Transaction Group Header
The header has two distinct parts, left and right.
The Left Part:
- Shard ID: Each transaction group belongs to a particular shard. The Shard ID identifies that shard.
- Pre-state root: The root of the shard's state before the transactions were applied.
- Post-state root: The root of the shard's state after the transactions were applied.
- Receipt root: The receipt root after all the transactions in the shard are applied.
The Right Part:
This part lists the randomly chosen validators who need to verify the transactions within the shard.
Transaction Group Body
This part contains the IDs of all the transactions which are contained within the shard.
Properties of Level One
- Every transaction contains the Shard ID of the shard that it belongs to.
- Each shard specifies its pre and post state root.
- The transaction group contains all the transactions that are specific to a particular shard.
- A transaction belonging to a particular shard occurred between two accounts that are native to that shard.
The Second Level
The second level contains two roots:
- The state root: Represents the root of the entire blockchain state Merkle tree.
- The transaction group root: The root of all the transaction groups present inside a particular block.
Properties Of Level Two
- Level two accepts transaction groups rather than transactions.
- A transaction group is valid only if:
  a) Its pre-state root matches the shard root in the global state.
  b) The signatures in the transaction group are all validated.
- If the transaction group gets in, then the global state root automatically changes into the post-state root of that particular shard ID.
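The two validity checks and the root update can be sketched as follows, assuming a transaction group is a simple dict with hypothetical field names:

```python
def accept_transaction_group(global_shard_roots, group):
    """Sketch of the level-two checks. `group` uses made-up keys:
    shard_id, pre_state_root, post_state_root, signatures_valid."""
    shard = group["shard_id"]
    # (a) The group's pre-state root must match the shard root
    #     currently recorded in the global state.
    if global_shard_roots[shard] != group["pre_state_root"]:
        return False
    # (b) Every signature in the group must validate.
    if not group["signatures_valid"]:
        return False
    # On acceptance, the global shard root advances to the
    # post-state root of that shard.
    global_shard_roots[shard] = group["post_state_root"]
    return True
```

Note how a group built on a stale pre-state root is rejected: once the shard root has advanced, only groups that build on the new root can get in.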
If you are familiar with Bitcoin then you must already be aware of the proof-of-work mechanism. This is how it works:
- The miners try to solve cryptographic puzzles to add a block to the blockchain.
- The process requires a lot of effort and computational power.
- The miners then present their block to the bitcoin network.
- The network then checks the authenticity of the block by simply checking the hash; if it is correct, the block gets appended to the blockchain.
- So, discovering the required nonce and hash should be difficult, however checking whether it is valid or not should be simple. That is the essence of proof-of-work.
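The asymmetry described above (expensive to find, cheap to verify) can be illustrated with a toy proof-of-work loop; the difficulty scheme here is purely illustrative, not Bitcoin's real one:

```python
import hashlib

# Toy proof-of-work: find a nonce so that SHA-256(data + nonce)
# starts with `difficulty` leading zeros.
def mine(block_data, difficulty):
    target = "0" * difficulty
    nonce = 0
    while True:  # expensive: many hash attempts on average
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data, nonce, difficulty):
    # Cheap: a single hash computation checks the claimed solution.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("block #1", 3)
assert verify("block #1", nonce, 3)
```

Raising `difficulty` by one multiplies the expected mining work by 16 (one more hex zero), while verification stays a single hash, which is the essence of proof-of-work.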
Zilliqa also wanted to use proof-of-work because, despite its flaws, it has been extensively tested in real life. Zilliqa adds its own twist, however, by combining proof-of-work (PoW) with Practical Byzantine Fault Tolerance (PBFT) in a hybrid consensus mechanism. When a node first starts mining, it must complete a PoW hash puzzle. Since PoW requires real computing power, a machine can effectively operate only one node; this establishes identity and provides a strong defence against Sybil attacks, in which one bad actor creates multiple identities to overwhelm the network. Note that Zilliqa does not use PoW for consensus itself.
After a node has proven its identity, it gets assigned to a particular shard. Within each shard, consensus is taken over by PBFT. To give a very general overview of how PBFT works:
- There is a predefined set of validators who are chosen by a central authority.
- These validators govern the system by agreeing on various things such as transaction verification.
- 66% of the validators need to reach a consensus which is then recorded in the blockchain.
- As long as malicious elements control no more than 33% of the validators, everything runs seamlessly.
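The threshold in the list above can be sketched as a simple vote check (a hypothetical helper, not actual PBFT message handling):

```python
# A decision commits only when at least two-thirds of validators agree;
# integer arithmetic avoids floating-point edge cases at the boundary.
def reaches_consensus(votes):
    agree = sum(votes)  # each True vote counts as 1
    return 3 * agree >= 2 * len(votes)

assert reaches_consensus([True, True, False])       # 2 of 3 agree
assert not reaches_consensus([True, False, False])  # only 1 of 3
```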
The main selling point of PBFT is that not only is it fast and scalable, it helps the blocks achieve finality as well.
Finality, in very loose terms, means that once a particular operation has been done, it will forever be etched in history and nothing can revert that operation. This is particularly important in fields that deal with finance. Imagine that Alice owns a particular amount of an asset in a company. She shouldn't lose ownership of that asset just because of some glitch in the company's processes.
Zilliqa and Scilla
There are two broad families of programming languages: imperative and functional.
Think of all the programming languages that you are somewhat familiar with thus far. C++, Java, and even Solidity. They are all traditional imperative programming languages aka algorithmic programming languages.
Their approach is simple and straightforward. Put down all the steps that the compiler needs to take in order to execute an operation.
Think of a simple addition program.
int a = 1;
int b = 2;
int c = a + b;
See how the approach is here?
- Declare the first integer
- Declare the second integer
- Declare a third integer
- Add the first two integers and store the value in the third integer.
Everything is detailed out in steps.
On the other hand, we have functional languages. This family of programming languages asks itself a simple question:
Is there a way to compress the number of steps required to solve a problem? The answer is a more functional and declarative approach.
So, how does functional programming work?
Let’s look at a simple addition again.
Suppose there is a function f(x) that we want to use to calculate a function g(x), and then we want to use that to work with a function h(x). Instead of solving all of those in a sequence, we can simply club all of them together in a single composed function: h(g(f(x))).
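Here is what that contrast might look like in a general-purpose language (Python here, purely for readability):

```python
# Imperative style: spell out every step.
a = 1
b = 2
c = a + b

# Functional style: compose f, g and h into one expression, h(g(f(x))).
def compose(*funcs):
    def composed(x):
        for fn in reversed(funcs):  # apply right-to-left, like h(g(f(x)))
            x = fn(x)
        return x
    return composed

f = lambda x: x + 1
g = lambda x: x * 2
h = lambda x: x - 3

pipeline = compose(h, g, f)
assert pipeline(5) == h(g(f(5)))  # (5 + 1) * 2 - 3 == 9
```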
See how all those steps got condensed into one simple step? This helps immensely with scalability as well. This is why the Zilliqa team has developed a new programming language known as Scilla.
Scilla separates state and function. It is a functional programming language that draws a distinction between the communication aspects of a contract – transferring funds or calling another contract – and the actual computational work the contract does.
Unlike Solidity, it is not a Turing-complete language; however, this very incompleteness allows it to be subject to formal logic proofs. This is critical because proving contracts lets users know a contract is safe in a verifiable way before using it.
If you are involved in any way in the cryptospace, then you will have heard of VeChain, especially following the whole VeChain Thor rebranding. So, what is so special about them? VeChain (VET) wants to revolutionize the way the luxury goods, supply chain, logistics, food, government, and other industries validate that products are indeed legitimate.
This is one of the most intriguing use-cases of the blockchain and it will be interesting to see how they are going to pull it off. In fact, VeChain began as a supply chain company before moving on to the Dapp platform space.
According to the whitepaper, “The vision of VeChain and the VeChainThor Blockchain is to build a trust-free and distributed business ecosystem platform to enable transparent information flow, efficient collaboration, and high-speed value transfers.”
The Need for VeChain
The luxury goods industry is rife with corruption and counterfeits. Not only is it bad for the consumers, but it is near-fatal for the luxury brands as well. The Global Brand Counterfeiting Report 2018 estimates that “the losses suffered due to global online counterfeiting has amounted to 323 Billion USD in the year 2017, with luxury brands incurring a loss of 30.3 billion dollars through internet sales.”
Just look at those numbers again, it is staggering.
Until now, it was widely believed that counterfeiting is an untamable business with high profits and no matter what happens, the counterfeit industry will always adjust its model to profit off unknowing OR willing consumers.
But, the integration of the blockchain technology can end this, and VeChain is looking to do exactly that.
How Does VeChain Work?
One of the core components of the VeChain mechanism is the smart chip:
- Smart Chip
The smart chip is created in-house and is used to track products throughout their lifecycle. The smart chip can be embedded in a variety of items, such as wine, luxury bags, and food, through a technology called RFID (Radio Frequency Identification).
Using VeChain, you just scan the item’s smart chip to get all of its associated data. This provides businesses with information that’s always current and an accurate account of each item.
Integrating with IoT devices, VeChain also helps with quality control. This is especially useful in the food and agriculture industry where something like a temperature change of a few degrees could ruin an entire product batch.
Let’s give you a real life example of when this very technology in the food industry could have been used to save lives.
Back on October 6, 2006, multiple states in the US were suffering a major E. coli outbreak. The culprit? Spinach.
Around 199 people were affected, of whom 22 were children under 5 years old. 31 of the 199 developed a type of kidney failure called hemolytic-uremic syndrome. Ultimately, 3 people died in the outbreak, one of whom was a 2-year-old child.
As a result of this, the entire food industry went into pandemonium. People were desperately trying to trace the source of the infected spinach, and everyone pulled spinach from the market immediately. It took the Food and Drug Administration (FDA) a total of 2 weeks to find the source of the contaminated spinach; for those 2 weeks, there was no spinach in the market.
The sad part is that this contaminated spinach came from one single lot on one single farm; the tracing process was simply so inefficient that it took that much time and money to zero in on it. Blockchain technology could have cut that tracing time dramatically.
There are several topics that could be covered here; however, because we need to be brief, we will focus on some of the more interesting features and properties of the ecosystem. Right now, let’s cover one of the building blocks of the VeChain ecosystem: the nodes.
Depending on how much VET you hold and the maturity period, you will have 4 different kinds of nodes.
Note: “Node Maturity Period” is a term used in the VeChain ecosystem, meaning once a wallet has the amount needed to qualify for a certain node and the corresponding amount stored in ‘VeThor Forge’, the built-in function in VeChain wallet, then the Node Maturity Period starts to count.
When the maturity period ends, and the quantity of VET stored in ‘VeThor Forge’ does not drop below the threshold at any given moment, then the node status will be officially designated, and the node reward will start to generate.
The four kinds of nodes are as follows:
- Strength Nodes — 10 day maturity period (minimum 10,000 VET)
- Thunder Nodes — 20 day maturity period (minimum 50,000 VET)
- Mjolnir Masternodes — 30 day maturity period (minimum 150,000 VET)
- Thrudheim Masternodes — 12/21/17 maturity start date (minimum 250,000 VET)
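Assuming only the thresholds listed above, tier qualification for the economic nodes can be sketched as follows (this deliberately ignores the KYC requirement that actually gates Thrudheim authority status):

```python
# Economic-node tier qualification implied by the thresholds above.
# Real Thrudheim authority status also requires KYC with the Foundation,
# which a balance check alone cannot capture.
TIERS = [  # (minimum VET, maturity days, tier name), highest first
    (150_000, 30, "Mjolnir Masternode"),
    (50_000, 20, "Thunder Node"),
    (10_000, 10, "Strength Node"),
]

def node_tier(vet_balance, days_matured):
    for minimum, maturity, name in TIERS:
        if vet_balance >= minimum and days_matured >= maturity:
            return name
    return None  # not yet qualified

assert node_tier(60_000, 25) == "Thunder Node"
assert node_tier(9_000, 400) is None  # balance below every threshold
```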
Of the above four node types, there is a further two-way classification.
- Economic Node
- Authority Node
Strength, Thunder, and Mjolnir nodes are all examples of economic nodes. They all receive normal rewards as per their stake.
The Thrudheim Masternodes are the authority nodes and there will be 101 of these present in the ecosystem. They get the same rewards as the economic nodes PLUS 30% of all the THOR consumed by the blockchain transactions. The remaining 70% of the THOR gets burnt.
The Dual Token System
By now you must be pretty confused with all the tokens circulating in this ecosystem. Like Neo, VeChain also has a dual token system:
- VeThor Token (VTHO)
- VeChain Tokens (VET)
The purpose behind this Twin-Token design is to maintain and sustain the transaction cost of using VeChain and to keep volatility as low as possible. The VeChain Foundation has the power to adjust the minimum price of the VTHO tokens depending on its supply and demand.
VTHO, aka Thor, is generated by holding VET tokens. These are the tokens that will be used to transact within the VeChain ecosystem and to serve as economic incentives; think of GAS in Neo.
VET, on the other hand, is the token that you will need to hold to take part in the administration and consensus mechanisms of the ecosystem.
The relation between VTHO and VET generation is as such according to their design:
VeChain will generate one block every 10 seconds. For 10K VET there will be 4.32 VTHO generated every 24 hours.
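Taking those published figures at face value, the implied generation rate works out to:

```python
# Implied generation rate: 4.32 VTHO per 10,000 VET per 24 hours.
VTHO_PER_VET_PER_DAY = 4.32 / 10_000

def vtho_generated(vet, days):
    return vet * VTHO_PER_VET_PER_DAY * days

assert abs(vtho_generated(10_000, 1) - 4.32) < 1e-6   # the quoted figure
assert abs(vtho_generated(25_000, 30) - 324.0) < 1e-6  # 10.8/day for a month
```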
Proof of Authority Consensus
VeChain’s aim is to create the world’s most used enterprise-grade public blockchain. This is why they don’t consider currently used mechanisms such as PoW, PoS, and DPoS to be up to scratch. They have designed their own Proof of Authority (PoA) protocol for the governance needs of VeChainThor’s consensus, which completely eliminates the need for anonymous block producers.
The first step to taking part in the consensus mechanism is to qualify as an Authority Masternode (AM), aka a Thrudheim Masternode. In order to do this, individuals or entities must disclose their identity (and, by extension, their reputation) to the VeChain Foundation in exchange for the right to apply to validate and produce blocks. This is why every AM goes through a rigorous Know Your Customer (KYC) procedure to satisfy the Foundation’s minimum requirements.
So, why is this disclosure of real identity important?
VeChain believes that by keeping their real identities and reputation on the line, the AMs will be held accountable and incentivized to work even more for the betterment of the network as a whole.
VeChain also summarized the main characteristics of the PoA protocol implemented as:
- Low requirement of computational power;
- No requirement of communications between AMs to reach consensus;
- System continuity is independent of the number of available genuine AMs.
Team and Partnerships
Let’s take a look at the team behind VeChain. Sunny Lu is the co-founder and CEO. Throughout his career, he has been the IT and Information Systems head for several luxury brands, including Louis Vuitton, and it was during his time at Louis Vuitton that he discovered the problem of validating luxury goods.
However, when it comes to VeChain, something that is even more impressive than the team is the sheer number of high-value partnerships they have managed to gather.
Probably one of the most convincing aspects of VeChain is their partnership. VeChain has been working overtime on forming lucrative partnership deals. Their two most notable partnerships are PwC and DNV GL.
Being a partner of PwC allows VeChain to gain access to the company’s massive network of clients as well as “internal audit to ensure that all their products meet the correct compliance and more.”
The deal between VeChain and DNV GL is not completely clear due to an NDA; however, this much has been revealed: VeChain will be used for the internal tracking of oil, food, and more.
Having said that, these are still not the most impressive partnerships that VeChain has managed to snag up. Turns out that the Chinese government has chosen VeChain to be the blockchain technology partner of the government of Gui’an. When you consider how the Chinese government has come down on cryptocurrencies and ICOs, that is doubly impressive.
The Elastos ICO raised nearly $100 million 4 months ago. So, you might be wondering, isn’t it a little premature to add this project here right off the bat?
We don’t think so.
While it may be too soon to add this project here, the sheer star power of their advisory board deserves its inclusion in the list. We will get more into these advisory folks later. Before that, let’s get into Elastos and see what makes it special.
What is Elastos?
Elastos is a new blockchain with a decentralized peer-to-peer economic infrastructure that authenticates digital rights and turns digital information into assets. It is a next-generation blockchain that goes well beyond Ethereum and similar blockchain-based decentralized platforms.
According to the official Elastos website: “Blockchains are ideal for recording transactions but not for storing data. There is simply not enough space to store a large number of files, and the blockchain gets easily congested. To prevent overload, Elastos provides a flexible main chain and sidechain design structure. The main chain is in charge of necessary transactions and transfer payments, whereas the side chains execute smart contracts to support various decentralized applications and services. The Elastos operating system runs as a highly secure, flexible layer around the blockchain to free up more space. The Elastos operating system has been in development for over 18 years. Elastos targets decentralized applications that run on a peer-to-peer network with no centralized control. Elastos is an environment where both creators and users can safely trade digital assets.”
Elastos is a next-generation protocol, an advancement of Ethereum-style smart contracts to the next level. But unlike Ethereum, Elastos is not just a decentralized platform. Elastos is, in fact, an operating system, an environment to run large-scale decentralized applications (as big as a decentralized version of Facebook or Netflix). According to Rong Chen, the creator of Elastos, the Elastos operating system is very much like a country. The analogy might seem misleading, but once you get a solid understanding of Elastos, you will find it to be an understatement, as it takes blockchain technology to the next revolutionary step, almost creating a Digital Republic.
Why is Elastos Needed?
The creators of Elastos saw structural flaws in the foundation of the internet and its security, and their main objective was to build a better internet. Elastos is thus trying to rebuild the internet and become an “Internet of Wealth.” As a blockchain-based operating system, it provides a better way to run apps on the internet, and it primarily tackles the problem of digital content ownership, which has always been a flaw in the design of the internet. Elastos is open source, so anyone can build an app on Elastos just like anyone can build a website on the internet. It is built to run large-scale decentralized applications and is designed to be highly scalable and secure. DApps that run on the Elastos OS are not connected to the internet directly; Elastos acts as a buffer between the two and thus prevents security issues. There are already large-scale decentralized applications like Zappaya, Musicchain, Helix, and Ulink running on the Elastos operating system. Elastos bills itself as the smart web of the future.
Elastos plans to power the internet of tomorrow by taking the best bits from both Bitcoin and Ethereum, with upgrades of course. Thus, the Elastos ICO (ELA) looks like a compelling investment opportunity as of 2018. Bitcoin, the market leader in cryptocurrency, will be sharing its hash rate with Elastos by the end of the year through merged mining; ELA can thus be dual-mined with Bitcoin without increasing energy consumption. Elastos conducted a token sale in January with a hard cap of 2,500 Bitcoin for the distribution of 2,000,000 Elastos tokens (ELA). ELA currently has a price of $28.17 and a market cap of $144,746,587.
The Four Pillars of Elastos Ecosystem
The next thing you should know about is what are called the four pillars of Elastos’ ecosystem. The following content has been taken from the Elastos GitHub page.
Pillar #1: Blockchain and Smart Contracts
As the operating system’s trusted zone, the blockchain can implement “trust”. The Elastos main chain uses Bitcoin’s POW mechanism to ensure the reliability of data transmission through joint mining with Bitcoin. At the same time, Elastos provides services and extends third-party applications through its side chains.
Pillar #2: Elastos Carrier
Elastos Carrier is a completely decentralized P2P network service platform. For Elastos, it is an important support infrastructure for decentralized application development and operation. It is the Elastos P2P Network Platform part of the architecture diagram.
Pillar #3: Elastos Runtime
Elastos Runtime runs on the user’s equipment to achieve a “reliable runtime environment.” By developing Elastos DApps, independent developers can make use of digital assets such as digital audio and video playback. The VM guarantees digital assets will run under blockchain control, providing users with the ability to consume and invest in digital content.
Pillar #4: Elastos SDK
This is the traditional APP (i.e. Wechat, QQ, Taobao, and other mobile phone software). These APPs can extend their capabilities by introducing the Elastos SDK, gaining typical blockchain abilities like identity authentication and trusted records.
Token Utility of Elastos
- There is oversaturation of websites and domain names on the internet currently. In the Elastos internet, web DApps will need to pay for domain name services using ELA. These domains could be investment assets as well and can be resold later on.
- An Elastos user can get a unique handle which can be used throughout the ecosystem. This handle must be paid for using the native ELA tokens. These handles can also be investments which can be sold off later.
- There are over 10,000 unique movies and over 2,000 unique TV shows on Netflix alone. Imagine creating 10,000 copies of each of the movies content creators want to sell on the Elastos platform; they would need to pay in ELA to acquire UUIDs for them.
- Utilizing the storage service on the Elastos platform to store data will have to be paid for in ELA as well.
- DApps will be paying in ELA for the services they’ll be using, like domain name registrations, search engines, page rankings, acquisition of UUIDs for digital assets, etc. They’ll all be using ELA, which subsidizes bandwidth, IPFS, etc.
- ELA will be the main currency that will be used to reward developers for creating dapps on the Elastos platform.
- ELA holders are going to be airdropped with future DApp tokens built on Elastos ecosystem.
- Users can participate in token sale projects and products with ELA within Elastos.
- Apps built on Elastos can implement their own systems to process transactions using ELA/sELA.
- Those who decide to lock their ELA (minimum of 300) will earn interest of 4%, 5%, and 6% for up to 3 years (not compounded year-to-year). The lock-up period ended in February 2018, and people who bought in early have already decided to lock in the majority of their ELA for a certain rate of return on their investment in the years to come.
- As part of the development efforts, around 16 million ELA were exclusively reserved for the growth of the Elastos ecosystem. These will be given to community members who contribute to the Elastos Bounty Program. Anyone can contribute to the ever-growing ecosystem of Elastos, be it content creators, users, developers, testers, or leaders.
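For reference, the lock-up interest mentioned in the list above works out as simple (non-compounded) interest on the locked principal:

```python
# Simple (non-compounded) interest on locked ELA: 4%, 5% and 6% of the
# locked principal in years 1, 2 and 3 respectively.
RATES = [0.04, 0.05, 0.06]

def lockup_interest(ela_locked, years):
    if ela_locked < 300:
        raise ValueError("minimum lock-up is 300 ELA")
    return sum(ela_locked * r for r in RATES[:years])

assert abs(lockup_interest(300, 3) - 45.0) < 1e-6   # 12 + 15 + 18 ELA
assert abs(lockup_interest(1000, 1) - 40.0) < 1e-6
```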
Rong Chen (CEO): Rong Chen is the chairman of the Elastos Foundation. He was also the chairman and CEO of Kortide and a senior software engineer at Microsoft. He completed his Bachelor’s degree in Computer Science at Tsinghua University.
Feng Han (CFO): Feng Han is the co-founder and a board member of Elastos. He is an influential leader in the Chinese blockchain sector and was invited to speak at the CryptoCon 2018 conference in Chicago on February 15, 2018, where he was the sole Chinese blockchain representative among more than 50 conference speakers from around the world.
This is where we see the good stuff. Elastos’ advisory board is full of absolute superstars.
Jihan Wu: Jihan Wu is the chief advisor of Elastos. He also happens to be the CEO of Bitmain, the famous ASIC producer and the company behind Antpool, one of the biggest Bitcoin mining pools in the world. He was ranked one of the ten most influential figures in the blockchain world by CoinDesk in 2017. Wu lives in China and has an economics degree from Peking University.
Hongfei Da: Hongfei Da is an independent director of Elastos and the founder and CEO of NEO. We have already covered Neo in detail, so you know how big a deal this is. Hongfei Da and Jihan Wu of Bitmain are both early financiers of Elastos.
Joey Lee: He is a Lecturer at Teachers College, Columbia University & Director of Games Research Lab and an advisor of Elastos. He earned his Ph.D. from Penn State University in the field of science and technology and had been a Software Engineer at IBM.
Xuedong Gu: Gu, an advisor at the Elastos ICO, is a Professor at Tsinghua University and Director of International Relations at iCenter.
Ziheng Zhou: He is a Ph.D. holder from the University of Oulu, Department of Computer Science and Engineering and is an Advisor and member of the Academic Committee for Alibaba Research Institute and an advisor of Elastos ICO.
The fifth and last project that we want to introduce you to is Matrix.
Matrix is an open-source project that combines blockchain with AI. It allows its users to execute smart contracts in a way that is faster, easier, and safer. It also introduces an interesting new take on mining: in the usual PoW mining, miners just show off their computational power by solving pointlessly hard puzzles, whereas in Matrix, mining is done by solving mathematical problems applicable to the real world.
Along with this, the Matrix team is saying that the platform is faster (10,000 transactions per second), more scalable, and more secure against malicious attacks. So, let’s take a look into the Matrix and see if there is any substance behind the hype.
Smart Contract Auto-Generation
It is very rare for an entrepreneur to also be a developer, and it is extremely hard to translate a vision to a programmer, so entrepreneurs rarely get to properly execute their vision. Matrix combines blockchain with AI to work around this “vision bottleneck.” As the technical whitepaper states, “no programming expertise is needed any more for designing smart contracts. The unique code generation technique of MATRIX allows automatic conversion of an abstract description of a smart contract into an executable program.”
The only things Matrix needs from its users are the core elements of a contract (its input, output, and transaction conditions) specified in a scripting language. The code generator uses a deep neural network to automatically convert the script into an equivalent program.
Secured Smart Contracts
Because of the very design of blockchains and smart contracts, they can be vulnerable to a plethora of attacks:
- Smart contract programs may call functions offered by the host system and/or third-party libraries
- Programs running on different computers in a distributed framework do not provide any guarantee for execution time
Such openness and decentralization are obviously extremely desirable traits; however, as we said, they lead to loads of security problems.
The Matrix blockchain is equipped with a powerful AI security engine consisting of four major components:
- A semantic and syntactic analysis engine for smart contracts.
- A formal verification toolkit that helps prove the security properties of smart contracts.
- An AI-based detection engine for transaction model identification and security checking.
- A deep-learning-based platform for dynamic security verification and enhancement.
Most of the main cryptocurrencies suffer from acute transaction-throughput issues. As we said earlier, since a transaction needs to be broadcast to all nodes in a network, the overall latency increases as more nodes join the network. Matrix resolves this issue by dynamically selecting a delegation network in which the participating nodes are voted in as delegates of the others.
According to the whitepaper, “All Proof-of-Work (PoW) processing is only allocated inside the delegation network, which only incurs a much smaller latency due to the smaller number of nodes. The selection process is random in the sense that a node is selected with a probability proportional to its Proof-of-Stake (PoS).”
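The stake-proportional selection described in that quote can be sketched as follows; the function and parameter names here are our own illustration, not Matrix's actual protocol code:

```python
import random

# Hypothetical stake-proportional delegate selection: each node's chance
# of joining the delegation network is proportional to its stake.
def select_delegates(stakes, k, seed=0):
    rng = random.Random(seed)
    nodes = list(stakes)
    weights = list(stakes.values())
    chosen = set()
    while len(chosen) < k:  # draw until k distinct delegates are picked
        chosen.add(rng.choices(nodes, weights=weights)[0])
    return sorted(chosen)

delegates = select_delegates({"a": 10.0, "b": 1.0, "c": 5.0, "d": 0.5}, k=2)
assert len(delegates) == 2  # node "a", with the largest stake, is most likely in
```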
The live version of MATRIX should be able to support a throughput of 100,000 TPS.
Flexible Blockchain Management
One of the most helpful design qualities of Matrix is that it is an extremely flexible blockchain. This “flexibility” has been achieved via two methods:
- By offering access controls and routing services so that it can allow seamless integration of the private chains into common public chains. This property is extremely useful for industry and government players for authorizations. At the same time, this allows proper information flow from a public chain to a private chain and vice versa.
- Secondly, Matrix uses a reinforcement learning framework to optimize various parameters such as consensus mechanisms and transaction configuration. This allows parameters to be dynamically upgraded for near-optimal performance without the risk of incurring a hard fork.
Value Adding Mining
We have touched on Matrix’s mining protocol before. Miners in Matrix perform Markov Chain Monte Carlo (MCMC) computation, an essential tool for Bayesian reasoning. MCMC plays a key role in numerous big-data applications such as gene regulatory networks, clinical diagnosis, video analytics, and structural modeling.
Because of this, a distributed network of MCMC computing nodes provides the power of solving real-world compute-intensive problems and thus build a bridge between the values in the physical and virtual worlds.
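For readers unfamiliar with MCMC, here is a minimal Metropolis sampler, the textbook MCMC algorithm (illustrative only; the whitepaper does not specify Matrix's actual mining workloads at this level of detail):

```python
import math
import random

# Minimal Metropolis sampler targeting a standard normal distribution.
def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    log_p = lambda x: -0.5 * x * x  # unnormalised log-density of N(0, 1)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)
        # Accept with probability min(1, p(proposal) / p(x)).
        if math.log(rng.random()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20_000)
mean = sum(samples) / len(samples)  # drifts toward the target mean of 0
```

Each step is a hash-free but genuinely compute-intensive numerical task, which is the kind of work Matrix proposes to substitute for puzzle-solving.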
Comparison With Other Smart Contract Platforms
The following diagram, tweeted out by the Matrix team, compares Matrix with other smart contract platforms:
This guide was made to introduce you to the top 5 Chinese blockchain projects. We hope that you have gained immense value from it!