XLN stands for eXchange Liquidity Network; it is an OpenDAX network of peer exchanges.
XLN aggregates liquidity and price feeds from a large number of sources and shares them with connected users.
The next generation of XLN will extend this principle to decentralized platforms and take the market to the next level.
Blockchains validate transactions on so-called layer-1. The higher the transaction load on the blockchain, the more expensive transactions become.
Layer-2 scaling solutions allow transactions to be performed faster and cheaper. The state-channels layer-2 solution does not require node validation for every transaction. A state channel comprises a set of open-source protocols, smart contracts, interfaces, and software that enable blockchain applications to run "off-chain" on state channel networks. The layer-1 blockchain validates only the final state after multiple transactions.
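The state-channel idea can be illustrated with a minimal sketch (all names here are illustrative, not part of any real XLN API): parties exchange many off-chain state updates, and only the final state is submitted to layer-1 for validation.

```python
# Illustrative sketch of a state channel: many off-chain updates,
# one final on-chain settlement. Signatures and disputes are omitted.

class StateChannel:
    def __init__(self, balances):
        self.balances = dict(balances)  # off-chain balances per party
        self.nonce = 0                  # monotonically increasing state version

    def update(self, sender, receiver, amount):
        """Apply one off-chain transfer; no blockchain validation needed."""
        assert self.balances[sender] >= amount, "insufficient balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.nonce += 1

    def final_state(self):
        """Only this final snapshot would be settled on layer-1."""
        return {"nonce": self.nonce, "balances": dict(self.balances)}

channel = StateChannel({"alice": 100, "bob": 50})
for _ in range(3):
    channel.update("alice", "bob", 10)   # three off-chain transactions
settlement = channel.final_state()       # one on-chain settlement
```

The point of the sketch is the cost model: three transfers happen off-chain for free, and only `settlement` would ever incur a blockchain fee.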
XLN is a step forward for such scaling solutions; we could call it "layer-3". It will connect multiple exchange platforms into a broad network that provides price feeds and shares liquidity.
The network will allow high-frequency trading and will be accessible to multiple layer-1 blockchains through state channels.
XLN provides a protocol for connecting different exchange platforms to a network.
Applications built on XLN will get a high-frequency matching engine that utilizes the entire network's liquidity, along with the corresponding price feed.
End-users of connected platforms will get a fast, decentralized P2P exchange, and this DEX will embrace liquidity from the entire network.
- a network of opted-in nodes and connected external DEX platforms
- an oracle-like price feed, similar to a forex brokerage FIX protocol feed
- a P2P network with aggregated liquidity and DEX-based censorship protection
- a network of layer-2 state channels with very high-speed trading and settlement
- layer-2 consensus-free transactions that do not require blockchain fees
- an upper layer for blockchain operations
XLN will aggregate price and liquidity data from multiple decentralized sources and provide cheaper and faster services, thus making centralized exchanges obsolete.
Sharding is a method for distributing a single dataset (hereafter, the Orderbook) across multiple databases (hereafter, Shards), which can then be stored on multiple machines. This allows larger Orderbooks to be split into smaller chunks and stored on multiple data nodes, increasing the total storage capacity of the system.
Similarly, by distributing the data across multiple machines, a sharded Orderbook can handle more requests than a single machine can.
Sharding is a form of scaling known as horizontal scaling or scale-out, as additional nodes are brought on to share the load. Horizontal scaling allows for near-limitless scalability to handle big data and intense workloads. In contrast, vertical scaling refers to increasing the power of a single machine or single server through a more powerful CPU, increased RAM, or increased storage capacity.
Sharding allows you to scale your Node's Orderbook to handle increased load to a nearly unlimited degree by providing increased read/write throughput, storage capacity, and high availability. Let’s look at each of those in a little more detail.
- Increased Read/Write Throughput — By distributing the dataset across multiple shards, both read and write operation capacity is increased as long as read and write operations are confined to a single shard.
- Increased Storage Capacity — Similarly, by increasing the number of shards, you can also increase overall total storage capacity, allowing near-infinite scalability.
- High Availability — Finally, shards provide high availability in two ways. First, since each shard is a replica set, every piece of data is replicated. Second, since the data is distributed, even if an entire shard becomes unavailable, the database as a whole remains partially functional, with parts of the schema served from different shards.
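The single-shard confinement mentioned above is usually achieved with a deterministic shard key. A minimal sketch (names are hypothetical, not part of Finex): each order ID hashes to exactly one shard, so every node routes reads and writes for that order to the same place.

```python
# Illustrative hash-based sharding of an orderbook. Each dict below
# stands in for a separate data node holding one shard.

import hashlib

NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(order_id: str) -> int:
    """Stable hash so every node maps the same order to the same shard."""
    digest = hashlib.sha256(order_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def put_order(order_id: str, order: dict) -> None:
    shards[shard_for(order_id)][order_id] = order

def get_order(order_id: str) -> dict:
    return shards[shard_for(order_id)][order_id]

put_order("ord-1", {"market": "btcusd", "side": "buy", "price": 50_000})
```

Because `shard_for` is a pure function of the order ID, throughput scales by adding shards: each operation touches one node, and no node needs a global index.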
Yellow uses an Ambient Peer Discovery process in which a peer starts seeking peers by contacting a bootstrap node, and the different exchanges in the network connect to each other according to the markets (topics) they are interested in. This is done with distributed hash tables.
Using the Gossip protocol, each peer is also connected to exchanges with different markets and is ready to participate in other trades.
Peers in the network can easily subscribe to and unsubscribe from a topic, representing a classical publish/subscribe system.
Every node in the network must stay synchronized with the state of the entire network from startup onward.
It is worth noting that each Finex node still works as a local matching engine and can operate without connecting to the peer-to-peer network.
On startup, a Finex node loads orders from its database, publishes its topics, and reaches peer exchanges at "rendezvous points", i.e. points where markets match. After this, the node continually broadcasts the market's orderbook to the network and continually receives orderbook snapshots from peers.
Each node market is composed of a P2P topic, a local orderbook, and a network orderbook. Whenever a market starts, the node subscribes to the topic via the market ID and starts receiving and broadcasting all messages on the topic. So when a user opens or cancels an order, the corresponding message is published to the P2P network.
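The local/network orderbook split can be sketched as follows (data shapes are hypothetical): the node keeps its own book and merges in snapshots received on the topic, so the view it exposes reflects network-wide liquidity.

```python
# Illustrative merge of a node's local orderbook with snapshots
# received from peers on the market topic. Entries are (price, amount).

local_book = {"bids": [(50_000, 1.0)], "asks": [(50_100, 0.5)]}
network_snapshots = [
    {"bids": [(50_050, 2.0)], "asks": [(50_080, 1.5)]},  # from a peer exchange
]

def merged_book(local, snapshots):
    """Combine local and peer liquidity into one network orderbook."""
    bids, asks = list(local["bids"]), list(local["asks"])
    for snap in snapshots:
        bids += snap["bids"]
        asks += snap["asks"]
    # best (highest) bid first, best (lowest) ask first
    return {"bids": sorted(bids, reverse=True), "asks": sorted(asks)}

book = merged_book(local_book, network_snapshots)
```

Here the best bid and best ask both come from the peer, not the local book, which is exactly the benefit the network orderbook provides.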
Order updates are sent via the following layers:
- Local WebSocket API.
- Local P2P Gossipsub adapter.
- P2P Gossipsub Mesh.
- Remote Gossipsub adapter.
- Remote WebSocket API.
Matched orders are confirmed using RPC calls to the origin node of a given order, so the order status stays up to date and is protected from double matching.
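The double-matching protection comes down to the origin node being the single authority over its order's status. A hedged sketch of that idea (method and field names are invented for illustration): the confirmation succeeds at most once per order, so only the first matcher wins.

```python
# Illustrative confirm-at-origin check: the order's origin node
# atomically flips the status, rejecting any second match attempt.

class OriginNode:
    def __init__(self):
        self.orders = {"ord-1": "open"}   # order_id -> status

    def confirm_match(self, order_id):
        """RPC handler on the origin node; succeeds at most once."""
        if self.orders.get(order_id) != "open":
            return False                  # already matched or cancelled
        self.orders[order_id] = "matched"
        return True

origin = OriginNode()
first = origin.confirm_match("ord-1")    # first matcher wins
second = origin.confirm_match("ord-1")   # concurrent double match is rejected
```

Since every remote match must pass through this call before settling, two peers matching the same order at the same time cannot both succeed.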
The system is fully decentralized: all peers participate in delivering messages throughout the network.
- Reliability: All messages get delivered to all peers subscribed to the topic.
- Speed: Messages are delivered remarkably fast.
- Efficiency: The network is not flooded with excess copies of messages.
- Resilience: There is no single point of failure; peers can join and leave the network without disrupting it.
- Scale: Topics can have enormous numbers of subscribers and handle a large throughput of messages.
- Simplicity: The system is simple to understand and implement. Each peer needs to remember only a small amount of state.