
TiKV is a distributed Key-Value database powered by Rust and Raft


TiKV (pronounced /'taɪkeɪvi:/, tai-K-V, etymology: titanium) is a distributed Key-Value database based on the designs of Google Spanner and HBase, but much simpler, with no dependency on any distributed file system. TiKV guarantees data consistency through a Rust implementation of the Raft consensus algorithm, with the consensus state stored in RocksDB. The Placement Driver, introduced to implement sharding, enables automatic data migration. The transaction model is similar to Google's Percolator, with some performance improvements. TiKV also provides snapshot isolation (SI), snapshot isolation with lock (SQL: `SELECT ... FOR UPDATE`), and externally consistent reads and writes in distributed transactions. See the TiKV-server software stack section for more information. TiKV has the following primary features:

  • Geo-Replication TiKV uses Raft and Placement Driver to support Geo-Replication.

  • Horizontal scalability With Placement Driver and carefully designed Raft groups, TiKV excels in horizontal scalability and can easily scale to 100+ TBs of data.

  • Consistent distributed transactions Similar to Google's Spanner, TiKV supports externally-consistent distributed transactions.

  • Coprocessor support Similar to HBase, TiKV implements the coprocessor framework to support distributed computing.

  • Working with TiDB Thanks to internal optimizations, TiKV and TiDB work together as a database system that excels at horizontal scalability, supports externally consistent transactions, and serves both traditional RDBMS and NoSQL workloads.

Required Rust version

Rust Nightly is required.

TiKV-server software stack

This figure illustrates the TiKV-server software stack.

image

  • Placement Driver: Placement Driver (PD) is the cluster manager of TiKV. PD periodically checks replication constraints to balance load and data automatically.
  • Store: There is a RocksDB instance within each Store, and it stores data on the local disk.
  • Region: Region is the basic unit of Key-Value data movement. Each Region is replicated to multiple Nodes. These replicas form a Raft group.
  • Node: A physical node in the cluster. Within each Node, there are one or more Stores. Within each Store, there are many Regions.

When a node starts, the metadata of the Node, Store, and Region is registered with PD. The status of each Region and Store is reported to PD regularly.

Build

TiKV is a component of the TiDB project; you must build and run it together with TiDB and PD.

If you want to use TiDB in production, see deployment build guide to build the TiDB project first.

If you want to dive into TiDB, see development build guide on how to build the TiDB project.

Next steps

Contributing

See CONTRIBUTING for details on submitting patches and the contribution workflow.

License

TiKV is under the Apache 2.0 license. See the LICENSE file for details.

Acknowledgments

  • Thanks to etcd for providing some great open source tools.
  • Thanks to RocksDB for its powerful storage engine.
  • Thanks to mio for providing a metal I/O library for Rust.
  • Thanks to rust-clippy. We love this great project.