Contents
  1. TiDB (NewSQL) tutorial
  2. Newest Documents
  3. Features
  4. Architecture
  5. Software Installation
  6. Make a TiDB Cluster
    6.1. Host lists
    6.2. Start PD cluster
    6.3. Start TiKV Cluster
    6.4. Start TiDB Server
  7. Summary

TiDB (NewSQL) tutorial

TiDB is a distributed SQL database inspired by the design of Google F1.
TiDB supports the best features of both traditional RDBMS and NoSQL.
TiKV is a distributed key-value database powered by Rust and Raft.

Note:

TiDB is a distributed SQL database from PingCAP. It has many excellent features and is fully open source.
It is not yet production-ready, but a GA release should come soon. In this article we do an initial
installation and deployment of TiDB and test some of its features. We hope it helps readers who want an
early look at TiDB and saves you from some pitfalls.


Newest Documents

https://download.pingcap.org/tidb-ansible-doc-cn-1.0-dev.pdf 
https://github.com/pingcap/docs-cn/blob/master/op-guide/binary-deployment.md
https://github.com/pingcap/docs-cn/blob/master/op-guide/recommendation.md
https://github.com/pingcap/docs-cn/blob/master/op-guide/migration.md
https://github.com/pingcap/docs-cn/blob/master/op-guide/tune-tikv.md

Features

+ Horizontal scalability
+ Asynchronous schema changes
+ Consistent distributed transactions
+ Compatible with MySQL protocol
+ Multiple storage engine support
+ NewSQL over TiKV
+ Written in Go

Architecture

  • Cluster level of TiDB:
    • A TiDB cluster consists of TiDB servers, TiKV storage, and a PD cluster
    • TiDB is the SQL layer and is compatible with the MySQL protocol
    • TiKV is the storage layer, where the data actually lives
    • PD manages and schedules the TiKV cluster and holds the cluster metadata
      (Figure: TiDB architecture)
  • TiKV
    • A TiKV cluster consists of several nodes
    • Each node runs one RocksDB instance
    • Data is stored in regions; the default region size is 64 MB (configurable)
    • Data consistency is guaranteed by the Raft consensus algorithm; each region is replicated as a Raft group with 3 (configurable) replicas
    • TiKV uses a Multi-Raft protocol

(Figure: tikv-server software stack)
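As a back-of-the-envelope illustration of the region sizing above (the 64 MB default is from the notes; the dataset size and replica count here are just example numbers):

```shell
# How many 64 MB regions a dataset splits into, and how many Raft
# replicas that means with 3 replicas per region. Illustration only.
region_size_mb=64
replicas_per_region=3
data_size_mb=1000

# Ceiling division: a 1000 MB dataset needs 16 regions of 64 MB.
regions=$(( (data_size_mb + region_size_mb - 1) / region_size_mb ))
replica_count=$(( regions * replicas_per_region ))
echo "$data_size_mb MB -> $regions regions, $replica_count replicas"
```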

Software Installation

  • TiDB and PD are developed in Go

    ## Install Go 1.7.x first
    ## TiDB
    git clone https://github.com/pingcap/tidb.git $GOPATH/src/github.com/pingcap/tidb
    cd $GOPATH/src/github.com/pingcap/tidb
    make
    ## PD
    git clone https://github.com/pingcap/pd.git $GOPATH/src/github.com/pingcap/pd
    cd $GOPATH/src/github.com/pingcap/pd
    make build
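A quick sanity check after the two builds (our addition; the bin/ output paths are what the projects' Makefiles used at the time, so verify them against your checkout):

```shell
# Confirm the builds produced the server binaries; warn instead of failing.
check_bin() {
  if [ -x "$1" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}

check_bin "$GOPATH/src/github.com/pingcap/tidb/bin/tidb-server"
check_bin "$GOPATH/src/github.com/pingcap/pd/bin/pd-server"
```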
    
  • RocksDB
    RocksDB must be installed before building TiKV, otherwise the TiKV build will fail.
    Install it following the RocksDB documentation:

    wget source_url()
    # install gflags, snappy, and lz4
    yum install -y snappy-devel zlib-devel bzip2-devel lz4-devel
    make shared_lib
    make install-shared
    # make sure tikv can find rocksdb
    ldconfig
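To confirm the shared library is now visible to the dynamic linker (a verification step we added; `has_lib` is our helper):

```shell
# has_lib: check the linker cache for a library name.
has_lib() { ldconfig -p | grep -q "$1"; }

if has_lib librocksdb; then
  echo "librocksdb found"
else
  echo "librocksdb not found; re-run make install-shared and ldconfig" >&2
fi
```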
    

    Caution:

    • RocksDB version >= 4.12 is required; the master branch works
    • GCC 4.8 works well; 4.7 may fail with an error like:

      see also: C++11 error ‘yield’ is not a member of ‘std::this_thread’

    • To install GCC 4.8 on CentOS 7.x:

      # install GCC 4.8 via devtoolset-2
      wget http://people.centos.org/tru/devtools-2/devtools-2.repo -O /etc/yum.repos.d/devtools-2.repo
      yum install devtoolset-2-gcc devtoolset-2-binutils devtoolset-2-gcc-c++
      scl enable devtoolset-2 bash
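To check up front whether the installed GCC is new enough, before starting a long build (a helper of our own, using sort -V for version ordering):

```shell
# version_ge A B: true if version A >= version B (GNU sort -V ordering).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

ver=$(gcc -dumpversion 2>/dev/null || echo 0)
if version_ge "$ver" 4.8; then
  echo "GCC $ver is new enough"
else
  echo "GCC $ver is too old; enable devtoolset-2 first" >&2
fi
```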
      
  • TiKV is developed in Rust

    ## Install Rust (nightly)
    curl -sSf https://static.rust-lang.org/rustup.sh | sh -s -- --channel=nightly
    ## or download the tarball
    ## download the source code
    git clone https://github.com/pingcap/tikv.git /root/tikv
    cd /root/tikv
    ## build
    make && make install


    Caution:

    If the build fails on the first try, run cargo clean before building again.
    

Make a TiDB Cluster

Now that the required software is installed, we will build a cluster with three nodes.

According to the cluster deployment document, the start-up sequence is PD -> TiKV -> TiDB.

Host lists

xx0.x2x.8x.64 
xx0.x2x.8x.65
xx0.x2x.8x.66
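The same --initial-cluster string is repeated verbatim in every pd-server command below, so it is easy to mistype; it can be generated from the host list instead (the variable and function names here are ours):

```shell
# Build the PD --initial-cluster value "pd1=http://HOST1:2380,..." from the host list.
hosts="xx0.x2x.8x.64 xx0.x2x.8x.65 xx0.x2x.8x.66"

build_initial_cluster() {
  out=""
  i=1
  for h in $1; do
    out="${out:+$out,}pd$i=http://$h:2380"
    i=$((i + 1))
  done
  echo "$out"
}

initial_cluster=$(build_initial_cluster "$hosts")
echo "$initial_cluster"
```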

Start PD cluster

./pd-server --cluster-id=1 \
--name=pd1 \
--data-dir=/log/tidb/pd \
--client-urls="http://xx0.x2x.8x.64:2379" \
--peer-urls="http://xx0.x2x.8x.64:2380" \
--initial-cluster="pd1=http://xx0.x2x.8x.64:2380,pd2=http://xx0.x2x.8x.65:2380,pd3=http://xx0.x2x.8x.66:2380"
#
./pd-server --cluster-id=1 \
--name=pd2 \
--data-dir=/log/tidb/pd \
--client-urls="http://xx0.x2x.8x.65:2379" \
--peer-urls="http://xx0.x2x.8x.65:2380" \
--initial-cluster="pd1=http://xx0.x2x.8x.64:2380,pd2=http://xx0.x2x.8x.65:2380,pd3=http://xx0.x2x.8x.66:2380"
#
./pd-server --cluster-id=1 \
--name=pd3 \
--data-dir=/log/tidb/pd \
--client-urls="http://xx0.x2x.8x.66:2379" \
--peer-urls="http://xx0.x2x.8x.66:2380" \
--initial-cluster="pd1=http://xx0.x2x.8x.64:2380,pd2=http://xx0.x2x.8x.65:2380,pd3=http://xx0.x2x.8x.66:2380"
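Once all three pd-server processes are up, you can check membership over the client port. The /pd/api/v1/members endpoint exists in later PD releases; whether this early build serves it is an assumption, so treat this as a sketch:

```shell
# Ask each PD node for its member list; a short timeout keeps failures quick.
pd_members_url() { echo "http://$1:2379/pd/api/v1/members"; }

for h in xx0.x2x.8x.64 xx0.x2x.8x.65 xx0.x2x.8x.66; do
  echo "== $h =="
  curl -s -m 2 "$(pd_members_url "$h")" || echo "pd on $h not reachable"
done
```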

Start TiKV Cluster

## execute the following command on every node
./tikv-server --config /root/tikv/etc/tikv.toml --log-file=/tmp/tikv.log &
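After starting TiKV on each node, it is worth confirming the process stayed up and skimming the log; log_errors is a helper of our own:

```shell
# Print error lines from a TiKV log, or report that it is clean.
log_errors() {
  grep -i "error" "$1" 2>/dev/null || echo "no error lines in $1"
}

pgrep -f tikv-server >/dev/null && echo "tikv-server is running" \
  || echo "tikv-server is not running on this host"
log_errors /tmp/tikv.log
```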

Start TiDB Server

Use the following command to start one TiDB server:

## execute
./tidb-server --store=tikv \
        --path="xx0.x2x.8x.64:2379,xx0.x2x.8x.65:2379,xx0.x2x.8x.66:2379?cluster=1" \
        --log-file=/tmp/tidb.log &

You can start one TiDB server on every node.

Now you can use TiDB just like MySQL:

mysql -h xx0.x2x.8x.64 -P 4000 -u root -D test
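A small end-to-end smoke test through the MySQL protocol (the table name and statements are our own example; the host and port follow the connection command above):

```shell
# Write a tiny SQL script, then run it against the cluster if a mysql client is present.
cat > /tmp/tidb_smoke.sql <<'EOF'
CREATE TABLE IF NOT EXISTS smoke (id INT PRIMARY KEY, note VARCHAR(32));
INSERT INTO smoke VALUES (1, 'hello tidb');
SELECT * FROM smoke;
DROP TABLE smoke;
EOF

if command -v mysql >/dev/null; then
  mysql -h xx0.x2x.8x.64 -P 4000 -u root -D test < /tmp/tidb_smoke.sql \
    || echo "could not reach the TiDB server"
else
  echo "mysql client not installed on this host"
fi
```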

Summary

We ran into some problems and bugs along the way, and filed issues for them. We will write more after further testing and evaluation.