Install Dependencies
- Install Go
- Install MySQL 5.6 or Percona Server
- Install the following other tools needed to build and run Vitess:
- make
- automake
- libtool
- memcached
- python-dev
- python-virtualenv
- python-mysqldb
- libssl-dev
- g++
- mercurial
- git
- pkg-config
- bison
- curl
- unzip
These can be installed with the following apt-get command:
sudo apt-get install make automake libtool memcached python-dev python-virtualenv python-mysqldb libssl-dev g++ mercurial git pkg-config bison curl unzip
- Install the JDK
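As a quick sanity check before building, a short shell loop can confirm that the listed tools are on your PATH. This is only a sketch; extend the tool list to cover the rest of the packages above.

```shell
# Sanity check (sketch): report which of the required build tools are
# missing from PATH. Extend the list with the other packages above.
missing=""
for tool in make automake libtool git curl unzip; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "all listed tools found"
else
  echo "missing:$missing"
fi
```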
Build Vitess
- Set the WORKSPACE environment variable:
export WORKSPACE=/opt/vt
- Clone Vitess
cd $WORKSPACE
git clone https://github.com/youtube/vitess.git src/github.com/youtube/vitess
cd src/github.com/youtube/vitess
- Set the MYSQL_FLAVOR environment variable.
export MYSQL_FLAVOR=MySQL56
- Run mysql_config --version and confirm that you are running the correct version of MariaDB or MySQL. The value should be 5.6.x for MySQL.
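The version check can be scripted; below is a minimal sketch of the logic. The hard-coded sample value is an assumption purely for illustration — in practice capture the real value with `version=$(mysql_config --version)`.

```shell
# Sketch: validate the version string reported by mysql_config.
# The sample value is an assumption; in practice use:
#   version=$(mysql_config --version)
version="5.6.31"
case "$version" in
  5.6.*) status="ok" ;;
  *)     status="unexpected" ;;
esac
echo "mysql_config reports $version ($status)"
```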
- Build Vitess by running ./bootstrap.sh.
- If an error occurs while downloading and extracting the ZooKeeper archive, you can download the archive manually, copy it into $VTROOT/dist, and comment out the following line (line 38 of bootstrap.sh):
wget http://apache.cs.utah.edu/zookeeper/zookeeper-$zk_ver/zookeeper-$zk_ver.tar.gz && \
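The manual workaround can be sketched as follows. The version number is an assumption — use whatever $zk_ver your bootstrap.sh sets — and the network steps are left commented out.

```shell
# Sketch: build the same URL bootstrap.sh uses, fetch the archive by
# hand, and drop it into $VTROOT/dist. zk_ver=3.4.6 is an assumption;
# match the value set in your bootstrap.sh.
zk_ver=3.4.6
zk_tarball="zookeeper-$zk_ver.tar.gz"
zk_url="http://apache.cs.utah.edu/zookeeper/zookeeper-$zk_ver/$zk_tarball"
echo "fetch $zk_url and copy $zk_tarball into \$VTROOT/dist"
# Manual steps (network access required):
#   wget "$zk_url"
#   cp "$zk_tarball" "$VTROOT/dist/"
```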
- If the installation succeeds, you should see the following at the end of the bootstrap.sh output:
skipping zookeeper build
go install golang.org/x/tools/cmd/cover …
Found MariaDB installation in …
skipping bson python build
creating git pre-commit hooks
source dev.env in your shell before building
- If the previous step is unsuccessful and the error occurs while pip installs a Python package, the following steps work around the problem:
* Because gRPC installs a new Python instance in its own path, gRPC, protobuf, and ZooKeeper should instead be installed under the /usr/ directory. To do this, perform the following steps before rerunning bootstrap.sh:
- Set a password for the root user
sudo passwd root
- Change user to root
su root
- Set the /usr/ directory permissions to 777
chmod -R 777 /usr/
- Change the sudo file permissions
a. chown root:root /usr/bin/sudo
b. chmod 4755 /usr/bin/sudo
c. mv -iv /etc/sudo.conf /etc/sudo.conf.old
- Copy the new files into the Vitess directory, replacing the existing ones, and run bootstrap.sh again
- With this workaround, Vitess uses the server's Python, and all library files are installed under the /usr/ path
- After bootstrap.sh completes successfully, build Vitess with:
make build
- If you encounter problems with Go downloads from go.googlesource.com, download the following packages from these links and extract each one into the given path:
1. Package: net
Download: https://go.googlesource.com/net/+archive/master.tar.gz
Destination: $GOROOT/src/golang.org/x/net
2. Package: oauth2
Download: https://go.googlesource.com/oauth2/+archive/master.tar.gz
Destination: $GOROOT/src/golang.org/x/oauth2
3. Package: tools
Download: https://go.googlesource.com/tools/+archive/master.tar.gz
Destination: $GOROOT/src/golang.org/x/tools
4. Package: gocloud
Download: https://code.googlesource.com/gocloud/+archive/master.tar.gz
Destination: $GOROOT/src/google.golang.org/cloud
5. Package: google-api-go-client
Download: https://code.googlesource.com/google-api-go-client/+archive/master.tar.gz
Destination: $GOROOT/src/google.golang.org/api
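The five downloads above all follow the same pattern, so they can be scripted. The sketch below only creates the destination directories and prints what would happen; the actual download/extract step is commented out, and the /tmp/goroot fallback is purely for illustration — use your real $GOROOT.

```shell
# Sketch: create each destination directory and show which archive
# belongs there. The curl|tar step is commented out; /tmp/goroot is a
# stand-in used only when $GOROOT is unset.
goroot="${GOROOT:-/tmp/goroot}"

fetch_pkg() {
  url="$1"; dest="$2"
  mkdir -p "$dest"
  # Real step (network access required):
  #   curl -L "$url" | tar -xz -C "$dest"
  echo "extract $url into $dest"
}

fetch_pkg "https://go.googlesource.com/net/+archive/master.tar.gz" "$goroot/src/golang.org/x/net"
fetch_pkg "https://go.googlesource.com/oauth2/+archive/master.tar.gz" "$goroot/src/golang.org/x/oauth2"
fetch_pkg "https://go.googlesource.com/tools/+archive/master.tar.gz" "$goroot/src/golang.org/x/tools"
fetch_pkg "https://code.googlesource.com/gocloud/+archive/master.tar.gz" "$goroot/src/google.golang.org/cloud"
fetch_pkg "https://code.googlesource.com/google-api-go-client/+archive/master.tar.gz" "$goroot/src/google.golang.org/api"
```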
Run Tests
- After building Vitess successfully, if you only want to check that Vitess works in your environment, you can run a lighter set of tests:
make site_test
Start a Vitess cluster
- Set VTROOT to the parent of the Vitess source tree. For example, if you ran make build while in /opt/vt/src/github.com/youtube/vitess, then you should set:
export VTROOT=/opt/vt
- Set VTDATAROOT to the directory where you want data files and logs to be stored. For example:
export VTDATAROOT=$VTROOT/vtdataroot
- Servers in a Vitess cluster find each other by looking for dynamic configuration data stored in a distributed lock service. The following script creates a small ZooKeeper cluster:
cd $VTROOT/src/github.com/youtube/vitess/examples/local
vitess/examples/local$ ./zk-up.sh
example output:
Starting zk servers
Waiting for zk servers to be ready
After the ZooKeeper cluster is running, we only need to tell each Vitess process how to connect to ZooKeeper. Then, each process can find all of the other Vitess processes by coordinating via ZooKeeper.
Each of our scripts automatically sets the ZK_CLIENT_CONFIG environment variable to point to the zk-client-conf.json file, which contains the ZooKeeper server addresses for each cell.
- The vtctld server provides a web interface that displays all of the coordination information stored in ZooKeeper.
vitess/examples/local$ ./vtctld-up.sh
Starting vtctld
Access vtctld web UI at http://localhost:15000
Send commands with: vtctlclient -server localhost:15999
Open http://localhost:15000 to verify that vtctld is running.
There won’t be any information there yet, but the menu should come up, which indicates that vtctld is running.
The vtctld server also accepts commands from the vtctlclient tool, which is used to administer the cluster. Note that the port for RPCs (in this case 15999) is different from the web UI port (15000). These ports can be configured with command-line flags, as demonstrated in vtctld-up.sh.
List available commands
$VTROOT/bin/vtctlclient -server localhost:15999 Help
- The vttablet-up.sh script brings up three vttablets, and assigns them to a keyspace and shard according to the variables set at the top of the script file.
vitess/examples/local$ ./vttablet-up.sh
Output from vttablet-up.sh is below:
Starting MySQL for tablet test-0000000100…
Starting vttablet for test-0000000100…
Access tablet test-0000000100 at http://localhost:15100/debug/status
Starting MySQL for tablet test-0000000101…
Starting vttablet for test-0000000101…
Access tablet test-0000000101 at http://localhost:15101/debug/status
Starting MySQL for tablet test-0000000102…
Starting vttablet for test-0000000102…
Access tablet test-0000000102 at http://localhost:15102/debug/status
After this command completes, refresh the vtctld web UI, and you should see a keyspace named test_keyspace with a single shard named 0. This is what an unsharded keyspace looks like.
If you click on the shard box, you’ll see a list of tablets in that shard. Note that it’s normal for the tablets to be unhealthy at this point, since you haven’t initialized them yet.
- By launching tablets assigned to a nonexistent keyspace, we’ve essentially created a new keyspace. To complete the initialization of the local topology data, perform a keyspace rebuild:
$VTROOT/bin/vtctlclient -server localhost:15999 RebuildKeyspaceGraph test_keyspace
- Next, designate one of the tablets to be the initial master. Vitess will automatically connect the other slaves’ mysqld instances so that they start replicating from the master’s mysqld. This is also when the default database is created. Since our keyspace is named test_keyspace, the MySQL database will be named vt_test_keyspace.
$VTROOT/bin/vtctlclient -server localhost:15999 InitShardMaster -force test_keyspace/0 test-0000000100
example output:
master-elect tablet test-0000000100 is not the shard master, proceeding anyway as -force was used
master-elect tablet test-0000000100 is not a master in the shard, proceeding anyway as -force was used
Note: Since this is the first time the shard has been started, the tablets are not already doing any replication, and there is no existing master. The InitShardMaster command above uses the -force flag to bypass the usual sanity checks that would apply if this wasn’t a brand new shard.
After running this command, go back to the Shard Status page in the vtctld web interface. When you refresh the page, you should see that one vttablet is the master and the other two are replicas.
You can also see this on the command line:
$VTROOT/bin/vtctlclient -server localhost:15999 ListAllTablets test
example output:
test-0000000100 test_keyspace 0 master localhost:15100 localhost:33100 []
test-0000000101 test_keyspace 0 replica localhost:15101 localhost:33101 []
test-0000000102 test_keyspace 0 replica localhost:15102 localhost:33102 []
- The vtctlclient tool can be used to apply the database schema across all tablets in a keyspace. The following command creates the table defined in the create_test_table.sql file:
Make sure to run this from the examples/local dir, so it finds the file.
vitess/examples/local$ $VTROOT/bin/vtctlclient -server localhost:15999 ApplySchema -sql "$(cat create_test_table.sql)" test_keyspace
- Vitess uses vtgate to route each client query to the correct vttablet. This local example runs a single vtgate instance, though a real deployment would likely run multiple vtgate instances to share the load.
vitess/examples/local$ ./vtgate-up.sh
Run a client application
- The client.py file is a simple sample application that connects to vtgate and executes some queries. To run it, you need to either:
- Add the Vitess Python packages to your PYTHONPATH.
- Or use the client.sh wrapper script, which temporarily sets up the environment and then runs client.py.
vitess/examples/local$ ./client.sh
example output:
Inserting into master…
Reading from master…
(1L, 'V is for speed')
Reading from replica…
(1L, 'V is for speed')
Stop Vitess cluster
- Each -up.sh script has a corresponding -down.sh script to stop the servers.
vitess/examples/local$ ./vtgate-down.sh
vitess/examples/local$ ./vttablet-down.sh
vitess/examples/local$ ./vtctld-down.sh
vitess/examples/local$ ./zk-down.sh
- Note that the -down.sh scripts will leave behind any data files created. If you’re done with this example data, you can clear out the contents of VTDATAROOT:
cd $VTDATAROOT
/path/to/vtdataroot$ rm -rf *