  1. Download the community edition and install the deb file
  2. Start the server
voltdb create
  3. Execute the voltdb CLI
  4. Create a table
create table test_table (column1 varchar(10));
  5. Insert a new row
insert into test_table values ('test data');
  6. Query the table
select * from test_table;

Data Storage

After restarting the VoltDB server, all saved data will be cleared.

Reserved TCP Ports

Default Port  Description
21212         Client Port
21211         Admin Port
8080          Web Interface Port (httpd)
3021          Internal Server Port
4560          Log Port
9090          JMX Port
5555          Replication Port
7181          ZooKeeper Port


EBook Fast Data

VoltDB Documentation


Install Dependencies

  1. Install Go
  2. Install MySQL 5.6 or Percona Server
  3. Install the following other tools needed to build and run Vitess:
  • make
  • automake
  • libtool
  • memcached
  • python-dev
  • python-virtualenv
  • python-mysqldb
  • libssl-dev
  • g++
  • mercurial
  • git
  • pkg-config
  • bison
  • curl
  • unzip

These can be installed with the following apt-get command:

sudo apt-get install make automake libtool memcached python-dev python-virtualenv python-mysqldb libssl-dev g++ mercurial git pkg-config bison curl unzip
  4. Install the JDK

Build Vitess

  1. Set the WORKSPACE environment variable

export WORKSPACE=/opt/vt/

  2. Clone Vitess
git clone src/
cd src/
  3. Set the MYSQL_FLAVOR environment variable.
    export MYSQL_FLAVOR=MySQL56
  4. Run mysql_config --version and confirm that you are running the correct version of MariaDB or MySQL. The value should be 5.6.x for MySQL.
  5. Build Vitess using the ./ commands.
  6. If an error occurs while downloading and extracting the ZooKeeper archive, you can download it manually, copy it into the $VTROOT/dist path, and comment out the code below at line 38:
wget$zk_ver/zookeeper-$zk_ver.tar.gz && \
  7. If the installation completes successfully, you should see the message below at the end of the output

example output:
skipping zookeeper build
go install …
Found MariaDB installation in …
skipping bson python build
creating git pre-commit hooks

source dev.env in your shell before building

  8. If the previous step is unsuccessful and the error is in installing a package with Python pip, we solve this problem with the steps below

* Because gRPC installs a new instance of Python in its own path, we should install gRPC, protobuf, and ZooKeeper in the /usr/ directory. To do this, we should perform some steps before running the new bootstrap version
  • Set a password for the root user

sudo passwd root
  • Change to the root user
su root
  • Set the /usr/ directory permissions to 777
chmod 777 -R /usr/
  • Change the sudo file permissions
a. chown root:root /usr/bin/sudo
b. chmod 4755 /usr/bin/sudo
c. mv -iv /etc/sudo.conf /etc/sudo.conf.old
  • Copy and replace our new files into the vitess directory and execute the file again
  • With this solution, Vitess uses the server's Python, and all library files will be installed under the /usr/ path
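The mode 4755 used above for /usr/bin/sudo can be unpacked with a quick check on a scratch file (an illustration only, not part of the install):

```shell
# chmod 4755: the leading 4 is the setuid bit, which /usr/bin/sudo needs
# so it can run as root regardless of who invokes it; 755 is rwxr-xr-x.
f=$(mktemp)
chmod 4755 "$f"
stat -c '%a' "$f"   # GNU stat prints the octal mode, including the setuid digit
rm -f "$f"
```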
  9. After Vitess bootstraps successfully, build it with this command
make build
  10. If you encounter problems with Go downloads, you should download the following packages from these links and extract them into the given paths

Package: net
Destination: $GOROOT/src/
Package: oauth2
Destination: $GOROOT/src/
Package: tools
Destination: $GOROOT/src/
Package: gocloud
Destination: $GOROOT/src/
Package: google-api-go-client
Destination: $GOROOT/src/

Run Tests

  1. After building Vitess successfully, if you only want to check that Vitess is working in your environment, you can run a lighter set of tests
make site_test

Start a Vitess cluster

  1. Set VTROOT to the parent of the Vitess source tree. For example, if you ran make build while in /opt/vt/src/, then you should set:
export VTROOT=/opt/vt
  2. Set VTDATAROOT to the directory where you want data files and logs to be stored. For example:
export VTDATAROOT=$VTROOT/vtdataroot
  3. Servers in a Vitess cluster find each other by looking for dynamic configuration data stored in a distributed lock service. The following script creates a small ZooKeeper cluster:
cd $VTROOT/src/vitess/examples/local
./

example output:
Starting zk servers
Waiting for zk servers to be ready

After the ZooKeeper cluster is running, we only need to tell each Vitess process how to connect to ZooKeeper. Then, each process can find all of the other Vitess processes by coordinating via ZooKeeper.
Each of our scripts automatically sets the ZK_CLIENT_CONFIG environment variable to point to the zk-client-conf.json file, which contains the ZooKeeper server addresses for each cell.

  4. The vtctld server provides a web interface that displays all of the coordination information stored in ZooKeeper.
vitess/examples/local ./

Starting vtctld
Access vtctld web UI at http://localhost:15000
Send commands with: vtctlclient -server localhost:15999
Open http://localhost:15000 to verify that vtctld is running.

There won’t be any information there yet, but the menu should come up, which indicates that vtctld is running.

The vtctld server also accepts commands from the vtctlclient tool, which is used to administer the cluster. Note that the port for RPCs (in this case 15999) is different from the web UI port (15000). These ports can be configured with command-line flags, as demonstrated in

List available commands

$VTROOT/bin/vtctlclient -server localhost:15999 Help
  5. The script brings up three vttablets, and assigns them to a keyspace and shard according to the variables set at the top of the script file.
vitess/examples/local$ ./

The output is below:
Starting MySQL for tablet test-0000000100…
Starting vttablet for test-0000000100…
Access tablet test-0000000100 at http://localhost:15100/debug/status
Starting MySQL for tablet test-0000000101…
Starting vttablet for test-0000000101…
Access tablet test-0000000101 at http://localhost:15101/debug/status
Starting MySQL for tablet test-0000000102…
Starting vttablet for test-0000000102…
Access tablet test-0000000102 at http://localhost:15102/debug/status

After this command completes, refresh the vtctld web UI, and you should see a keyspace named test_keyspace with a single shard named 0. This is what an unsharded keyspace looks like.

If you click on the shard box, you’ll see a list of tablets in that shard. Note that it’s normal for the tablets to be unhealthy at this point, since you haven’t initialized them yet.

  6. By launching tablets assigned to a nonexistent keyspace, we've essentially created a new keyspace. To complete the initialization of the local topology data, perform a keyspace rebuild:
$VTROOT/bin/vtctlclient -server localhost:15999 RebuildKeyspaceGraph test_keyspace
  7. Next, designate one of the tablets to be the initial master. Vitess will automatically connect the other slaves' mysqld instances so that they start replicating from the master's mysqld. This is also when the default database is created. Since our keyspace is named test_keyspace, the MySQL database will be named vt_test_keyspace.
$VTROOT/bin/vtctlclient -server localhost:15999 InitShardMaster -force test_keyspace/0 test-0000000100

example output:
master-elect tablet test-0000000100 is not the shard master, proceeding anyway as -force was used
master-elect tablet test-0000000100 is not a master in the shard, proceeding anyway as -force was used

Note: Since this is the first time the shard has been started, the tablets are not already doing any replication, and there is no existing master. The InitShardMaster command above uses the -force flag to bypass the usual sanity checks that would apply if this wasn’t a brand new shard.

After running this command, go back to the Shard Status page in the vtctld web interface. When you refresh the page, you should see that one vttablet is the master and the other two are replicas.

You can also see this on the command line:

$VTROOT/bin/vtctlclient -server localhost:15999 ListAllTablets test

example output:
test-0000000100 test_keyspace 0 master localhost:15100 localhost:33100 [] test-0000000101 test_keyspace 0 replica localhost:15101 localhost:33101 [] test-0000000102 test_keyspace 0 replica localhost:15102 localhost:33102 []
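For scripting, the master can be picked out of that output with a one-liner like the following (using the sample output above, one record per line; column 4 is the tablet type). In a live cluster you would pipe `$VTROOT/bin/vtctlclient -server localhost:15999 ListAllTablets test` into the awk filter instead:

```shell
# Sample `vtctlclient ListAllTablets` output: alias, keyspace, shard, type,
# web address, mysql address, tags.
tablets='test-0000000100 test_keyspace 0 master localhost:15100 localhost:33100 []
test-0000000101 test_keyspace 0 replica localhost:15101 localhost:33101 []
test-0000000102 test_keyspace 0 replica localhost:15102 localhost:33102 []'

# Print the alias of the master tablet.
master=$(printf '%s\n' "$tablets" | awk '$4 == "master" { print $1 }')
echo "$master"   # test-0000000100
```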

  8. The vtctlclient tool can be used to apply the database schema across all tablets in a keyspace. The following command creates the table defined in the create_test_table.sql file:

Make sure to run this from the examples/local dir, so it finds the file.

vitess/examples/local $VTROOT/bin/vtctlclient -server localhost:15999 ApplySchema -sql "$(cat create_test_table.sql)" test_keyspace
  9. Vitess uses vtgate to route each client query to the correct vttablet. This local example runs a single vtgate instance, though a real deployment would likely run multiple vtgate instances to share the load.
vitess/examples/local$ ./

Stop Vitess cluster

  10. The file is a simple sample application that connects to vtgate and executes some queries. To run it, you need to either:
  • Add the Vitess Python packages to your PYTHONPATH.
  • Use the wrapper script, which temporarily sets up the environment and then runs it
vitess/examples/local$ ./

example output:
Inserting into master…
Reading from master…
(1L, 'V is for speed')
Reading from replica…
(1L, 'V is for speed')

  11. Each script has a corresponding script to stop the servers.
vitess/examples/local$ ./
vitess/examples/local$ ./
vitess/examples/local$ ./
vitess/examples/local$ ./
  12. Note that the scripts will leave behind any data files created. If you're done with this example data, you can clear out the contents of VTDATAROOT:
/path/to/vtdataroot$ rm -rf *
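Since `rm -rf` with an unset variable can be destructive, a slightly more defensive version of this cleanup is sketched below (using a temporary directory as a stand-in for the real VTDATAROOT):

```shell
# ${VAR:?} aborts with an error if VAR is unset or empty, so the rm -rf
# can never expand to a bare /* .
VTDATAROOT=$(mktemp -d)      # stand-in; normally exported earlier in the setup
touch "$VTDATAROOT/somefile"
rm -rf "${VTDATAROOT:?VTDATAROOT is not set}"/*
ls -A "$VTDATAROOT"          # prints nothing: the directory is now empty
```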




Download Apache Solr from this link.
Unzip the Solr release and change your working directory to the subdirectory where Solr was installed

ls solr*
unzip -q
cd solr-5.3.1/

To launch Solr, run: bin/solr start -e cloud -noprompt

/solr-5.3.1:$ bin/solr start -e cloud -noprompt

Welcome to the SolrCloud example!

Starting up 2 Solr nodes for your example SolrCloud cluster.

Started Solr server on port 8983 (pid=8404). Happy searching!

Started Solr server on port 7574 (pid=8549). Happy searching!

SolrCloud example running, please visit http://localhost:8983/solr

/solr-5.3.1:$ _

You can load the Solr Admin UI in your web browser: http://localhost:8983/solr/

Indexing Data

The Solr server is up and running, but it doesn't contain any data. The Solr install includes the bin/post tool to facilitate getting various types of documents easily into Solr from the start. We'll be using this tool for the indexing examples below.

bin/post -c gettingstarted docs/

Here’s what it’ll look like:

/solr-5.3.1:$ bin/post -c gettingstarted docs/
java -classpath /solr-5.3.1/dist/solr-core-5.3.1.jar -Dauto=yes -Dc=gettingstarted -Ddata=files -Drecursive=yes org.apache.solr.util.SimplePostTool docs/
SimplePostTool version 5.3.1
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
Entering recursive mode, max depth=999, delay=0s
Indexing directory docs (3 files, depth=0)
POSTing file index.html (text/html) to [base]/extract
POSTing file quickstart.html (text/html) to [base]/extract
POSTing file SYSTEM_REQUIREMENTS.html (text/html) to [base]/extract
Indexing directory docs/changes (1 files, depth=1)
POSTing file Changes.html (text/html) to [base]/extract
3248 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/gettingstarted/update...
Time spent: 0:00:41.660

The command-line breaks down as follows:

-c gettingstarted: name of the collection to index into
docs/: a relative path of the Solr install docs/ directory
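In auto mode the tool decides what to post by file ending; the selection amounts to a suffix match, roughly like the sketch below (a short sample list and a subset of the endings from the output above, not the tool's actual code):

```shell
# Keep only files whose endings bin/post's auto mode would consider
# (a few of the endings from the list printed by the tool).
files='index.html data.csv photo.png notes.txt archive.zip'
kept=''
for f in $files; do
  case "$f" in
    *.xml|*.json|*.csv|*.pdf|*.html|*.txt) kept="${kept:+$kept }$f" ;;
  esac
done
echo "$kept"   # index.html data.csv notes.txt
```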


Solr can be queried via REST clients, cURL, wget, Chrome POSTMAN, etc., as well as via the native clients available for many programming languages.

The Solr Admin UI includes a query builder interface – see the gettingstarted query tab at http://localhost:8983/solr/#/gettingstarted_shard1_replica1/query. If you click the Execute Query button without changing anything in the form, you'll get 10 documents in JSON format (*:* in the q param matches all documents):

Search for a single term

To search for a term, give it as the q param value in the core-specific Solr Admin UI Query section, replacing *:* with the term you want to find. To search for "foundation":

curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation"

You’ll see:

/solr-5.3.1$ curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation"

The response indicates that there are 2,812 hits (“numFound”:2812), of which the first 10 were returned, since by default start=0 and rows=10. You can specify these params to page through results, where start is the (zero-based) position of the first result to return, and rows is the page size.
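The start offset for any page follows directly from those two params; for example (zero-based page numbers, hypothetical page size):

```shell
# start = page * rows: the offset of the first result on a given page.
rows=10
page=3                           # zero-based, i.e. the fourth page of results
start=$((page * rows))
echo "start=$start&rows=$rows"   # start=30&rows=10
```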

q=foundation matches nearly all of the docs we’ve indexed, since most of the files under docs/ contain “The Apache Software Foundation”. To restrict search to a particular field, use the syntax “q=field:value”, e.g. to search for foundation only in the name field:

curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=name:foundation"

The above request returns only one document (“numFound”:1) – from the response:


Phrase search

To search for a multi-term phrase, enclose it in double quotes: q="multiple terms here". E.g. to search for "CAS latency" – note that the space between terms must be converted to "+" in a URL (the Admin UI will handle URL encoding for you automatically):

curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=\"CAS+latency\""

You’ll get back:

"q":"\"CAS latency\"",
"name":"A-DATA V-Series 1GB 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) System Memory - OEM",
"manu":"A-DATA Technology Inc.",
"cat":["electronics", "memory"],
"features":["CAS latency 3,\t 2.7v"],

You can require that a term or phrase is present by prefixing it with a "+"; conversely, to disallow the presence of a term or phrase, prefix it with a "-".

To find documents that contain both terms “one” and “three”, enter +one +three in the q param in the core-specific Admin UI Query tab. Because the “+” character has a reserved purpose in URLs (encoding the space character), you must URL encode it for curl as “%2B”:

curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=%2Bone+%2Bthree"
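The %2B encoding above can be produced mechanically; a small sed-based sketch (encode literal '+' first, then turn spaces into '+'):

```shell
# Encode a q value for use in a URL: literal '+' -> %2B, space -> '+'.
q='+one +three'
encoded=$(printf '%s' "$q" | sed -e 's/+/%2B/g' -e 's/ /+/g')
echo "q=$encoded"   # q=%2Bone+%2Bthree
```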


Solr Quick Start

  1. Register on the IBM web site and download the Express-C edition via a proxy web site.
  2. Download the manuals
  3. Extract v10.5_linuxx64_nlpack.tar.gz to /opt/ibm_db2
  4. Execute db2prereqcheck from /opt/ibm_db2 to determine the prerequisites of DB2
  5. If the server's operating system is 64-bit Ubuntu, I recommend reading this link
  6. After db2prereqcheck matches all dependencies, you can run db2install
  7. After the installation is done, the /opt/ibm/ directory and its child directories will be created
  8. Then you can run db2setup to install DB2, and you can follow the instructions in DB2InstallingServers-db2ise1051.pdf, page 125
  9. After setup is successfully done, the user db2inst1 has been created.
  10. In a terminal, log in as this user using the command sudo su db2inst1
  11. Start the database manager by entering the db2start command.
  12. Enter the db2sampl command to create the SAMPLE database. This command can take a few minutes to process. There is no completion message; when the command prompt returns, the process is complete. The SAMPLE database is automatically cataloged with the database alias SAMPLE when it is created.
  13. In the terminal, execute db2 and then enter the following:
connect to sample
select * from staff where dept = 20
connect reset
  14. This database management system stores data on disk, so a database restart does not affect stored data.

Installation Steps

wget -O aerospike.tgz ''
tar -xvf aerospike.tgz
cd aerospike-server-community-*-ubuntu12*
# will install the .deb packages
sudo ./asinstall # will install the .deb packages
sudo service aerospike start

Write Key Value

ascli put <ns> <set> <key> <record>

The <ns> is the namespace for the record being written.
The <set> is the set for the record being written.
The <key> is the key for the record being written. It can be either an integer or a string.
The <record> is a JSON object representing the record to be written. The JSON object's keys are the bin names and the values are the bin values.
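Putting those pieces together, a put command can be assembled like this (the namespace, set, key, and bin names here are hypothetical examples):

```shell
# Build the JSON record argument: its keys are bin names, its values bin values.
ns='test'
set_name='demo'
key='user1'
record=$(printf '{"name": "%s", "age": %s}' 'Alice' 30)
echo "ascli put $ns $set_name $key '$record'"
```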

Reading Key Value

ascli get <ns> <set> <key>

Reading All Values

  1. Enter aql command in terminal
  2. Enter the query below
select * from <ns>.<set>

Start, Stop and Restart aerospike service

sudo service aerospike start
sudo service aerospike stop
sudo service aerospike restart

Data Storage

After restarting the service, all stored data will be cleared.

Reserved TCP Ports

Name Default Port Description
Service 3000 Application, Tools, and Remote XDR use the Service port for database operations and cluster state.
Fabric 3001 Intra-cluster communication port. Replica writes, migrations, and other node-to-node communications use the Fabric port.
Mesh Heartbeat 3002 Heartbeat protocol ports are used to form and maintain the cluster. (Only one heartbeat port may be configured.)
Multicast Heartbeat 9918 Heartbeat protocol ports are used to form and maintain the cluster. (Only one heartbeat port may be configured.)
Info 3003 Telnet port that implements a plain text protocol for administrators to issue info commands. For more information, see asinfo documentation.
XDR Service 3004 (Optional) Port to query the health state of the Cross Datacenter Replication (XDR) client.


Aerospike Install
Aerospike Networks

  1. Ensure that all Application and XDR nodes can communicate with the service port on all Aerospike nodes. Also ensure that each node can communicate over the configured heartbeat and fabric ports.