MongoDB Auto-Sharding solves the problems of mass storage and dynamic expansion, but on its own it still falls short of the high reliability and high availability required in a real production environment. Hence the "Replica Sets + Sharding" solution.
Building a MongoDB Sharding Cluster requires three roles:
Shard Server: mongod instances that store the actual data chunks. In a production environment, each shard server role should be carried by a replica set of several machines, to avoid a single point of failure on one host.
Config Server: mongod instances that store the metadata of the entire cluster, including the chunk information.
Route Server: mongos instances that act as the front-end router. Clients connect here, which makes the whole cluster look like a single database, so front-end applications can use it transparently.

Because MongoDB secondary nodes can vote, a replica set can fail over automatically as long as no more than half of its members are down. This example therefore does not use an arbiter node (an arbiter is required only when there are just 2 data-bearing replicas). In a production environment, however, it is recommended to consider an arbiter node to better guarantee high availability; see the sketch below.
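If you did want an arbiter, a minimal sketch in the mongo shell would look like the following. The arbiter host 192.168.100.120 is hypothetical and not part of this deployment; it would run one extra lightweight mongod started with the same replSet name (e.g. --replSet shard_a):

// Run from the PRIMARY of shard_a after the arbiter's mongod is up.
// rs.addArb() adds the member as a voting-only arbiter that stores no data.
rs.addArb("192.168.100.120:10000");
rs.status();   // the new member should appear with stateStr "ARBITER"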

This example uses 3 servers and two shards (shard_a and shard_b); each shard is built as a replica set with 3 members.

The example architecture is shown below :

On each of the 3 machines, run one mongod instance (call them mongod shard_a_1, mongod shard_a_2, and mongod shard_a_3); together they form replica set 1, which serves as the cluster's shard_a.

On each of the 3 machines, run another mongod instance (call them mongod shard_b_1, mongod shard_b_2, and mongod shard_b_3); together they form replica set 2, which serves as the cluster's shard_b.

Each machine also runs one mongod instance as a config server, giving 3 config servers in total.

Each machine also runs one mongos process for client connections.

Each shard spans 3 servers. Three machines are used at this early stage; when adding servers later, disaster recovery must be considered, so servers should be added in groups of at least three (or a dual-machine scheme can be used), as sketched below.
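Expanding later would mean bringing up another 3-member replica set and registering it as a new shard from mongos. A minimal sketch, using the same addshard command format as later in this article; the shard name shard_c and its hosts and port are hypothetical:

// In the mongo shell connected to any mongos (port 30000):
use admin
db.runCommand({addshard:"shard_c/192.168.100.120:10002,192.168.100.121:10002,192.168.100.122:10002", name:"shard_c"});
// The balancer then migrates chunks of sharded collections onto shard_c automatically.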

Host      IP               Processes and ports
server1   192.168.100.90   mongod shard_a: 10000, mongod shard_b: 10001, mongod config: 20000, mongos: 30000
server2   192.168.100.110  mongod shard_a: 10000, mongod shard_b: 10001, mongod config: 20000, mongos: 30000
server3   192.168.110.71   mongod shard_a: 10000, mongod shard_b: 10001, mongod config: 20000, mongos: 30000

One. Install MongoDB (omitted)
Two. Configuration
1. Create the data directories
On each of the 3 machines, create the log directory, conf directory, and data directories (for the two shard replicas and the config server) under the MongoDB installation directory:
mkdir -p logs
mkdir -p conf
mkdir -p data/shard_a
mkdir -p data/shard_b
mkdir -p data/config
2. Create the configuration files
Create the following configuration files in the conf directory.
shard_a.conf
port=10000
pidfilepath=/home/slim/mongodb-2.6.8/data/shard_a.pid
dbpath=/home/slim/mongodb-2.6.8/data/shard_a
directoryperdb=true
logpath=/home/slim/mongodb-2.6.8/logs/shard_a.log
logappend=true
fork=true
profile=1
slowms=5
noprealloc=false
replSet=shard_a
oplogSize=100
shardsvr=true

shard_b.conf
port=10001
pidfilepath=/home/slim/mongodb-2.6.8/data/shard_b.pid
dbpath=/home/slim/mongodb-2.6.8/data/shard_b
directoryperdb=true
logpath=/home/slim/mongodb-2.6.8/logs/shard_b.log
logappend=true
fork=true
profile=1
slowms=5
noprealloc=false
replSet=shard_b
oplogSize=100
shardsvr=true

config.conf
port=20000
pidfilepath=/home/slim/mongodb-2.6.8/data/config.pid
dbpath=/home/slim/mongodb-2.6.8/data/config
directoryperdb=true
logpath=/home/slim/mongodb-2.6.8/logs/config.log
logappend=true
fork=true
profile=0
configsvr=true

mongos.conf
port=30000
logpath=/home/slim/mongodb-2.6.8/logs/mongos.log
logappend=true
fork=true
maxConns=1000
chunkSize=100
configdb=192.168.100.90:20000,192.168.100.110:20000,192.168.110.71:20000
Three. Configure the Replica Sets
Start the shard server processes on each of the 3 machines:
./bin/mongod -f conf/shard_a.conf &
./bin/mongod -f conf/shard_b.conf &
Once all of them are up, log in to any one of the mongod instances and configure the replica set, as follows:
1. Configure shard_a
Log in to 192.168.100.90:10000:
./bin/mongo 192.168.100.90:10000
MongoDB shell version: 2.0.6
connecting to: 192.168.110.71:10000/test
> use admin;
switched to db admin
> config_shard_a={_id:'shard_a', members:[{_id:0, host:'192.168.110.71:10000'}, {_id:1, host:'192.168.100.90:10000'}, {_id:2, host:'192.168.100.110:10000'}]};
{ "_id" : "shard_a", "members" : [ { "_id" : 0, "host" : "192.168.110.71:10000" }, { "_id" : 1, "host" : "192.168.100.90:10000" }, { "_id" : 2, "host" : "192.168.100.110:10000" } ] }
> rs.initiate(config_shard_a);
{ "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }
> rs.status();
{
  "set" : "shard_a",
  "date" : ISODate("2015-03-17T10:16:03Z"),
  "myState" : 1,
  "members" : [
    { "_id" : 0, "name" : "192.168.110.71:10000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 179, "optime" : { "t" : 1426587322000, "i" : 1 }, "optimeDate" : ISODate("2015-03-17T10:15:22Z"), "electionTime" : { "t" : 1426587331000, "i" : 1 }, "electionDate" : ISODate("2015-03-17T10:15:31Z"), "self" : true },
    { "_id" : 1, "name" : "192.168.100.90:10000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 40, "optime" : { "t" : 1426587322000, "i" : 1 }, "optimeDate" : ISODate("2015-03-17T10:15:22Z"), "lastHeartbeat" : ISODate("2015-03-17T10:16:03Z"), "lastHeartbeatRecv" : ISODate("2015-03-17T10:16:02Z"), "pingMs" : 0, "syncingTo" : "192.168.110.71:10000" },
    { "_id" : 2, "name" : "192.168.100.110:10000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 40, "optime" : { "t" : 1426587322000, "i" : 1 }, "optimeDate" : ISODate("2015-03-17T10:15:22Z"), "lastHeartbeat" : ISODate("2015-03-17T10:16:03Z"), "lastHeartbeatRecv" : ISODate("2015-03-17T10:16:02Z"), "pingMs" : 0, "syncingTo" : "192.168.110.71:10000" }
  ],
  "ok" : 1
}
2. Configure shard_b
Log in to 192.168.100.90:10001:
./bin/mongo 192.168.100.90:10001
MongoDB shell version: 2.0.6
connecting to: 192.168.110.71:10001/test
> use admin;
switched to db admin
> config_shard_b={_id:'shard_b', members:[{_id:0, host:'192.168.110.71:10001'}, {_id:1, host:'192.168.100.90:10001'}, {_id:2, host:'192.168.100.110:10001'}]};
{ "_id" : "shard_b", "members" : [ { "_id" : 0, "host" : "192.168.110.71:10001" }, { "_id" : 1, "host" : "192.168.100.90:10001" }, { "_id" : 2, "host" : "192.168.100.110:10001" } ] }
> rs.initiate(config_shard_b);
{ "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }
> rs.status();
{
  "set" : "shard_b",
  "date" : ISODate("2015-03-17T10:20:52Z"),
  "myState" : 1,
  "members" : [
    { "_id" : 0, "name" : "192.168.110.71:10001", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 175, "optime" : { "t" : 1426587595000, "i" : 1 }, "optimeDate" : ISODate("2015-03-17T10:19:55Z"), "electionTime" : { "t" : 1426587604000, "i" : 1 }, "electionDate" : ISODate("2015-03-17T10:20:04Z"), "self" : true },
    { "_id" : 1, "name" : "192.168.100.90:10001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 56, "optime" : { "t" : 1426587595000, "i" : 1 }, "optimeDate" : ISODate("2015-03-17T10:19:55Z"), "lastHeartbeat" : ISODate("2015-03-17T10:20:51Z"), "lastHeartbeatRecv" : ISODate("2015-03-17T10:20:51Z"), "pingMs" : 0, "syncingTo" : "192.168.110.71:10001" },
    { "_id" : 2, "name" : "192.168.100.110:10001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 56, "optime" : { "t" : 1426587595000, "i" : 1 }, "optimeDate" : ISODate("2015-03-17T10:19:55Z"), "lastHeartbeat" : ISODate("2015-03-17T10:20:51Z"), "lastHeartbeatRecv" : ISODate("2015-03-17T10:20:51Z"), "pingMs" : 0, "syncingTo" : "192.168.110.71:10001" }
  ],
  "ok" : 1
}

Four. Configure Sharding
1. Start the config server on each machine
./bin/mongod -f conf/config.conf &
2. Start the routing service on each machine
 ./bin/mongos -f conf/mongos.conf &
3. Configure the shards from the routing node
./bin/mongo 192.168.100.110:30000
MongoDB shell version: 2.0.6
connecting to: 192.168.110.71:30000/test
mongos> use admin;
switched to db admin
mongos> db.runCommand({addshard:"shard_a/192.168.100.90:10000,192.168.100.110:10000,192.168.110.71:10000",name:"shard_a"});
{ "shardAdded" : "shard_a", "ok" : 1 }
mongos> db.runCommand({addshard:"shard_b/192.168.100.90:10001,192.168.100.110:10001,192.168.110.71:10001",name:"shard_b"});
{ "shardAdded" : "shard_b", "ok" : 1 }
mongos> db.adminCommand({listshards:1});
{ "shards" : [ { "_id" : "shard_a", "host" : "shard_a/192.168.100.110:10000,192.168.100.90:10000,192.168.110.71:10000" }, { "_id" : "shard_b", "host" : "shard_b/192.168.100.110:10001,192.168.100.90:10001,192.168.110.71:10001" } ], "ok" : 1 }
4. Enable sharding for the database and collection
Enable sharding on the database:

mongos> db.runCommand({enablesharding:"test"});
{ "ok" : 1 }

Shard the collection; a shard key must be specified:
mongos> db.runCommand({shardcollection:"test.user", key:{_id:'hashed'}});
{ "collectionsharded" : "test.user", "ok" : 1 }

View the sharding status:

mongos> db.printShardingStatus();
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 4, "minCompatibleVersion" : 4, "currentVersion" : 5, "clusterId" : ObjectId("550800a2be0c27329d8222b9") }
  shards:
    { "_id" : "shard_a", "host" : "shard_a/192.168.100.110:10000,192.168.100.90:10000,192.168.110.71:10000" }
    { "_id" : "shard_b", "host" : "shard_b/192.168.100.110:10001,192.168.100.90:10001,192.168.110.71:10001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard_a" }
      test.user chunks:
        shard_a 2
        shard_b 2
        { "_id" : { $minKey : 1 } } -->> { "_id" : NumberLong("-4611686018427387902") } on : shard_a { "t" : 2000, "i" : 2 }
        { "_id" : NumberLong("-4611686018427387902") } -->> { "_id" : NumberLong(0) } on : shard_a { "t" : 2000, "i" : 3 }
        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("4611686018427387902") } on : shard_b { "t" : 2000, "i" : 4 }
        { "_id" : NumberLong("4611686018427387902") } -->> { "_id" : { $maxKey : 1 } } on : shard_b { "t" : 2000, "i" : 5 }

Five. Test
Connect to the mongos process on port 30000 of any of the machines, switch to the test database, and insert some test data:
for(var i=1;i<=20000;i++)
db.user.insert({name:"test"+i,age:40,addr:"beijing"});
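Optionally, to see how a query is routed, you can run an explain from mongos. This is not part of the original walkthrough and the exact output fields vary by version; with a hashed _id shard key, a query on the non-shard-key field name is scatter-gather, so both shards should appear:

// Spot check: which shards serve this query?
db.user.find({ name: "test100" }).explain();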
View the results:
mongos> db.user.stats();
{
  "sharded" : true,
  "systemFlags" : 1,
  "userFlags" : 1,
  "ns" : "test.user",
  "count" : 20000,
  "numExtents" : 10,
  "size" : 2240000,
  "storageSize" : 5586944,
  "totalIndexSize" : 1700608,
  "indexSizes" : { "_id_" : 670432, "_id_hashed" : 1030176 },
  "avgObjSize" : 112,
  "nindexes" : 2,
  "nchunks" : 4,
  "shards" : {
    "shard_a" : { "ns" : "test.user", "count" : 10035, "size" : 1123920, "avgObjSize" : 112, "storageSize" : 2793472, "numExtents" : 5, "nindexes" : 2, "lastExtentSize" : 2097152, "paddingFactor" : 1, "systemFlags" : 1, "userFlags" : 1, "totalIndexSize" : 850304, "indexSizes" : { "_id_" : 335216, "_id_hashed" : 515088 }, "ok" : 1 },
    "shard_b" : { "ns" : "test.user", "count" : 9965, "size" : 1116080, "avgObjSize" : 112, "storageSize" : 2793472, "numExtents" : 5, "nindexes" : 2, "lastExtentSize" : 2097152, "paddingFactor" : 1, "systemFlags" : 0, "userFlags" : 1, "totalIndexSize" : 850304, "indexSizes" : { "_id_" : 335216, "_id_hashed" : 515088 }, "ok" : 1 }
  },
  "ok" : 1
}
The result shows how the documents are distributed across the two shards:
shard_a:10035
shard_b:9965
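Another convenient summary of the per-shard split is the getShardDistribution() shell helper; shown here only as an optional extra check, and its exact output format depends on the shell version:

// Prints per-shard document counts, data size, and chunk counts for test.user
db.user.getShardDistribution();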
