MongoDB Replica Sets + Sharding for Large-Scale Data Storage
MongoDB Auto-Sharding solves the problems of massive storage and dynamic scaling, but it still falls short of the high reliability and high availability required in a real production environment. Hence the "Replica Sets + Sharding" solution:

Shard: each shard is a Replica Set, so every data node has a backup, automatic failover and automatic recovery.
Config: use three config servers to ensure the integrity of the metadata.
Route: use three routing processes (mongos) to balance load and improve client access performance.

The basic layout is as follows:

Host     IP              Services and ports
ServerA  192.168.10.150  mongod shard1_1:27017, mongod shard2_1:27018, mongod config1:20000, mongos1:30000
ServerB  192.168.10.151  mongod shard1_2:27017, mongod shard2_2:27018, mongod config2:20000, mongos2:30000
ServerC  192.168.10.154  mongod shard1_3:27017, mongod shard2_3:27018, mongod config3:20000, mongos3:30000

Create the data directories

ServerA:
mkdir -p /data/shardset/{shard1_1,shard2_1,config}/
ServerB:
mkdir -p /data/shardset/{shard1_2,shard2_2,config}/
ServerC:
mkdir -p /data/shardset/{shard1_3,shard2_3,config}/

Configure the replica sets

Configure the Replica Set used by shard1

ServerA:
mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shardset/shard1_1 --logpath /data/shardset/shard1_1/shard1_1.log --logappend --fork
ServerB:
mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shardset/shard1_2 --logpath /data/shardset/shard1_2/shard1_2.log --logappend --fork
ServerC:
mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shardset/shard1_3 --logpath /data/shardset/shard1_3/shard1_3.log --logappend --fork

Use mongo to connect to the mongod listening on port 27017 of any one of the machines and initialize the Replica Set "shard1":

[root@template ~]# mongo --port 27017
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:27017/test
> config = {_id: 'shard1', members: [{_id: 0, host: '192.168.10.150:27017'}, {_id: 1, host: '192.168.10.151:27017'}, {_id: 2, host: '192.168.10.154:27017'}]}
{
    "_id" : "shard1",
    "members" : [
        { "_id" : 0, "host" : "192.168.10.150:27017" },
        { "_id" : 1, "host" : "192.168.10.151:27017" },
        { "_id" : 2, "host" : "192.168.10.154:27017" }
    ]
}
> rs.initiate(config)
{
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}

Configure the Replica Set used by shard2

ServerA:
mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shardset/shard2_1 --logpath /data/shardset/shard2_1/shard2_1.log --logappend --fork
ServerB:
mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shardset/shard2_2 --logpath /data/shardset/shard2_2/shard2_2.log --logappend --fork
ServerC:
mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shardset/shard2_3 --logpath /data/shardset/shard2_3/shard2_3.log --logappend --fork

Use mongo to connect to the mongod listening on port 27018 of any one of the machines and initialize the Replica Set "shard2":

[root@template ~]# mongo --port 27018
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:27018/test
> config = {_id: 'shard2', members: [{_id: 0, host: '192.168.10.150:27018'}, {_id: 1, host: '192.168.10.151:27018'}, {_id: 2, host: '192.168.10.154:27018'}]}
{
    "_id" : "shard2",
    "members" : [
        { "_id" : 0, "host" : "192.168.10.150:27018" },
        { "_id" : 1, "host" : "192.168.10.151:27018" },
        { "_id" : 2, "host" : "192.168.10.154:27018" }
    ]
}
> rs.initiate(config)
{
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}

Configure the three Config Servers

Run on Server A, B and C:
mongod --configsvr --dbpath /data/shardset/config --port 20000 --logpath /data/shardset/config/config.log --logappend --fork

Configure the three Route Processes

Run on Server A, B and C:
mongos --configdb 192.168.10.150:20000,192.168.10.151:20000,192.168.10.154:20000 --port 30000 --chunkSize 1 --logpath /data/shardset/mongos.log --logappend --fork
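Before adding the shards to the cluster, it is worth confirming that both replica sets have finished initializing and elected a primary. The check below is a minimal sketch using the standard rs.status() shell helper; it is not part of the original walkthrough, and the expected states are an assumption about a healthy three-member set:

[root@template ~]# mongo --port 27017
> rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr); })
// expect one member reporting PRIMARY and two reporting SECONDARY;
// repeat against port 27018 to check the shard2 replica set
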
Configure the Shard Cluster

Connect to the mongos process on port 30000 of one of the machines, switch to the admin database, and apply the following configuration:

[root@template ~]# mongo --port 30000
MongoDB shell version: 2.6.0
connecting to: 127.0.0.1:30000/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
    http://docs.mongodb.org/
Questions? Try the support group
    http://groups.google.com/group/mongodb-user
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard: "shard1/192.168.10.150:27017,192.168.10.151:27017,192.168.10.154:27017"});
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({addshard: "shard2/192.168.10.150:27018,192.168.10.151:27018,192.168.10.154:27018"});
{ "shardAdded" : "shard2", "ok" : 1 }

Enable sharding on the database and collection

mongos> db.runCommand({ enablesharding: "test" })
{ "ok" : 1 }
mongos> db.runCommand({ shardcollection: "test.users", key: { _id: 1 }})
{ "collectionsharded" : "test.users", "ok" : 1 }

View the configuration

mongos> use admin
switched to db admin
mongos> db.runCommand({listshards: 1})
{
    "shards" : [
        {
            "_id" : "shard1",
            "host" : "shard1/192.168.10.150:27017,192.168.10.151:27017,192.168.10.154:27017"
        },
        {
            "_id" : "shard2",
            "host" : "shard2/192.168.10.150:27018,192.168.10.151:27018,192.168.10.154:27018"
        }
    ],
    "ok" : 1
}

Verify that sharding works

mongos> use test
switched to db test
mongos> for (var i = 1; i <= 200000; i++) db.users.insert({id: i, addr_1: "Beijing", addr_2: "Shanghai"});
WriteResult({ "nInserted" : 1 })
mongos> db.users.stats()
{
    "sharded" : true,
    "systemFlags" : 1,
    "userFlags" : 1,
    "ns" : "test.users",
    "count" : 200000,
    "numExtents" : 13,
    "size" : 22400000,
    "storageSize" : 33689600,
    "totalIndexSize" : 6908720,
    "indexSizes" : { "_id_" : 6908720 },
    "avgObjSize" : 112,
    "nindexes" : 1,
    "nchunks" : 13,
    "shards" : {
        "shard1" : {
            "ns" : "test.users",
            "count" : 147600,
            "size" : 16531200,
            "avgObjSize" : 112,
            "storageSize" : 22507520,
            "numExtents" : 7,
            "nindexes" : 1,
            "lastExtentSize" : 11325440,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 1,
            "totalIndexSize" : 4807488,
            "indexSizes" : { "_id_" : 4807488 },
            "ok" : 1
        },
        "shard2" : {
            "ns" : "test.users",
            "count" : 52400,
            "size" : 5868800,
            "avgObjSize" : 112,
            "storageSize" : 11182080,
            "numExtents" : 6,
            "nindexes" : 1,
            "lastExtentSize" : 8388608,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 1,
            "totalIndexSize" : 2101232,
            "indexSizes" : { "_id_" : 2101232 },
            ...

The 200,000 documents are split across the two shards (147,600 on shard1 and 52,400 on shard2), which confirms that sharding is working.
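To see how the chunks of test.users are distributed across shard1 and shard2 in more detail, the standard sharding-status helpers of the mongos shell can also be used. This is a minimal sketch, not output captured from the environment above:

mongos> db.printShardingStatus()
// or, equivalently:
mongos> sh.status()
// the output lists each sharded collection together with its chunk ranges
// and the shard each chunk currently resides on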