// List the on-disk storage size of every collection in the current database.
tables = db.getCollectionNames();
tables.forEach(function (item) {
    // collStats reports the collection's storageSize in bytes.
    stats = db.runCommand({collStats: item});
    sizeGB = stats.storageSize / 1024 / 1024 / 1024;
    prettyGB = Math.round(sizeGB) + 'GB';
    print(item, prettyGB);
});
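To estimate how much of that storage is actually reclaimable, the same collStats output can be inspected further. The snippet below is a minimal sketch, assuming the WiredTiger storage engine and that the "file bytes available for reuse" field is present in the block-manager section of the collStats result:

// Sketch: report how many bytes each collection could give back after a compact.
// Assumes WiredTiger; "file bytes available for reuse" comes from the
// block-manager section of the collStats output.
db.getCollectionNames().forEach(function (item) {
    var stats = db.runCommand({collStats: item});
    var reusable = stats.wiredTiger["block-manager"]["file bytes available for reuse"];
    print(item, (reusable / 1024 / 1024 / 1024).toFixed(2) + ' GB reusable');
});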
// On the primary (older MongoDB versions require force: true to run compact on a primary)
db.runCommand({compact: 'flow_down_stream_info', force: true})
// On a secondary
db.runCommand({compact: 'flow_down_stream_info'})
It is recommended to run compact on a secondary first, and only run it on the primary after confirming there are no problems; a loop like the sketch below can be used to compact every collection in a database.
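This is a minimal sketch, not a production script; the collection set and database are whatever you are connected to, and the force option is only needed on a primary as shown above:

// Sketch: compact every collection in the current database.
// Run against a secondary first; review each result before touching the primary.
db.getCollectionNames().forEach(function (name) {
    var result = db.runCommand({compact: name});
    printjson(result);
});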
Be aware that repairDatabase can potentially cause data corruption:
Just to clarify, please be careful about using repairDatabase on a replica set node. repairDatabase is meant to be used to salvage readable data i.e. after a disk corruption, so it can remove unreadable data and let MongoDB start in the face of disk corruption.
If this node has an undetected disk corruption and you run repairDatabase on it, this could lead to that particular node having different data content than the other nodes as a result of repairDatabase. Since MongoDB assumes all nodes in a replica set contain identical data, this could lead to crashes and hard-to-diagnose problems. Due to its nature, this issue could stay dormant for a long time, and then suddenly manifest itself with a vengeance, seemingly without any apparent reason.
WiredTiger will eventually reuse the empty spaces with new data, and the periodic checkpointing that WiredTiger does could potentially release space to the OS without any intervention on your part.
If you really need to give space back to the OS, then an initial sync is the safest choice if you have a replica set. On a standalone, a dump/restore will achieve the same result. Otherwise, compact is the safer choice vs. repairDatabase. Please back up your data before doing any of these, since in my opinion this would qualify as major maintenance.
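Whichever route is taken, it can be useful to record the database's size statistics before and after the maintenance to confirm that space was actually released. A minimal sketch using db.stats() (the scale argument converts the figures to GB):

// Sketch: snapshot dataSize vs. storageSize (in GB) before and after maintenance.
var s = db.stats(1024 * 1024 * 1024);
print('dataSize:', s.dataSize, 'GB  storageSize:', s.storageSize, 'GB');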
Reference: MongoDB / WiredTiger: reduce storage size after deleting properties from documents
Note that the repairDatabase command does not work on databases that use GridFS.