Optimizing MongoDB performance involves multiple layers, from hardware selection to query optimization. This guide covers essential techniques for improving MongoDB performance in production environments.
```yaml
# In mongod.conf
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8              # Adjust based on available RAM
      journalCompressor: snappy
    collectionConfig:
      blockCompressor: snappy     # or zlib, zstd
    indexConfig:
      prefixCompression: true
```
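For context when tuning `cacheSizeGB`: with no explicit setting, WiredTiger defaults its cache to the larger of 50% of (RAM minus 1 GB) and 256 MB. A sketch of that rule (the helper name is ours, not a MongoDB API):

```javascript
// Sketch of WiredTiger's default cache sizing rule:
// the larger of 50% of (RAM - 1 GB) and 256 MB (0.25 GB).
function defaultWiredTigerCacheGB(ramGB) {
  return Math.max(0.5 * (ramGB - 1), 0.25);
}

console.log(defaultWiredTigerCacheGB(16)); // 7.5
console.log(defaultWiredTigerCacheGB(1));  // 0.25
```

Setting `cacheSizeGB` explicitly, as above, makes sense when other processes share the host and the default would overcommit memory.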
```yaml
# In mongod.conf
net:
  maxIncomingConnections: 65536   # Adjust based on expected load
  wireObjectCheck: true
```
```javascript
// Single field index
db.collection.createIndex({ fieldName: 1 })

// Compound index (order matters!)
db.collection.createIndex({ field1: 1, field2: -1 })

// TTL index for time-based expiration
db.collection.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })

// Text index for full-text search
db.collection.createIndex({ field: "text" })
```
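Order matters in compound indexes because of the prefix rule: `{ field1: 1, field2: -1 }` can serve queries on `field1` alone or on both fields, but not on `field2` alone. A simplified model of that rule (the helper is illustrative, not a MongoDB API):

```javascript
// Sketch: MongoDB's prefix rule for compound indexes. An index can support an
// equality query only if the queried fields form a leading prefix of its keys.
function supportsPrefix(indexKeys, queryFields) {
  const remaining = new Set(queryFields);
  let i = 0;
  while (i < indexKeys.length && remaining.has(indexKeys[i])) {
    remaining.delete(indexKeys[i]);
    i++;
  }
  return remaining.size === 0;
}

console.log(supportsPrefix(["field1", "field2"], ["field1"]));           // true
console.log(supportsPrefix(["field1", "field2"], ["field2"]));           // false
console.log(supportsPrefix(["field1", "field2"], ["field2", "field1"])); // true
```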
Use `db.collection.getIndexes()` to list a collection's indexes and `db.collection.aggregate([{ $indexStats: {} }])` to see how often each one is used. Design queries that can be satisfied entirely by indexes (covered queries):
```javascript
// Create index
db.products.createIndex({ category: 1, price: 1 })

// Query that can be covered by the index
db.products.find(
  { category: "electronics", price: { $gte: 100 } },
  { _id: 0, category: 1, price: 1 }
)
```
```javascript
// Use projections to return only needed fields
db.collection.find(query, { field1: 1, field2: 1 })

// Use sort() with limit() efficiently (ensure sort fields are indexed)
db.collection.find(query).sort({ date: -1 }).limit(10)

// Use aggregation pipeline for complex operations
db.collection.aggregate([
  { $match: { status: "active" } },
  { $group: { _id: "$category", count: { $sum: 1 } } }
])
```
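To make the `$match` + `$group` pipeline above concrete, here is the same computation modeled in plain JavaScript on illustrative sample documents:

```javascript
// Sketch: what the $match + $group pipeline computes, in plain JS.
// The documents are illustrative sample data.
const docs = [
  { status: "active", category: "a" },
  { status: "active", category: "a" },
  { status: "inactive", category: "a" },
  { status: "active", category: "b" },
];

const counts = {};
for (const d of docs.filter((doc) => doc.status === "active")) { // $match
  counts[d.category] = (counts[d.category] || 0) + 1;            // $group + $sum
}

console.log(counts); // { a: 2, b: 1 }
```

Putting `$match` first, as the pipeline does, lets MongoDB use an index and shrink the document stream before grouping.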
Use explain() to analyze query performance:
```javascript
// Analyze query execution
db.collection.find({ field: "value" }).explain("executionStats")
```
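The key `executionStats` fields can also be checked programmatically; a minimal sketch, assuming a stats object shaped like the `executionStats` section of explain output (the values here are illustrative):

```javascript
// Sketch: flag queries whose examined/returned ratio suggests a missing index.
// The stats object mirrors the shape of explain("executionStats") output.
function scanRatio(stats) {
  return stats.totalDocsExamined / Math.max(stats.totalDocsReturned, 1);
}

const stats = { totalDocsExamined: 5000, totalDocsReturned: 50, executionTimeMillis: 120 };
console.log(scanRatio(stats)); // 100 -> far from 1, likely scanning without an index
```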
Look for:

- `totalDocsExamined` vs `totalDocsReturned` ratio (should be close to 1)
- `executionTimeMillis` for overall execution time
- the index used in the winning plan

For large datasets, consider sharding:
```javascript
// Enable sharding for database
sh.enableSharding("databaseName")

// Shard collection
sh.shardCollection("databaseName.collectionName", { shardKey: 1 })

// Choose appropriate shard keys:
// - High cardinality
// - Low frequency
// - Non-monotonic values (avoid timestamp as sole shard key)
```
```javascript
// Example Node.js driver configuration
const { MongoClient } = require("mongodb");

const client = new MongoClient(uri, {
  maxPoolSize: 20,                 // Maintain up to 20 socket connections
  serverSelectionTimeoutMS: 5000,  // Keep trying to send operations for 5 seconds
  socketTimeoutMS: 45000,          // Close sockets after 45 seconds of inactivity
});
```
Enable slow query logging:
```yaml
# In mongod.conf
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 100    # Profile operations slower than 100 ms
  slowOpSampleRate: 1.0     # Fraction of slow operations to profile (0.0-1.0)
```
**Memory pressure.** Symptoms: high RAM consumption, swapping.
Solutions: size the WiredTiger cache (`cacheSizeGB`) so the working set fits in memory, add RAM if it cannot, and drop unused indexes to reduce cache pressure.
**Slow queries.** Symptoms: high response times, timeouts.
Solutions: add indexes that match your query patterns, run `explain("executionStats")` to find collection scans, and enable the slow-operation profiler to identify offenders.
**Disk I/O bottlenecks.** Symptoms: high disk utilization, slow operations.
Solutions: move data files to SSDs, enable block compression (snappy or zstd), and use indexes to avoid full collection scans.
**Connection exhaustion.** Symptoms: connection timeouts, "too many connections" errors.
Solutions: use driver connection pooling with a bounded `maxPoolSize`, raise `net.maxIncomingConnections` only if the host can support it, and audit applications for leaked connections.
```shell
# Using mongoperf for basic disk performance testing
# (mongoperf reads a JSON config from stdin; it was removed in MongoDB 4.4)
echo "{ nThreads: 16, fileSizeMB: 1000, r: true, w: true }" | mongoperf
```
```javascript
// Time a specific operation
var start = new Date()
db.collection.find({ field: "value" }).toArray()
var end = new Date()
print("Query took " + (end - start) + " ms")
```