One of the best features of Elasticsearch is that it comes with support for clustering built in. However, because of this, when you set up an index, you're faced with an often intimidating question: just how many shards and replicas should my index have? Unfortunately, there's no one definitive answer to this question, but I'll give some guidance that should make finding the right answer for your particular setup a bit easier.
What is a shard, anyways?
Under the hood, Elasticsearch uses Lucene. Because of the way Lucene indexes work, a single index can't actually be split up and distributed across the nodes in a cluster. Elasticsearch skirts around this limitation by creating multiple Lucene indexes, or shards. Put simply, a shard is a Lucene index.
This has an important effect on performance. Since the Elasticsearch index is distributed across multiple Lucene indexes, in order to run a complete query, Elasticsearch must first query each Lucene index, or shard, individually, combine the results, and finally score the overall result. That means using any number of shards greater than 1 will automatically incur a performance hit. However, it's a tradeoff for the gain of being able to distribute the index across multiple nodes. The performance hit can also be mitigated somewhat (more on that later).
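That scatter/gather pattern can be sketched in a few lines of Python. This is a toy model, not Elasticsearch's actual implementation: each shard returns its own top-scoring hits, and a coordinating step merges them into one globally ranked result. The document IDs and scores here are made up for illustration.

```python
import heapq

# Toy model: each "shard" holds (doc_id, score) pairs, sorted by descending score.
shard_1 = [("doc-a", 9.1), ("doc-b", 4.2)]
shard_2 = [("doc-c", 7.5), ("doc-d", 6.8)]
shard_3 = [("doc-e", 8.0), ("doc-f", 1.3)]

def search(shards, size):
    """Scatter the query to every shard, then gather and merge by score."""
    per_shard_hits = [shard[:size] for shard in shards]       # scatter phase
    merged = heapq.merge(*per_shard_hits,
                         key=lambda hit: hit[1], reverse=True)
    return list(merged)[:size]                                # gather phase: global top-N

print(search([shard_1, shard_2, shard_3], size=3))
# → [('doc-a', 9.1), ('doc-e', 8.0), ('doc-c', 7.5)]
```

Notice that every shard does work for every query, even though only a fraction of its hits survive the merge. That wasted work is the per-query cost of sharding.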
How many shards should my index have?
To answer that, we need to talk about nodes. A node is simply an Elasticsearch server. A cluster is composed of one or more nodes. How many nodes you should have is a separate question, but that number is directly related to the number of shards your index should use.
Generally speaking, you'll get optimal performance by using the same number of shards as nodes. In a three node setup, then, your index should have three shards. However, Elasticsearch indexes have an important limitation: they cannot be "resharded" (have their number of shards changed) without being reindexed. Should you decide later that you want your three node setup to have four nodes instead, and you only used three shards, you'll have to reindex in order to add that additional shard.
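Because the shard count is fixed at creation time, it's specified in the settings body you send when the index is created. A sketch of that body follows; `number_of_shards` and `number_of_replicas` are the real Elasticsearch setting names, while the index name in the comment is made up:

```python
import json

# Settings for a three node cluster: one shard per node.
index_settings = {
    "settings": {
        "number_of_shards": 3,    # fixed at creation; changing it means reindexing
        "number_of_replicas": 1,  # a live setting; can be changed later
    }
}

# This would be the body of, e.g., PUT /my-index on the cluster's HTTP API.
print(json.dumps(index_settings, indent=2))
```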
For some that may not be a problem. For others, it's huge. Consider StackOverflow, which uses Elasticsearch to power their search. At the time of writing, their Elasticsearch index is currently 203 GB. Reindexing 203 GB of data is a very big deal. To allow future horizontal scale, without reindexing, you'd actually need to use more shards than the number of nodes, which means that each node would house multiple shards.
Having multiple shards on a single node introduces another set of performance considerations. As discussed earlier, in order to run a complete query, Elasticsearch must run the query on each shard individually. In a one-shard-per-node setup, all of those queries can run in parallel, because there's only one shard on each node. With multiple shards on a node, the queries for those shards have to run serially. The effect of this can be mitigated if the drive the shards are stored on is relatively fast, like a 15,000 RPM enterprise-class drive or, even better, an SSD. Still, it's something to be aware of. Even on nodes with SSDs, I wouldn't recommend more than two shards per node. A three node setup with six shards (two per node) would let you horizontally scale up to six nodes in the future, which should be more than sufficient for all but the most extreme scenarios.
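The headroom argument comes down to simple arithmetic: a shard lives on exactly one node, so the shard count is the ceiling on how many nodes can hold primary data. A quick sketch of how six shards spread out as a cluster grows, assuming Elasticsearch balances them evenly (which is its default behavior):

```python
def shards_per_node(total_shards, nodes):
    """Shards on the busiest node, assuming an even distribution."""
    return -(-total_shards // nodes)  # ceiling division

# A six-shard index across a growing cluster:
for nodes in (3, 4, 6, 7):
    print(nodes, "nodes ->", shards_per_node(6, nodes), "shard(s) per node")
# At 6 nodes, every shard finally gets a node to itself;
# a 7th node can hold no primary data and adds nothing.
```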
What about replicas?
Replicas are just shards that aren't actively used. They're copies of your active shards to allow better availability should one of the nodes in your cluster fall down. Here, it's all about failover, so the number of replicas you should have is related to the number of shards you have and the number of nodes those shards reside on. Unlike shards, though, the number of replicas can be changed at a later point, so it's not as important that you get it right at the start.
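Because the replica count is a live setting, changing it is just a settings update against the running index — no reindexing involved. A sketch of the request body (the `number_of_replicas` setting and the `_settings` endpoint named in the comment are Elasticsearch's real ones; the index name is made up):

```python
import json

# Body of PUT /my-index/_settings to bump replicas from 1 to 2
# on a live index.
update = {"index": {"number_of_replicas": 2}}
print(json.dumps(update))
```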
Choosing the right number of replicas is probably best explained with an example. Consider a three node setup housing an index with three shards, so that one active shard for the index resides on each node. Now, what happens if a node falls down? As you can see in the image below, Shard 3 is missing. That's where replicas come in.
So what happens if we add one replica for each shard? Now we can lose one node and still have a complete index across the two remaining nodes. As the image below shows, Elasticsearch notices that it's missing an active Shard 3, so it activates Replica 3, promoting it to Shard 3. Since there are now only two nodes, a new Replica 3 is created and added to Node 1, and a new Replica 1 is created and added to Node 2. At this point, all active shards are present and each has a single replica. Should Node 2 then fall down as well, Replicas 2 and 3 on Node 1 are activated, and new replicas for each active shard are recreated on Node 1.
While the recovery seems pretty graceful, note that enough time passed between the failures of Node 3 and Node 2 for Elasticsearch to recreate all the necessary replicas. If Nodes 2 and 3 had dropped simultaneously, you'd still be in trouble, because Node 1 did not initially hold a complete set of replicas. Giving each shard two replicas would put a full set on every node, ready to work with should one or more of the other nodes fall down.
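The rule this example illustrates is simple: to survive k nodes failing at the same time, every shard needs at least k replicas, because Elasticsearch never places two copies of the same shard on the same node, so losing k nodes destroys at most k copies of any shard. A minimal sketch of that rule:

```python
def survives_simultaneous_failures(replicas_per_shard, failed_nodes):
    """True if at least one copy of every shard is guaranteed to survive.

    Each shard exists as 1 primary + N replicas, no two of which share
    a node, so `failed_nodes` failures destroy at most that many copies.
    """
    return replicas_per_shard + 1 > failed_nodes

print(survives_simultaneous_failures(1, 1))  # True: one replica covers one lost node
print(survives_simultaneous_failures(1, 2))  # False: the scenario described above
print(survives_simultaneous_failures(2, 2))  # True: two replicas cover two lost nodes
```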
While three nodes and two replicas seems like the safest bet here, bear in mind that every replica is a full copy of its shard, so with two replicas per shard you're storing the index three times over. Each individual replica is roughly a third of the size of the index in this scenario, which is significant if your index approaches the size of something like StackOverflow's 203 GB. Ultimately, storage is relatively cheap and a small price to pay for the availability of your index, but it's still a factor to consider.
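The storage cost is easy to quantify: every replica of every shard is a full copy of that shard, so total disk usage is roughly index size × (1 + replicas per shard). Running the StackOverflow-scale number from above through that formula:

```python
def total_storage_gb(index_size_gb, replicas_per_shard):
    """Rough disk footprint: the primaries plus one full copy per replica."""
    return index_size_gb * (1 + replicas_per_shard)

print(total_storage_gb(203, 1))  # 406 GB with one replica per shard
print(total_storage_gb(203, 2))  # 609 GB with two replicas per shard
```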