
Sizing your Business Cluster


The new Bonsai Business Plans come with more options than we have ever provided before. Choosing may seem intimidating, but a few simple guidelines make it easy. Bonsai Business Plans also don’t require annual contracts, so if your index or traffic changes periodically, you can change plans whenever you wish.

Let’s start by looking at the two plan types.

[block:api-header]
{
  "title": "Choosing a plan type"
}
[/block]
Business Plans come in two main types: Compute and Capacity. The difference is inherent in their names: if your use case requires a lot of data written to disk but relatively little traffic (perhaps only a few requests per hour), Capacity gives you more bang for your buck in raw disk. By contrast, Compute is designed for those who need a setup that can withstand high traffic load or query complexity.

[block:api-header]
{
  "title": "Planning for disk capacity"
}
[/block]
When you deploy an HA Elasticsearch cluster, you must provision enough disk for three things:

1. Your primary data
2. Your replica data
3. The normal maintenance routines performed by Lucene, the underlying search engine behind Elasticsearch

Nobody likes using a search engine that doesn’t work. Failing to account for any of these factors will result in performance degradation, a.k.a. the infamous yellow or red cluster. 😫

How much primary data can you load into Elasticsearch while still maintaining High Availability? This simple formula will help you calculate:

((number of nodes - 1) * the capacity of a single node) * 0.8 = the amount of data that can be loaded in your cluster

Let’s put this in a concrete example. A Business Capacity Large plan has a raw capacity of 150GB, with each of the three nodes contributing 50GB of disk.
So the concrete numbers would be:

number of nodes = 3
per node capacity = 50GB
total raw capacity = 3 * 50GB = 150GB
usable data = ((3 - 1) * 50GB) * 0.8 = 80GB

This means that if you have a total raw capacity of 150GB, you should plan to use only 80GB of it for your search data. At first this seems like a huge gap between resources available and resources usable (only about 53% of raw capacity!), but it’s a necessary plan to prevent getting paged at 3AM with red status clusters, poorly performing queries, and/or data loss.

[block:api-header]
{
  "title": "This formula, explained"
}
[/block]
Planning is key with distributed systems like Elasticsearch. When nodes inevitably go offline, it’s important to have replication in place for backup. The formula removes one node from your calculation (number of nodes - 1) so that your cluster will not lose any data: the remaining nodes still have enough capacity for your primary data and a replica, which keeps the index status green even when a node goes offline for maintenance. Multiplying the total by 0.8 buffers your capacity by 20%, which accounts for Lucene’s maintenance routines.

[block:api-header]
{
  "title": "Planning for computational requests and traffic"
}
[/block]
Now that we’ve covered raw capacity planning, let’s talk about handling traffic. Traffic and query computation strength map to the size of the cluster: larger clusters can handle a higher number of requests. You should consider three different numbers here:

1. How many search requests will you be doing in any given minute?
2. How many aggregations will you be doing in any given minute?
3. How many bulk updates will you perform each minute?

To ensure optimal performance, all three numbers should fit under the values in the table below.
[block:parameters]
{
  "data": {
    "h-0": "Aggregation Rate",
    "h-1": "Bulk Insert Rate",
    "h-2": "Search Rate",
    "h-3": "Ideal Plan/Size",
    "0-0": "< 25 / minute",
    "0-1": "< 250 / minute",
    "0-2": "< 500 / minute",
    "0-3": "Large",
    "1-0": "< 50 / minute",
    "1-1": "< 500 / minute",
    "1-2": "< 1000 / minute",
    "1-3": "XLarge",
    "2-0": "< 100 / minute",
    "2-1": "< 1000 / minute",
    "2-2": "< 2000 / minute",
    "2-3": "2XLarge"
  },
  "cols": 4,
  "rows": 3
}
[/block]
As a note: these numbers are conservative by design. Once you are up and running, you’ll be able to use the metrics panel in the Bonsai application to see how your searches are performing in real time, and to make sizing decisions, up or down, based on real data.

For those of you who want to dig even deeper, you can read our very thorough guide to [capacity planning](https://docs.bonsai.io/docs/capacity-planning) as well.

[block:api-header]
{
  "title": "Questions?"
}
[/block]
Our team has provisioned search engines that handle billions of requests each month. If you are still unsure about which plan is right for you, please [contact us](http://bonsai.io/contact) for a personalized consultation.
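As an aside, the sizing arithmetic above can be sketched as a quick back-of-the-envelope helper. This is an illustrative sketch, not a Bonsai tool: the function and dictionary names are made up here, and the thresholds are simply transcribed from the formula and table in this article.

```python
def usable_capacity_gb(nodes: int, per_node_gb: float, buffer: float = 0.8) -> float:
    """((number of nodes - 1) * per-node capacity) * 0.8, per the formula above."""
    return (nodes - 1) * per_node_gb * buffer

# Per-minute ceilings from the table above: (aggregations, bulk inserts, searches).
# Plan names and limits are copied from the article, not fetched from any API.
PLAN_LIMITS = {
    "Large":   (25, 250, 500),
    "XLarge":  (50, 500, 1000),
    "2XLarge": (100, 1000, 2000),
}

def smallest_fitting_plan(agg_rate: int, bulk_rate: int, search_rate: int):
    """Return the first plan whose ceilings cover all three per-minute rates."""
    for plan, (agg, bulk, search) in PLAN_LIMITS.items():
        if agg_rate < agg and bulk_rate < bulk and search_rate < search:
            return plan
    return None  # beyond 2XLarge limits; time for a consultation

# Business Capacity Large: 3 nodes x 50GB raw disk
print(usable_capacity_gb(3, 50))          # 80.0
print(smallest_fitting_plan(30, 300, 600))  # XLarge
```

Remember these thresholds are conservative by design; real metrics from the Bonsai dashboard should drive any final sizing decision.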