{"_id":"5aeb5c1debd03900039286b5","project":"5633ebff7e9e880d00af1a53","version":{"_id":"5a8fae0268264c001f20cc00","project":"5633ebff7e9e880d00af1a53","__v":4,"createdAt":"2018-02-23T06:00:34.961Z","releaseDate":"2018-02-23T06:00:34.961Z","categories":["5a8fae0268264c001f20cc01","5a8fae0268264c001f20cc02","5a8fae0368264c001f20cc03","5a8fae0368264c001f20cc04","5a8fae0368264c001f20cc05","5a8fae0368264c001f20cc06","5a8fae0368264c001f20cc07","5a8fae0368264c001f20cc08","5a8fae0368264c001f20cc09","5abaa7eb72d6dc0028a07bf3","5b8ee7842790f8000333f9ba","5b8ee8f244a21a00034b5cd9"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"2.0.0","version":"2.0"},"category":{"_id":"5a8fae0368264c001f20cc07","version":"5a8fae0268264c001f20cc00","project":"5633ebff7e9e880d00af1a53","__v":0,"sync":{"url":"","isSync":false},"reference":false,"createdAt":"2015-11-12T21:22:16.300Z","from_sync":false,"order":8,"slug":"troubleshooting-common-errors","title":"Troubleshooting"},"user":"5637d336aa96490d00a64f81","githubsync":"","__v":0,"parentDoc":null,"updates":[],"next":{"pages":[],"description":""},"createdAt":"2018-05-03T18:59:41.768Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"[block:api-header]\n{\n  \"title\": \"HTTP 400: Bad Request\"\n}\n[/block]\nAn HTTP 400 Bad Request can be caused by a variety of problems. However, it is generally a client-side issue. An HTTP 400 implies the problem is not with Elasticsearch, but rather with the request to Elasticsearch.\n\nFor example, if you have a mapping that expects a number in a particular field, and then index a document with some other data type in that field, Elasticsearch will reject it with an HTTP 400:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"POST /myindex/mytype/1?pretty -d '{\\\"views\\\":0}'\\n{\\n  \\\"_index\\\" : \\\"myindex\\\",\\n  \\\"_type\\\" : \\\"mytype\\\",\\n  \\\"_id\\\" : \\\"1\\\",\\n  \\\"_version\\\" : 1,\\n  \\\"_shards\\\" : {\\n    \\\"total\\\" : 2,\\n    \\\"successful\\\" : 2,\\n    \\\"failed\\\" : 0\\n  },\\n  \\\"created\\\" : true\\n}\\n\\nGET /myindex/_mapping?pretty\\n{\\n  \\\"myindex\\\" : {\\n    \\\"mappings\\\" : {\\n      \\\"mytype\\\" : {\\n        \\\"properties\\\" : {\\n          \\\"views\\\" : {\\n            \\\"type\\\" : \\\"long\\\"\\n          }\\n        }\\n      }\\n    }\\n  }\\n}\\n\\nPOST /myindex/mytype/2?pretty -d '{\\\"views\\\":\\\"zero\\\"}'\\n{\\n  \\\"error\\\" : {\\n    \\\"root_cause\\\" : [ {\\n      \\\"type\\\" : \\\"mapper_parsing_exception\\\",\\n      \\\"reason\\\" : \\\"failed to parse [views]\\\"\\n    } ],\\n    \\\"type\\\" : \\\"mapper_parsing_exception\\\",\\n    \\\"reason\\\" : \\\"failed to parse [views]\\\",\\n    \\\"caused_by\\\" : {\\n      \\\"type\\\" : \\\"number_format_exception\\\",\\n      \\\"reason\\\" : \\\"For input string: \\\\\\\"zero\\\\\\\"\\\"\\n    }\\n  },\\n  \\\"status\\\" : 400\\n}\\n\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nThe way to troubleshoot an HTTP 400 error is to read the response carefully and understand which part of the request is raising the exception. That will help you to identify a root cause and remediate. \n[block:api-header]\n{\n  \"title\": \"HTTP 401: Authorization Required\"\n}\n[/block]\nAll Bonsai clusters are provisioned with a randomly generated set of credentials. 
[block:api-header]
{
  "title": "HTTP 401: Authorization Required"
}
[/block]
All Bonsai clusters are provisioned with a randomly generated set of credentials. These must be supplied _with every request_ in order for the request to be processed. An HTTP 401 response indicates the authentication credentials were missing from the request.

To elaborate on this, all Bonsai cluster URLs follow this format:

```
https://username:password@hostname.region.bonsai.io
```

The username and password in this URL are not the credentials used for logging in to Bonsai, but are randomly generated alphanumeric strings. So your URL might look something like:

```
https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io
```

The credentials `kjh4k3j:lv9pngn9fs` **must** be present with all requests to the cluster in order for them to be processed. This is a security precaution to protect your data (on that note, we strongly recommend keeping your full URL a secret, as anyone with the credentials can view or modify your data).
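A quick way to confirm whether missing credentials are the problem is to repeat the request with and without them. Here is a minimal sketch with curl, using the sample URL from above (substitute your own cluster's host and credentials):

```
# No credentials -- expect an HTTP 401:
curl -i https://my-awesome-cluster.us-east-1.bonsai.io

# Credentials embedded in the URL, or passed via -u -- the request is processed:
curl -i https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io
curl -i -u kjh4k3j:lv9pngn9fs https://my-awesome-cluster.us-east-1.bonsai.io
```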
[block:callout]
{
  "type": "warning",
  "title": "Not All APIs are Available",
  "body": "It's possible to get an HTTP 401 response when attempting to access one of the [Unsupported API Endpoints](doc:bonsai-unsupported-actions). If you're trying to access server-level tools, restart a node, etc., then the request will fail, period. Please read the linked documentation on unavailable APIs to determine whether the failing request is valid."
}
[/block]

[block:callout]
{
  "type": "info",
  "title": "I'm including credentials and still getting a 401!",
  "body": "Please ensure that the credentials are correct. You can find this information on your cluster dashboard. Note that there is a tool for both [Direct users](doc:managing-your-cluster#access) and [Heroku users](doc:bonsai-elasticsearch-dashboard#access) for rotating credentials. So it's entirely possible to be using an outdated set of credentials.\n\nHeroku users should also inspect the contents of the `BONSAI_URL` config variable. This can be found in the Heroku app dashboard, or by running `heroku config:get BONSAI_URL`. The contents of this variable should match the URL shown in the Bonsai cluster dashboard _exactly_.\n\nIf you're sure that the credentials are correct and being supplied, [send us an email](mailto:support@bonsai.io) and we will investigate."
}
[/block]

[block:api-header]
{
  "title": "HTTP 403: Cluster Asleep"
}
[/block]
Hobby clusters are provided free of charge, which is especially helpful for students, hobbyists, developers, self-learners, etc. In order to keep this service free, these clusters must sleep for 8 hours out of every 24. You can read more about this in [Cluster Sleep](doc:sleeping-clusters).

If you need to have the cluster up 24/7, the solution is to upgrade to a paid plan. Even the cheapest paid plans on Bonsai do not have forced sleep. Upgrades take effect immediately. For more information on upgrading your plan, see the documentation for your account type:

* [Changing Your Plan for Direct Users](doc:managing-your-cluster#manage)
* [Changing Your Plan on Heroku](doc:changing-your-plan)
* [Changing Your Plan on Manifold](doc:changing-your-manifold-plan)
[block:api-header]
{
  "title": "HTTP 403: Cluster Read-only"
}
[/block]
This error is raised when an update request is sent to a cluster that has been placed into read-only mode. Clusters can be placed into read-only mode for one of several reasons, but the most common reason is an [overage](doc:metering-on-bonsai#overages).

If you're seeing this error, check on your cluster status and address any overages you see. You can find more information about this in our [Metering on Bonsai](doc:metering-on-bonsai) documentation, specifically [Checking on Cluster Status](doc:metering-on-bonsai#checking-on-cluster-status). If you're not seeing any overages and the cluster is still set to read-only, please [contact us](mailto:support@bonsai.io) and let us know.
[block:api-header]
{
  "title": "HTTP 403: Cluster Disabled"
}
[/block]
This error is raised when a request is sent to a cluster that has been disabled. Clusters can be disabled for one of several reasons, but the most common reason is an [overage](doc:metering-on-bonsai#overages).

If you're seeing this error, check on your cluster status and address any overages you see. You can find more information about this in our [Metering on Bonsai](doc:metering-on-bonsai) documentation, specifically [Checking on Cluster Status](doc:metering-on-bonsai#checking-on-cluster-status). If you're not seeing any overages and the cluster is still disabled, please [contact us](mailto:support@bonsai.io) and let us know.
[block:api-header]
{
  "title": "HTTP 403: Maintenance"
}
[/block]
In some rare cases, the Bonsai Ops Team will put a cluster into maintenance mode. There are a lot of reasons this may happen:

* Load shedding
* Data migrations
* Rolling restarts
* Version upgrades
* ... and more.

Maintenance mode blocks updates to the cluster, but not searches. If you're seeing this message, it will be temporary; it rarely lasts for more than a minute or two. If your cluster has been in a maintenance state for more than a few minutes, please [contact support](mailto:support@bonsai.io).
[block:api-header]
{
  "title": "HTTP 404: Cluster Not Found"
}
[/block]
The "Cluster not found" variant of the HTTP 404 is distinct from the "Index not found" message. This error message indicates that the routing layer is unable to match your URL to a cluster resource. This can be caused by a few things:

* **A typo in the URL.** If you're seeing this in the command line or terminal, then it's possible the hostname is wrong due to a typo or incomplete copy/paste.

* **The cluster has been destroyed.** If you deprovision a cluster, it will be destroyed instantly. Further requests to the old URL will result in an HTTP 404 Cluster Not Found response.

* **The cluster has not yet been provisioned.** There are a couple of cases in which clusters take a few minutes to come online. Namely, provisioning a single-tenant environment may take a few minutes to bring up and configure the server.

If you have confirmed that A) the URL is correct, B) the cluster has not been destroyed, and C) the cluster _should_ be up and running, yet you're still receiving HTTP 404 responses from the cluster, then [send us an email](mailto:support@bonsai.io) and we'll investigate.
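If you suspect a typo in the hostname, one quick sanity check is to request the cluster root and inspect the status line. A sketch using the sample URL from above; substitute your own:

```
curl -sI https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io | head -n 1

# HTTP/1.1 200 OK         -> the URL resolves to a live cluster
# HTTP/1.1 404 Not Found  -> the routing layer has no cluster for this hostname
```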
[block:api-header]
{
  "title": "HTTP 404: Index Not Found"
}
[/block]
This response is distinct from the "Cluster not found" message. This message indicates that you're trying to access an index that is not registered with Elasticsearch. For example:
[block:code]
{
  "codes": [
    {
      "code": "GET /nonexistent_index/_search?pretty\n{\n  \"error\" : {\n    \"root_cause\" : [ {\n      \"type\" : \"index_not_found_exception\",\n      \"reason\" : \"no such index\",\n      \"resource.type\" : \"index_or_alias\",\n      \"resource.id\" : \"nonexistent_index\",\n      \"index\" : \"nonexistent_index\"\n    } ],\n    \"type\" : \"index_not_found_exception\",\n    \"reason\" : \"no such index\",\n    \"resource.type\" : \"index_or_alias\",\n    \"resource.id\" : \"nonexistent_index\",\n    \"index\" : \"nonexistent_index\"\n  },\n  \"status\" : 404\n}",
      "language": "curl"
    }
  ]
}
[/block]
There are a couple of reasons you might see this:

* **Race condition.** You or your app may be trying to access an index _before_ it was created.

* **Typo.** You may have misspelled or only partially copy/pasted the name of the index you're trying to access.

* **The index has been deleted.** Trying to access an index that has been deleted will return an HTTP 404 from Elasticsearch.
[block:callout]
{
  "type": "warning",
  "title": "Important Note on Index Auto-Creation",
  "body": "By default, Elasticsearch has a feature that will automatically create indices. Simply pushing data into a non-existing index will cause that index to be created with mappings inferred from the data. In accordance with Elasticsearch best practices for production applications, we've disabled this feature on Bonsai.\n\nHowever, some popular tools such as [Kibana](doc:using-kibana-with-bonsai) and [Logstash](doc:using-logstash-with-bonsai) do not support explicit index creation, and rely on auto-creation being available. To accommodate these tools, we've whitelisted popular time-series index names such as `logstash*`, `requests*`, `events*`, `.kibana*` and `kibana-int*`."
}
[/block]
The solution to this error message is to confirm that the index name is correct. If so, make sure it is properly created (with all the mappings it needs), and try again.
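For example, explicitly creating an index (with its mappings) before writing to it avoids both the race condition and any reliance on auto-creation. A minimal sketch; the index name and field are illustrative, and on Elasticsearch 7+ the mapping type level (`mytype` here) is removed:

```
curl -s -X PUT "https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io/myindex" \
  -H 'Content-Type: application/json' \
  -d '{
    "mappings": {
      "mytype": {
        "properties": {
          "views": { "type": "long" }
        }
      }
    }
  }'
```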
[block:api-header]
{
  "title": "HTTP 413: Request Too Large"
}
[/block]
This error indicates that the request body to Elasticsearch exceeded the limits of the Bonsai proxy layer. This can be caused by a few things:

* **A request larger than 40MB.** Elasticsearch's Query DSL can be fairly verbose JSON, particularly when queries are complex. The 40MB cap is meant to be a safety mechanism to prevent runaway queries from overwhelming the routing layer, while still being an order of magnitude higher than 99.9% of request bodies.

* **Indexing too many documents at once.** The Elasticsearch _bulk API allows applications to index groups of documents in a single request. Sending a single batch of millions of documents could easily trigger the HTTP 413 message.

* **Lots of request headers.** Metadata about a request can be passed to Elasticsearch in the form of request headers. Bonsai allows up to 16KB for request headers; this should be enough for whatever CORS and content-type specification needs to occur. Note that the TLS and authentication headers in the request are not counted towards this limit.

* **Indexing large files.** When Elasticsearch indexes a rich text file like a PDF or Word document, it encodes the file as a Base64 string for transit. It's possible for this string to be longer than 40MB, which would trigger the HTTP 413 error.

If you're seeing this error, check that your queries are sane and not 40MB of flattened JSON. Ensure you're not explicitly sending lots of headers to your cluster.

If you're seeing this message during bulk indexing, then decrease your batch sizes by half and try again. Repeat until you can reindex without receiving an HTTP 413 (one way to do this from the shell is sketched at the end of this section).

Finally, if it is indeed a large file causing the problem, then the odds are good that metadata and media in the file are resulting in its huge size. You may need to use a file editing tool to remove the media (images, movies, sounds) and possibly the metadata from the file and then try again. If the files are user-submitted, consider capping the file size your users are able to upload.
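Here is one hedged way to split a large newline-delimited bulk file into smaller requests from the shell. It assumes a file named `bulk.ndjson` in which every action line is immediately followed by its source line, so an even line count keeps the pairs intact:

```
# Split into 2,000-line chunks (1,000 action/source pairs each):
split -l 2000 bulk.ndjson chunk_

# Post each chunk separately; halve the chunk size and retry if any request returns a 413:
for f in chunk_*; do
  curl -s -X POST "https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io/_bulk" \
    -H 'Content-Type: application/x-ndjson' \
    --data-binary "@$f"
done
```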
[block:api-header]
{
  "title": "HTTP 429: Too Many Requests"
}
[/block]
The proximate cause of HTTP 429 errors is that an app has exceeded its [concurrent connection](doc:metering-on-bonsai#concurrent-connections) limits for too long. This is often due to a spike in usage -- perhaps a new feature has been deployed, or a service is growing quickly, or maybe there is a regression in the code.

It can also happen when reindexing (engineers want to push all the data into Elasticsearch as quickly as possible, which means lots of parallelization... right?). Unusually expensive requests, or other unusual latency and performance degradation within Elasticsearch itself, can also cause unexpected queueing and result in 429 errors.

In most cases, 429 errors can be solved by upgrading to a plan with higher connection limits; new connection limits are applied immediately. If that's not viable, then you may need to perform additional batching of your updates (e.g. queueing and bulk updating) or searches (e.g. with the multi-search API, as sketched below). For more information on upgrading your plan, see the documentation for your account type:

* [Changing Your Plan for Direct Users](doc:managing-your-cluster#manage)
* [Changing Your Plan on Heroku](doc:changing-your-plan)
* [Changing Your Plan on Manifold](doc:changing-your-manifold-plan)

Finally, we have some suggestions for [optimizing your requests](doc:connection-management) that can help point you in the right direction.
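As an illustration of search batching, Elasticsearch's multi-search API combines several queries into a single request, so they consume one connection rather than many. A minimal sketch with two illustrative queries (note that `_msearch` bodies are newline-delimited header/body pairs and must end with a newline):

```
curl -s -X POST "https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io/_msearch" \
  -H 'Content-Type: application/x-ndjson' \
  --data-binary $'{"index":"myindex"}\n{"query":{"match":{"title":"bonsai"}}}\n{"index":"myindex"}\n{"query":{"match":{"title":"elasticsearch"}}}\n'
```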
[block:api-header]
{
  "title": "HTTP 500: Internal Server Error"
}
[/block]
The HTTP 500 Internal Server Error is both rare and often difficult to reproduce. It generally indicates a problem with a server _somewhere_. It may be Elasticsearch, but it could also be a node in the load balancer or proxy. A process restarting is typically the root cause, which means it will often resolve itself within a few seconds.

The easiest solution is to simply catch and retry HTTP 500's. If you've seen this several times in a short period of time, please [send us an email](mailto:support@bonsai.io) and we will investigate.
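A minimal catch-and-retry sketch from the shell, with exponential backoff. The endpoint and attempt count are illustrative, and the same pattern applies to the HTTP 502 and 503 responses described below:

```
for attempt in 1 2 3 4; do
  status=$(curl -s -o /dev/null -w '%{http_code}' \
    "https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io/myindex/_search")
  [ "$status" -lt 500 ] && break   # stop on success or a non-retryable client error
  sleep $((2 ** attempt))          # back off: 2s, 4s, 8s, 16s
done
echo "Final status: $status"
```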
[block:api-header]
{
  "title": "HTTP 501: Not Implemented"
}
[/block]
The HTTP 501 Not Implemented error means that the requested feature is not available on Bonsai. Elasticsearch offers a handful of API endpoints that are not exposed on Bonsai for security and performance reasons. You can read more about these in the [Unsupported API Endpoints](doc:bonsai-unsupported-actions) documentation.
[block:api-header]
{
  "title": "HTTP 502: Bad Gateway"
}
[/block]
An HTTP 502: Bad Gateway error is rare, but when it does happen it is commonly attributable to the load balancer. The short explanation is that there are a few cases where the proxy software hits an OOM error and is restarted. This causes the load balancer to send back an HTTP 502. The problem is transient and, because of its rarity, very hard to replicate.

The easiest solution is to simply catch and retry HTTP 502's. If you've seen this several times in a short period of time, please [send us an email](mailto:support@bonsai.io) and we will investigate.
[block:api-header]
{
  "title": "HTTP 503: Service Unavailable"
}
[/block]
An HTTP 503: Service Unavailable error indicates a problem with a server somewhere in the network. It is most likely related to a node restart affecting your primary shard(s) before a replica can be promoted.

The easiest solution is to simply catch and retry HTTP 503's. If you've seen this several times in a short period of time, please [send us an email](mailto:support@bonsai.io) and we will investigate.
[block:api-header]
{
  "title": "HTTP 504: Gateway Timeout"
}
[/block]
The HTTP 504 Gateway Timeout error is returned when a request takes longer than 60 seconds to process, regardless of whether the process is waiting on Elasticsearch or sitting in a connection queue. This can sometimes be due to network issues, and sometimes it can occur when Elasticsearch is IO-bound and unable to process requests quickly. Complex requests are more likely to receive an HTTP 504 error in these cases.

For more information on timeouts, please see our recommendations on [Connection Management](doc:connection-management).
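If long-running queries are the cause, Elasticsearch's `timeout` search parameter can help keep individual requests well under the 60-second gateway limit. A sketch with an illustrative 10-second budget; note that Elasticsearch treats this as a best-effort limit and may return partial results with `"timed_out": true`:

```
curl -s "https://kjh4k3j:lv9pngn9fs@my-awesome-cluster.us-east-1.bonsai.io/myindex/_search" \
  -H 'Content-Type: application/json' \
  -d '{
    "timeout": "10s",
    "query": { "match_all": {} }
  }'
```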