{"_id":"56df78581fc20d190018d3f4","version":{"_id":"5633ec007e9e880d00af1a56","project":"5633ebff7e9e880d00af1a53","__v":15,"createdAt":"2015-10-30T22:15:28.105Z","releaseDate":"2015-10-30T22:15:28.105Z","categories":["5633ec007e9e880d00af1a57","5633f072737ea01700ea329d","5637a37d0704070d00f06cf4","5637cf4e7ca5de0d00286aeb","564503082c74cf1900da48b4","564503cb7f1fff210078e70a","567af26cb56bac0d0019d87d","567afeb8802b2b17005ddea0","567aff47802b2b17005ddea1","567b0005802b2b17005ddea3","568adfffcbd4ca0d00aebf7e","56ba80078cf7c9210009673e","574d127f6f075519007da3d0","574fde60aef76a0e00840927","57a22ba6cd51b22d00f623a0"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"category":{"_id":"574d127f6f075519007da3d0","project":"5633ebff7e9e880d00af1a53","__v":0,"version":"5633ec007e9e880d00af1a56","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-05-31T04:26:39.925Z","from_sync":false,"order":3,"slug":"versions","title":"Versions"},"parentDoc":null,"__v":7,"user":"5637d336aa96490d00a64f81","project":"5633ebff7e9e880d00af1a53","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-03-09T01:11:52.880Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"Your privacy and safety are of the utmost importance to us. Within our shared environments, we restrict access to certain endpoints for security reasons. This page catalogs which of those endpoints are not supported with a brief description of why. While having many of the following endpoints can be helpful for custom and dedicated tenant builds, the majority of search builders don't need them. If, however, you find yourself stuck without one of these available, please [email us](mailto:support:::at:::bonsai.io) and we'll be happy to help.\n\n\n[block:api-header]\n{\n  \"title\": \"_all and wildcard destructive actions\"\n}\n[/block]\nWildcard delete actions are usually for clusters with a large number of indices, but we find that the majority of use-cases don't have a high number of indices. Moreover, removing the ability to sweepingly delete everything causes us to slow down and identify exactly what we're deleting, reducing the risk of accidental and permanent data loss. Formerly we enabled wildcard and _all destructive actions on shared tier clusters. When we started getting an increasingly large number of threads of distressed developers that accidentally deleted their entire clusters, we decided to remove wildcards. \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"DELETE /*\\nDELETE /_all\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Node hot threads\"\n}\n[/block]\nA Java thread that uses lots of CPU and runs for an unusually long period of time is known as a _hot thread_. Elasticsearch provides an API to get the current hot threads on each node in the cluster. This information can be useful in forming a holistic picture of potential problems within the cluster.  
We don't support these endpoints on our shared tier to ensure user activity isn't exposed to others.\n\nIf you think there is a problem with your cluster that you need help troubleshooting, please [email support](mailto:support@bonsai.io).\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    GET /_cluster/nodes/hotthreads\\n    GET /_cluster/nodes/hot_threads\\n    GET /_cluster/nodes/{nodeId}/hotthreads\\n    GET /_cluster/nodes/{nodeId}/hot_threads\\n    GET /_nodes/hotthreads\\n    GET /_nodes/hot_threads\\n    GET /_nodes/{nodeId}/hotthreads\\n    GET /_nodes/{nodeId}/hot_threads\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Node Shutdown & Restart\"\n}\n[/block]\nElasticsearch provides an API for shutting down and restarting nodes. This functionality is unsupported on our shared tier to prevent a user from shutting down a node or set of nodes that may be shared resources with another user. That action would have an adverse affect on other users which is why it is unsupported.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    POST /_cluster/nodes/_restart\\n    POST /_cluster/nodes/_shutdown\\n    POST /_cluster/nodes/{nodeId}/_restart\\n    POST /_cluster/nodes/{nodeId}/_shutdown\\n    POST /_shutdown\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Snapshots\"\n}\n[/block]\nThe Snapshot API allows users to create and restore snapshots of indices and cluster data. It's useful as a backup tool and for recovering from problems or data loss. In practice, there is some fragility with the underlying data copying methods that can cause data corruption in certain situations. On Bonsai, we're already taking regular snapshots and monitoring cluster states for problems, and we block this endpoint to reduce the likelihood of data loss. \n\nIf you feel that you need a snapshot taken/restored, please reach out to our [support team](mailto:support@bonsai.io). For our single tenant plans, we can enable this feature.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    GET    /_snapshot/_status\\n    DELETE /_snapshot/{repository}\\n    POST   /_snapshot/{repository}\\n    PUT    /_snapshot/{repository}\\n    GET    /_snapshot/{repository}/_status\\n    DELETE /_snapshot/{repository}/{snapshot}\\n    GET    /_snapshot/{repository}/{snapshot}\\n    POST   /_snapshot/{repository}/{snapshot}\\n    PUT    /_snapshot/{repository}/{snapshot}\\n    POST   /_snapshot/{repository}/{snapshot}/_create\\n    PUT    /_snapshot/{repository}/{snapshot}/_create\\n    POST   /_snapshot/{repository}/{snapshot}/_restore\\n    GET    /_snapshot/{repository}/{snapshot}/_status\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Reindex\"\n}\n[/block]\nThe most basic form of _reindex copies documents from one index to another. For example, this will copy documents from an index called `books` into another index, like `new_books`. It's still an experimental tool, and not supported on Bonsai at this time. One workaround is to set up an indexing script that indexes the same documents from your primary data store into the new index.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    POST /_reindex\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nIn the place of `POST _reindex` you can use the scan & scroll API. 
For example, I could run a `GET /my_index/_search?search_type=scan&scroll=1m`, then `POST` the retrieved docs into a new index. In actuality, this is fairly similar to what the Reindex API is itself doing under the hood (see: [elasticsearch/index/reindex/ReindexAction.java](https://github.com/elastic/elasticsearch/blob/master/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexAction.java)), so while it may be a little simpler to use, you're not necessarily missing out on any core functionality.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Cluster Shard Reroute\"\n}\n[/block]\nElasticsearch provides an API to move shards around between nodes within a cluster. We don't support this functionality on our shared plans for a few reasons. For one it interferes with our cluster management tooling, and there is a possibility for one or more users to allocate shards in a way that overloads a node.\n\nIf you need fine-grain control over the shard allocation within a cluster, please [reach out to us](mailto:support@bonsai.io) and we can discuss your use case and look at whether single tenancy would be a good fit for you.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    POST /_cluster/reroute\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Cluster settings\"\n}\n[/block]\nElasticsearch provides an API to apply cluster-wide settings. We don't support this in our shared environment for safety reasons. In an environment where system resources are shared, this API would affect all users simultaneously. So one user could affect the behavior of everyone's cluster in ways that those users may not want. Instead, we block this API and remain opinionated about cluster settings.\n\nIf you need to change the system settings for your cluster, you'll need to be in a single tenant environment. [Reach out to us](mailto:support@bonsai.io) and let's talk through it.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    PUT /_cluster/settings\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Index Optimize\"\n}\n[/block]\nLucene (the underlying search software that powers Elasticsearch) stores data across an array of files known as segment files. As new data is created, Lucene creates new files. The overhead of managing multiple files and performing binary search across the segments is trivial compared to the overhead of constantly updating and re-sorting a small number of files.\n\nLucene will periodically merge the segment files when certain criteria are met; this speeds up search by reducing the amount of data to be parsed. Elasticsearch provides an API to force this process to happen on demand, which is useful in certain situations (like when data is filling up a node).\n\nWe don't support this on our shared clusters for a couple of reasons. For one, it is extremely expensive in terms of system resources, and there would be nothing stopping a user from optimizing their indices on every update or every 60s, etc. A single user could adversely impact every user sharing resources. 
Optimizing an index is also a blocking operation, which means it could interfere with our internal cluster management tools.\n\nIf you feel like your cluster needs this ability, [hit us up](mailto:support@bonsai.io) and let's chat about it.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    GET  /_optimize\\n    POST /_optimize\\n    GET  /{index}/_optimize\\n    POST /{index}/_optimize\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Index Search Warmers\"\n}\n[/block]\nElasticsearch provides a mechanism to speed up searches prior to being run. It does this by basically pre-populating caches via automatically running search requests. This is called \"warming\" the data, and it's typically done against searches that require heavy system resources.\n\nWe don't support this on shared clusters for stability reasons. Essentially there isn't a great way to throttle the impact of an arbitrary number of warmers. There is a possibility that a user could overwhelm the system resources by creating a large number of \"heavy\" warmers (aggregations, sorts on large lists, etc). It's also somewhat of an anti-pattern in a multitenant environment.\n\nIf this is something critical to your app, you would need to be on a dedicated cluster. Please [reach out to us](mailto:support@bonsai.io) if you have any further questions on this.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    POST /_warmer/{name}\\n    PUT  /_warmer/{name}\\n    POST /_warmers/{name}\\n    PUT  /_warmers/{name}\\n    POST /{index}/_warmer/{name}\\n    PUT  /{index}/_warmer/{name}\\n    POST /{index}/_warmers/{name}\\n    PUT  /{index}/_warmers/{name}\\n    POST /{index}/{type}/_warmer/{name}\\n    PUT  /{index}/{type}/_warmer/{name}\\n    POST /{index}/{type}/_warmers/{name}\\n    PUT  /{index}/{type}/_warmers/{name}\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Scripts\"\n}\n[/block]\nElasticsearch provides an API for adding and modifying static scripts to a cluster. We don't support this on shared clusters, both for security reasons as well as the fact that we're opinionated about scripting language. Due to some pretty serious vulnerabilities in the Groovy scripting language, we default to Lucene Expressions.\n\nIf you need to configure static scripts or using a language other than Expressions, [let us know](mailto:support@bonsai.io) and we can get you set up with single tenancy.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"    DELETE /_scripts/{lang}/{id}\\n    GET    /_scripts/{lang}/{id}\\n    POST   /_scripts/{lang}/{id}\\n    PUT    /_scripts/{lang}/{id}\\n    POST   /_scripts/{lang}/{id}/_create\\n    PUT    /_scripts/{lang}/{id}/_create\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]","excerpt":"","slug":"bonsai-unsupported-actions","type":"basic","title":"Supported API Endpoints"}

# Supported API Endpoints


Your privacy and safety are of the utmost importance to us. Within our shared environments, we restrict access to certain endpoints for security reasons. This page catalogs which of those endpoints are not supported, with a brief description of why. While many of the following endpoints can be helpful for custom and dedicated tenant builds, the majority of search builders don't need them. If you find yourself stuck without one of them, please [email us](mailto:support@bonsai.io) and we'll be happy to help.

## `_all` and wildcard destructive actions

Wildcard delete actions are usually intended for clusters with a large number of indices, but we find that the majority of use cases don't have many indices. Moreover, removing the ability to sweepingly delete everything forces everyone to slow down and identify exactly what is being deleted, which reduces the risk of accidental and permanent data loss. We formerly enabled wildcard and `_all` destructive actions on shared tier clusters; when we started getting an increasing number of support threads from distressed developers who had accidentally deleted their entire clusters, we decided to remove wildcards.

```
DELETE /*
DELETE /_all
```
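If you need to remove several indices at once, you can still name them explicitly; Elasticsearch accepts a comma-separated list of index names. Here is a minimal sketch using Python's `requests` library; the cluster URL and index names are hypothetical placeholders.

```python
import requests

# Hypothetical Bonsai cluster URL and index names -- substitute your own.
BONSAI_URL = "https://user:password@my-cluster-1234.us-east-1.bonsaisearch.net"

# Delete specific indices by naming them explicitly instead of using a wildcard.
resp = requests.delete(f"{BONSAI_URL}/old_books,old_magazines")
print(resp.status_code, resp.json())
```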
## Node hot threads

A Java thread that uses a lot of CPU and runs for an unusually long period of time is known as a _hot thread_. Elasticsearch provides an API to get the current hot threads on each node in the cluster. This information can be useful in forming a holistic picture of potential problems within the cluster. We don't support these endpoints on our shared tier to ensure that one user's activity isn't exposed to others.

If you think there is a problem with your cluster that you need help troubleshooting, please [email support](mailto:support@bonsai.io).

```
GET /_cluster/nodes/hotthreads
GET /_cluster/nodes/hot_threads
GET /_cluster/nodes/{nodeId}/hotthreads
GET /_cluster/nodes/{nodeId}/hot_threads
GET /_nodes/hotthreads
GET /_nodes/hot_threads
GET /_nodes/{nodeId}/hotthreads
GET /_nodes/{nodeId}/hot_threads
```

## Node Shutdown & Restart

Elasticsearch provides an API for shutting down and restarting nodes. This functionality is unsupported on our shared tier to prevent a user from shutting down a node, or set of nodes, that other users share; doing so would have an adverse effect on those users.

```
POST /_cluster/nodes/_restart
POST /_cluster/nodes/_shutdown
POST /_cluster/nodes/{nodeId}/_restart
POST /_cluster/nodes/{nodeId}/_shutdown
POST /_shutdown
```

## Snapshots

The Snapshot API allows users to create and restore snapshots of indices and cluster data. It's useful as a backup tool and for recovering from problems or data loss. In practice, there is some fragility in the underlying data copying methods that can cause data corruption in certain situations. On Bonsai, we're already taking regular snapshots and monitoring cluster state for problems, so we block this endpoint to reduce the likelihood of data loss.

If you feel that you need a snapshot taken or restored, please reach out to our [support team](mailto:support@bonsai.io). For our single tenant plans, we can enable this feature.

```
GET    /_snapshot/_status
DELETE /_snapshot/{repository}
POST   /_snapshot/{repository}
PUT    /_snapshot/{repository}
GET    /_snapshot/{repository}/_status
DELETE /_snapshot/{repository}/{snapshot}
GET    /_snapshot/{repository}/{snapshot}
POST   /_snapshot/{repository}/{snapshot}
PUT    /_snapshot/{repository}/{snapshot}
POST   /_snapshot/{repository}/{snapshot}/_create
PUT    /_snapshot/{repository}/{snapshot}/_create
POST   /_snapshot/{repository}/{snapshot}/_restore
GET    /_snapshot/{repository}/{snapshot}/_status
```

## Reindex

The most basic form of `_reindex` copies documents from one index to another; for example, from an index called `books` into a new index like `new_books`. It's still an experimental tool and is not supported on Bonsai at this time. One workaround is to set up an indexing script that indexes the same documents from your primary data store into the new index.

```
POST /_reindex
```

In place of `POST /_reindex`, you can use the scan & scroll API. For example, you could run `GET /my_index/_search?search_type=scan&scroll=1m`, then `POST` the retrieved docs into a new index. This is fairly similar to what the Reindex API itself does under the hood (see [elasticsearch/index/reindex/ReindexAction.java](https://github.com/elastic/elasticsearch/blob/master/modules/reindex/src/main/java/org/elasticsearch/index/reindex/ReindexAction.java)), so while `_reindex` may be a little simpler to use, you're not missing out on any core functionality.
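To make that workaround concrete, here is a minimal sketch of a scan & scroll reindex using Python's `requests` library. It assumes the Elasticsearch 1.x-style `search_type=scan` syntax shown above; the cluster URL and the `my_index`/`my_index_v2` index names are hypothetical placeholders.

```python
import json
import requests

# Hypothetical cluster URL and index names -- substitute your own.
BONSAI_URL = "https://user:password@my-cluster-1234.us-east-1.bonsaisearch.net"
SOURCE, DEST = "my_index", "my_index_v2"

# 1. Open a scan & scroll cursor over the source index.
resp = requests.get(
    f"{BONSAI_URL}/{SOURCE}/_search",
    params={"search_type": "scan", "scroll": "1m", "size": 500},
).json()
scroll_id = resp["_scroll_id"]

while True:
    # 2. Pull the next batch of documents for this scroll ID.
    page = requests.get(
        f"{BONSAI_URL}/_search/scroll",
        params={"scroll": "1m", "scroll_id": scroll_id},
    ).json()
    hits = page["hits"]["hits"]
    if not hits:
        break
    scroll_id = page["_scroll_id"]

    # 3. Re-post the batch into the destination index via the Bulk API.
    bulk_lines = []
    for hit in hits:
        bulk_lines.append(json.dumps(
            {"index": {"_index": DEST, "_type": hit["_type"], "_id": hit["_id"]}}
        ))
        bulk_lines.append(json.dumps(hit["_source"]))
    requests.post(
        f"{BONSAI_URL}/_bulk",
        data="\n".join(bulk_lines) + "\n",
        headers={"Content-Type": "application/x-ndjson"},
    )
```

Note that newer Elasticsearch versions removed the `scan` search type in favor of a regular scroll sorted by `_doc`, so check which syntax your cluster's version supports.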
[block:code] { "codes": [ { "code": " PUT /_cluster/settings", "language": "text" } ] } [/block] [block:api-header] { "type": "basic", "title": "Index Optimize" } [/block] Lucene (the underlying search software that powers Elasticsearch) stores data across an array of files known as segment files. As new data is created, Lucene creates new files. The overhead of managing multiple files and performing binary search across the segments is trivial compared to the overhead of constantly updating and re-sorting a small number of files. Lucene will periodically merge the segment files when certain criteria are met; this speeds up search by reducing the amount of data to be parsed. Elasticsearch provides an API to force this process to happen on demand, which is useful in certain situations (like when data is filling up a node). We don't support this on our shared clusters for a couple of reasons. For one, it is extremely expensive in terms of system resources, and there would be nothing stopping a user from optimizing their indices on every update or every 60s, etc. A single user could adversely impact every user sharing resources. Optimizing an index is also a blocking operation, which means it could interfere with our internal cluster management tools. If you feel like your cluster needs this ability, [hit us up](mailto:support@bonsai.io) and let's chat about it. [block:code] { "codes": [ { "code": " GET /_optimize\n POST /_optimize\n GET /{index}/_optimize\n POST /{index}/_optimize", "language": "text" } ] } [/block] [block:api-header] { "type": "basic", "title": "Index Search Warmers" } [/block] Elasticsearch provides a mechanism to speed up searches prior to being run. It does this by basically pre-populating caches via automatically running search requests. This is called "warming" the data, and it's typically done against searches that require heavy system resources. We don't support this on shared clusters for stability reasons. Essentially there isn't a great way to throttle the impact of an arbitrary number of warmers. There is a possibility that a user could overwhelm the system resources by creating a large number of "heavy" warmers (aggregations, sorts on large lists, etc). It's also somewhat of an anti-pattern in a multitenant environment. If this is something critical to your app, you would need to be on a dedicated cluster. Please [reach out to us](mailto:support@bonsai.io) if you have any further questions on this. [block:code] { "codes": [ { "code": " POST /_warmer/{name}\n PUT /_warmer/{name}\n POST /_warmers/{name}\n PUT /_warmers/{name}\n POST /{index}/_warmer/{name}\n PUT /{index}/_warmer/{name}\n POST /{index}/_warmers/{name}\n PUT /{index}/_warmers/{name}\n POST /{index}/{type}/_warmer/{name}\n PUT /{index}/{type}/_warmer/{name}\n POST /{index}/{type}/_warmers/{name}\n PUT /{index}/{type}/_warmers/{name}", "language": "text" } ] } [/block] [block:api-header] { "type": "basic", "title": "Scripts" } [/block] Elasticsearch provides an API for adding and modifying static scripts to a cluster. We don't support this on shared clusters, both for security reasons as well as the fact that we're opinionated about scripting language. Due to some pretty serious vulnerabilities in the Groovy scripting language, we default to Lucene Expressions. If you need to configure static scripts or using a language other than Expressions, [let us know](mailto:support@bonsai.io) and we can get you set up with single tenancy. 
[block:code] { "codes": [ { "code": " DELETE /_scripts/{lang}/{id}\n GET /_scripts/{lang}/{id}\n POST /_scripts/{lang}/{id}\n PUT /_scripts/{lang}/{id}\n POST /_scripts/{lang}/{id}/_create\n PUT /_scripts/{lang}/{id}/_create", "language": "text" } ] } [/block]