{"_id":"568371947af9120d007ac34d","__v":20,"project":"5633ebff7e9e880d00af1a53","category":{"_id":"5633f072737ea01700ea329d","version":"5633ec007e9e880d00af1a56","__v":4,"pages":["5633fdb0fa71f30d00ba74e1","5637ce94aa96490d00a64f78","5637d7a34dbdd919001b27ab","56e8747747de1e170005945a"],"project":"5633ebff7e9e880d00af1a53","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2015-10-30T22:34:26.440Z","from_sync":false,"order":0,"slug":"early-project-setup","title":"Early Project Setup"},"parentDoc":null,"user":"5633ec9b35355017003ca3f2","version":{"_id":"5633ec007e9e880d00af1a56","project":"5633ebff7e9e880d00af1a53","__v":15,"createdAt":"2015-10-30T22:15:28.105Z","releaseDate":"2015-10-30T22:15:28.105Z","categories":["5633ec007e9e880d00af1a57","5633f072737ea01700ea329d","5637a37d0704070d00f06cf4","5637cf4e7ca5de0d00286aeb","564503082c74cf1900da48b4","564503cb7f1fff210078e70a","567af26cb56bac0d0019d87d","567afeb8802b2b17005ddea0","567aff47802b2b17005ddea1","567b0005802b2b17005ddea3","568adfffcbd4ca0d00aebf7e","56ba80078cf7c9210009673e","574d127f6f075519007da3d0","574fde60aef76a0e00840927","57a22ba6cd51b22d00f623a0"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2015-12-30T05:54:28.154Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"Many Elasticsearch clients will take care of creating an index for you. You should review your client’s documentation for more information on its index usage conventions. If you don’t know how many indexes your application needs, we recommend creating one index per application per environment, to correspond with your database.\n\nLet’s create an example index called `acme-production` from the command line with curl or httpie. Note that your application’s `BONSAI_URL` will contain a username and a password which is used to authenticate index creation, modification and deletion.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Important Note on Index Auto-Creation\",\n  \"body\": \"Many Elasticsearch installs default to allow indices to auto-create simply by indexing into a non-existing index. In accordance with Elasticsearch in production best practices, we've disabled this by default. However, for easier integration with kibana and tools such as logstash that may not support explicit index creation, we've white-listed popular time-series index names such as `logstash*`, `requests*`, `events*`, `.kibana*` and `kibana-int*` . 
So if you have indices that match these prefixes they will support auto-creation.\"\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"You may want to check out [httpie](https://github.com/jkbr/httpie) as a nice alternative to curl that provides syntax highlighting\",\n  \"title\": \"ProTip\"\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \" $ curl -X POST http://user:password:::at:::redwood-12345.us-east-1.bonsai.io/acme-production\\n-----> {\\\"ok\\\":true,\\\"acknowledged\\\":true}\",\n      \"language\": \"curl\",\n      \"name\": \"bash\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Using your new index\"\n}\n[/block]\nLet’s insert a “Hello, world” test document to verify that your new index is available, and to highlight some basic Elasticsearch concepts.\n\nFirst and foremost, Elasticsearch stores and renders documents using JSON. You can use REST methods to create, update, fetch and destroy documents in an index. Every document should specify a `type`, and preferably an `id`. You you may specify these values with the `_id` and the `_type` keys, or Elasticsearch will infer them from the URL of its API endpoints (if you don't explicitly provide an id, Elasticsearch will create a random one for you).\n\nIn the following example, we use POST to add a simple document to the index which specifies a `_type` of `test` and an `_id` of `hello`. You should replace the sample URL in this document with your own index URL to follow along.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \" $ curl -XPOST http://user:password@redwood-12345/acme-production/test/hello -d '{\\\"title\\\":\\\"Hello world\\\"}'\\n-----> {\\n  \\\"ok\\\" : true,\\n  \\\"_index\\\" : \\\"acme-production\\\",\\n  \\\"_type\\\" : \\\"test\\\",\\n  \\\"_id\\\" : \\\"hello\\\",\\n  \\\"_version\\\" : 1\\n}\",\n      \"language\": \"curl\",\n      \"name\": \"bash\"\n    }\n  ]\n}\n[/block]\nElasticsearch will index the document you provide based on sensible defaults for later searching. You can also see some of the values you provided echoed back in the output, along with a few other default fields, which will be explained elsewhere.\n\nNext, you may view this document by issuing a GET request to the `_search` endpoint.\n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"Note the `_source` key, which contains a copy of your original document. Elasticsearch makes an excellent general-purpose document store, although should never be used as a primary store.\"\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"$curl -XGET 'http://user:password@redwood-12345/acme-production/test/hello'\\n-----> {\\n  \\\"_index\\\" : \\\"acme-production\\\",\\n  \\\"_type\\\" : \\\"test\\\",\\n  \\\"_id\\\" : \\\"hello\\\",\\n  \\\"_version\\\" : 1,\\n  \\\"exists\\\" : true,\\n  \\\"_source\\\" : { \\\"title\\\":\\\"hello world\\\" }\\n}\",\n      \"language\": \"curl\",\n      \"name\": \"bash\"\n    }\n  ]\n}\n[/block]\nTo learn more about about the operations supported by your index, you should read the [Elasticsearch Index API documentation](http://www.elasticsearch.org/guide/reference/api/index_.html). 
Note that some operations mentioned in the documentation (such as “Automatic Index Creation”) are restricted on Bonsai for technical reasons.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Local workstation setup\"\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"Bonsai deploys upgrades to the public servers about once a quarter. Generally this means we’re a version or two behind the official releases. There are a few reasons for not operating as a bleeding edge service, namely that it is difficult to do safely with so many customers running stable production applications. However, we can offer whatever version a user requires on a single tenant plan. If you have questions about what version Bonsai is currently running, please feel free to [send us an email](mailto:support@bonsai.io).\"\n}\n[/block]\njohn@omc.io\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"$ brew install elasticsearch\",\n      \"language\": \"curl\",\n      \"name\": \"bash\"\n    }\n  ]\n}\n[/block]\n##Windows\n\nWindows users can [download Elasticsearch](https://www.elastic.co/downloads/elasticsearch) as a ZIP file. Simply extract the contents of the ZIP file, and run `bin/elasticsearch.bat` to start up an instance. Note that you'll need Java installed and configured on your system in order for Elasticsearch to run properly.\n\nElasticsearch can also be run [as a service](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-service-win.html) in Windows.\n\n##Linux\n\nThere are many Linux distributions out there, so the exact method of getting Elasticsearch installed will vary. Generally, you can [download](https://www.elastic.co/downloads/elasticsearch) a tarball of Elasticsearch and extract the compressed contents to a folder. It should have all of the proper executable permissions set, so you can just run `bin/elasticsearch` to spin up an instance. Note that if you're managing Elasticsearch in Linux without a package manager, you'll need to ensure all the dependencies are met. Java 7+ is a hard requirement, and there may be others. YMMV.\n\n### Arch Linux\nSome distributions have preconfigured Elasticsearch binaries available through repositories. Arch Linux, for example, offers it through the [community repo](https://www.archlinux.org/packages/community/any/elasticsearch/), and can be easily installed via pacman:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"$ sudo pacman -Syu elasticsearch\",\n      \"language\": \"shell\",\n      \"name\": null\n    }\n  ]\n}\n[/block]\nThis package also comes with a systemd service file for starting/stopping Elasticsearch with `sudo systemctl <enable | start | restart| stop> elasticsearch.service`. \n\nOne caveat with Arch: packages are bleeding edge, which means updates are pushed out as they become available. Bonsai is *not* a bleeding edge service, so you'll need to be careful to version lock the Elasticsearch package to whatever version you're running on Bonsai. You may also need to edit the PKGBUILD and elasticsearch.install files to ensure you're running the same version locally and on Bonsai.\n\n### Ubuntu and Debian-flavors\nOther distros can use the DEB and RPM files that Elasticsearch offers on the [download](https://www.elastic.co/downloads/elasticsearch) page. 
Debian-based Linux distributions can use `dpkg` to install Elasticsearch (note that this doesn't handle configuring dependencies like Java):\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"# Update the package lists\\n$ sudo apt-get update\\n\\n# Make sure Java is installed and working:\\n$ java -version\\n\\n# If the version of Java shown is not 7+ (1.7+ if using OpenJDK),\\n# or it doesn't recognize java at all, you need to install it:\\n$ sudo apt-get install openjdk-7-jre\\n\\n# Download the DEB from Elasticsearch:\\n$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-X.Y.Z.deb\\n\\n# Install the DEB:\\n$ sudo dpkg -i elasticsearch-1.7.2.deb\",\n      \"language\": \"shell\"\n    }\n  ]\n}\n[/block]\nThis approach will install the configuration files to `/etc/elasticsearch/` and will add init scripts to `/etc/init.d/elasticsearch`.\n\n### Red Hat / Suse / Fedora / RPM\n\nElasticsearch does provide an RPM file for installing Elasticsearch on distros using rpm:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"# Download the package\\n$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-X.Y.Z.rpm\\n\\n# Install it\\n$ rpm -Uvh elasticsearch-X.Y.Z.rpm\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\n`rpm` should handle all of the dependency checks as well, so it will tell you if there is something missing.","excerpt":"","slug":"creating-your-first-index","type":"basic","title":"Creating Your First Index"}

Creating Your First Index


Many Elasticsearch clients will take care of creating an index for you. You should review your client’s documentation for more information on its index usage conventions. If you don’t know how many indexes your application needs, we recommend creating one index per application per environment, to correspond with your database.

Let’s create an example index called `acme-production` from the command line with curl or httpie. Note that your application’s `BONSAI_URL` contains a username and a password, which are used to authenticate index creation, modification and deletion.

[block:callout]
{
  "type": "warning",
  "title": "Important Note on Index Auto-Creation",
  "body": "Many Elasticsearch installs allow indices to be auto-created simply by indexing into a non-existent index. In accordance with Elasticsearch production best practices, we've disabled this by default. However, for easier integration with Kibana and tools such as Logstash that may not support explicit index creation, we've white-listed popular time-series index names such as `logstash*`, `requests*`, `events*`, `.kibana*` and `kibana-int*`. Indices whose names match these prefixes will support auto-creation."
}
[/block]

[block:callout]
{
  "type": "info",
  "title": "ProTip",
  "body": "You may want to check out [httpie](https://github.com/jkbr/httpie) as a nice alternative to curl that provides syntax highlighting."
}
[/block]

[block:code]
{
  "codes": [
    {
      "code": "$ curl -X POST http://user:password@redwood-12345.us-east-1.bonsai.io/acme-production\n-----> {\"ok\":true,\"acknowledged\":true}",
      "language": "curl",
      "name": "bash"
    }
  ]
}
[/block]

[block:api-header]
{
  "type": "basic",
  "title": "Using your new index"
}
[/block]

Let’s insert a “Hello, world” test document to verify that your new index is available, and to highlight some basic Elasticsearch concepts.

First and foremost, Elasticsearch stores and renders documents as JSON. You can use REST methods to create, update, fetch and destroy documents in an index. Every document should specify a `type`, and preferably an `id`. You may specify these values with the `_id` and `_type` keys, or Elasticsearch will infer them from the URL of its API endpoints (if you don't explicitly provide an id, Elasticsearch will generate a random one for you).

In the following example, we use POST to add a simple document to the index, specifying a `_type` of `test` and an `_id` of `hello`. Replace the sample URL with your own index URL to follow along.

[block:code]
{
  "codes": [
    {
      "code": "$ curl -XPOST http://user:password@redwood-12345/acme-production/test/hello -d '{\"title\":\"Hello world\"}'\n-----> {\n  \"ok\" : true,\n  \"_index\" : \"acme-production\",\n  \"_type\" : \"test\",\n  \"_id\" : \"hello\",\n  \"_version\" : 1\n}",
      "language": "curl",
      "name": "bash"
    }
  ]
}
[/block]

Elasticsearch will index the document you provide, using sensible defaults, for later searching. You can also see some of the values you provided echoed back in the output, along with a few other default fields, which are explained elsewhere.
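To confirm the document really is searchable, you can query the index’s `_search` endpoint. Here’s a minimal sketch using the query-string syntax, with the same placeholder credentials and hostname as above and an abridged response; note that newly indexed documents can take a second or so to become visible to search:

[block:code]
{
  "codes": [
    {
      "code": "$ curl -XGET 'http://user:password@redwood-12345/acme-production/_search?q=title:hello'\n-----> {\n  \"took\" : 2,\n  \"hits\" : {\n    \"total\" : 1,\n    \"hits\" : [ {\n      \"_index\" : \"acme-production\",\n      \"_type\" : \"test\",\n      \"_id\" : \"hello\",\n      \"_source\" : { \"title\" : \"Hello world\" }\n    } ]\n  }\n}",
      "language": "curl",
      "name": "bash"
    }
  ]
}
[/block]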
You can also retrieve the document directly by issuing a GET request to its URL.

[block:callout]
{
  "type": "info",
  "body": "Note the `_source` key, which contains a copy of your original document. Elasticsearch makes an excellent general-purpose document store, although it should never be used as a primary data store."
}
[/block]

[block:code]
{
  "codes": [
    {
      "code": "$ curl -XGET 'http://user:password@redwood-12345/acme-production/test/hello'\n-----> {\n  \"_index\" : \"acme-production\",\n  \"_type\" : \"test\",\n  \"_id\" : \"hello\",\n  \"_version\" : 1,\n  \"exists\" : true,\n  \"_source\" : { \"title\":\"hello world\" }\n}",
      "language": "curl",
      "name": "bash"
    }
  ]
}
[/block]

To learn more about the operations supported by your index, read the [Elasticsearch Index API documentation](http://www.elasticsearch.org/guide/reference/api/index_.html). Note that some operations mentioned in the documentation (such as “Automatic Index Creation”) are restricted on Bonsai for technical reasons.

[block:api-header]
{
  "type": "basic",
  "title": "Local workstation setup"
}
[/block]

[block:callout]
{
  "type": "info",
  "body": "Bonsai deploys upgrades to the public servers about once a quarter. Generally this means we’re a version or two behind the official releases. There are a few reasons for not operating as a bleeding-edge service, chiefly that it is difficult to do safely with so many customers running stable production applications. However, we can offer whatever version a user requires on a single-tenant plan. If you have questions about what version Bonsai is currently running, please feel free to [send us an email](mailto:support@bonsai.io)."
}
[/block]

## OS X

OS X users can install Elasticsearch with [Homebrew](https://brew.sh):

[block:code]
{
  "codes": [
    {
      "code": "$ brew install elasticsearch",
      "language": "shell",
      "name": "bash"
    }
  ]
}
[/block]

## Windows

Windows users can [download Elasticsearch](https://www.elastic.co/downloads/elasticsearch) as a ZIP file. Simply extract the contents of the ZIP file, and run `bin/elasticsearch.bat` to start up an instance. Note that you'll need Java installed and configured on your system in order for Elasticsearch to run properly.

Elasticsearch can also be run [as a service](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-service-win.html) in Windows.

## Linux

There are many Linux distributions out there, so the exact method of installing Elasticsearch will vary. Generally, you can [download](https://www.elastic.co/downloads/elasticsearch) a tarball of Elasticsearch and extract the compressed contents to a folder. It should have all of the proper executable permissions set, so you can just run `bin/elasticsearch` to spin up an instance. Note that if you're managing Elasticsearch in Linux without a package manager, you'll need to ensure all the dependencies are met. Java 7+ is a hard requirement, and there may be others. YMMV.

### Arch Linux

Some distributions have preconfigured Elasticsearch packages available through their repositories. Arch Linux, for example, offers it through the [community repo](https://www.archlinux.org/packages/community/any/elasticsearch/), and it can be easily installed via pacman:

[block:code]
{
  "codes": [
    {
      "code": "$ sudo pacman -Syu elasticsearch",
      "language": "shell",
      "name": null
    }
  ]
}
[/block]

This package also comes with a systemd service file for starting and stopping Elasticsearch with `sudo systemctl <enable | start | restart | stop> elasticsearch.service`.

One caveat with Arch: packages are bleeding edge, which means updates are pushed out as they become available. Bonsai is *not* a bleeding-edge service, so you'll need to be careful to version-lock the Elasticsearch package to whatever version you're running on Bonsai. You may also need to edit the PKGBUILD and elasticsearch.install files to ensure you're running the same version locally and on Bonsai.
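One way to hold the package at a known version is to add it to pacman's `IgnorePkg` list so routine system upgrades skip it. This is only a sketch: it assumes the community package above, and you'll still need to install the matching version yourself.

[block:code]
{
  "codes": [
    {
      "code": "# /etc/pacman.conf: tell pacman -Syu to skip Elasticsearch upgrades\n[options]\nIgnorePkg = elasticsearch",
      "language": "shell",
      "name": null
    }
  ]
}
[/block]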
### Ubuntu and Debian flavors

Other distros can use the DEB and RPM files that Elasticsearch offers on the [download](https://www.elastic.co/downloads/elasticsearch) page. Debian-based Linux distributions can use `dpkg` to install Elasticsearch (note that this doesn't handle configuring dependencies like Java):

[block:code]
{
  "codes": [
    {
      "code": "# Update the package lists\n$ sudo apt-get update\n\n# Make sure Java is installed and working:\n$ java -version\n\n# If the version of Java shown is not 7+ (1.7+ if using OpenJDK),\n# or java isn't recognized at all, you need to install it:\n$ sudo apt-get install openjdk-7-jre\n\n# Download the DEB from Elasticsearch:\n$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-X.Y.Z.deb\n\n# Install the DEB:\n$ sudo dpkg -i elasticsearch-X.Y.Z.deb",
      "language": "shell"
    }
  ]
}
[/block]

This approach installs the configuration files to `/etc/elasticsearch/` and adds an init script at `/etc/init.d/elasticsearch`.

### Red Hat / SUSE / Fedora / RPM

Elasticsearch also provides an RPM file for installing on distros that use `rpm`:

[block:code]
{
  "codes": [
    {
      "code": "# Download the package\n$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-X.Y.Z.rpm\n\n# Install it\n$ sudo rpm -Uvh elasticsearch-X.Y.Z.rpm",
      "language": "shell"
    }
  ]
}
[/block]

`rpm` should handle all of the dependency checks as well, so it will tell you if something is missing.
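However you install it locally, you can confirm Elasticsearch is up by hitting the default HTTP port, 9200. The exact fields vary by version, but you should get back a small JSON status document (abridged here) including the version number and tagline:

[block:code]
{
  "codes": [
    {
      "code": "$ curl http://localhost:9200\n-----> {\n  \"status\" : 200,\n  \"version\" : {\n    \"number\" : \"1.7.2\"\n  },\n  \"tagline\" : \"You Know, for Search\"\n}",
      "language": "curl",
      "name": "bash"
    }
  ]
}
[/block]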