Repository: incubator-zeppelin Updated Branches: refs/heads/master 4c269e6d8 -> 7e9028329
Add an Elasticsearch interpreter ### Elasticsearch Interpreter Interpreter for querying ElasticSearch . Supported requests are "get document by id" , "search documents" , "delete by id" , "count documents" and "index / update a document". Supported versions of Elasticsearch : >= 2.1 Author: Bruno Bonnin <bbon...@gmail.com> Author: Bruno Bonnin <bruno.bon...@myscript.com> Closes #520 from bbonnin/master and squashes the following commits: e40c06d [Bruno Bonnin] Remove duplicate dependency license (same groupid/artifactid) 98822df [Bruno Bonnin] Merge branch 'master' of https://github.com/apache/incubator-zeppelin 32ea103 [Bruno Bonnin] Update elasticsearch.md e8a9ff2 [Bruno Bonnin] Merge branch 'master' of https://github.com/bbonnin/incubator-zeppelin 34a39a9 [Bruno Bonnin] Update tests with new search format a3cf78c [Bruno Bonnin] Update elasticsearch.md af319e6 [Bruno Bonnin] Update snapshots ce7c15f [Bruno Bonnin] Update search command (use of query_string) + size config 6b6886b [Bruno Bonnin] Update doc with query DSL 2dfd129 [Bruno Bonnin] Check client before starting process (to get a better error msg) 7bf4232 [Bruno Bonnin] Doc : count command with a query 93253df [Bruno Bonnin] Count command with a query d3b599c [Bruno Bonnin] Fix typo 25383db [Bruno Bonnin] Update doc (config, shield, completion) 3e1655d [Bruno Bonnin] Update elasticsearch.md ee1547a [Bruno Bonnin] Ugly table for flattened data 46899d6 [Bruno Bonnin] Doc: flattened json and security 4044169 [Bruno Bonnin] Add completion 31c73b5 [Bruno Bonnin] Update tests c68b3df [Bruno Bonnin] Update of LICENSE 455a072 [Bruno Bonnin] Fix pb from remarks of the PR a10a5ec [Bruno Bonnin] Elasticsearch Interpreter fdc413f [Bruno Bonnin] Elasticsearch Interpreter Project: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/repo Commit: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/commit/7e902832 Tree: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/tree/7e902832 Diff: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/diff/7e902832 Branch: refs/heads/master Commit: 7e9028329e61ce3401ba39643bd1ca7ac3021c89 Parents: 4c269e6 Author: Bruno Bonnin <bbon...@gmail.com> Authored: Mon Dec 21 10:19:01 2015 +0100 Committer: Alexander Bezzubov <b...@apache.org> Committed: Tue Dec 22 19:36:55 2015 +0900 ---------------------------------------------------------------------- conf/zeppelin-site.xml.template | 2 +- .../img/docs-img/elasticsearch-config.png | Bin 0 -> 68876 bytes .../docs-img/elasticsearch-count-with-query.png | Bin 0 -> 55824 bytes .../img/docs-img/elasticsearch-count.png | Bin 0 -> 59770 bytes .../zeppelin/img/docs-img/elasticsearch-get.png | Bin 0 -> 76122 bytes .../img/docs-img/elasticsearch-query-string.png | Bin 0 -> 116111 bytes .../elasticsearch-search-json-query-table.png | Bin 0 -> 57479 bytes .../img/docs-img/elasticsearch-search-pie.png | Bin 0 -> 48215 bytes .../img/docs-img/elasticsearch-search-table.png | Bin 0 -> 263452 bytes docs/interpreter/elasticsearch.md | 228 +++++++++ elasticsearch/pom.xml | 147 ++++++ .../elasticsearch/ElasticsearchInterpreter.java | 465 +++++++++++++++++++ .../ElasticsearchInterpreterTest.java | 171 +++++++ pom.xml | 1 + zeppelin-distribution/src/bin_license/LICENSE | 33 ++ .../zeppelin/conf/ZeppelinConfiguration.java | 3 +- 16 files changed, 1048 insertions(+), 2 deletions(-) ---------------------------------------------------------------------- http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/conf/zeppelin-site.xml.template 
---------------------------------------------------------------------- diff --git a/conf/zeppelin-site.xml.template b/conf/zeppelin-site.xml.template index 78d7f1e..b6aca75 100755 --- a/conf/zeppelin-site.xml.template +++ b/conf/zeppelin-site.xml.template @@ -105,7 +105,7 @@ <property> <name>zeppelin.interpreters</name> - <value>org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.angular.AngularInterpreter,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.hive.HiveInterpreter,org.apache.zeppelin.tajo.TajoInterpreter,org.apache.zeppelin.flink.FlinkInterpreter,org.apache.zeppelin.lens.LensInterpreter,org.apache.zeppelin.ignite.IgniteInterpreter,org.apache.zeppelin.ignite.IgniteSqlInterpreter,org.apache.zeppelin.cassandra.CassandraInterpreter,org.apache.zeppelin.geode.GeodeOqlInterpreter,org.apache.zeppelin.postgresql.PostgreSqlInterpreter,org.apache.zeppelin.phoenix.PhoenixInterpreter,org.apache.zeppelin.kylin.KylinInterpreter</value> + <value>org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.angular.AngularInterpreter,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.hive.HiveInterpreter,org.apache.zeppelin.tajo.TajoInterpreter,org.apache.zeppelin.flink.FlinkInterpreter,org.apache.zeppelin.lens.LensInterpreter,org.apache.zeppelin.ignite.IgniteInterpreter,org.apache.zeppelin.ignite.IgniteSqlInterpreter,org.apache.zeppelin.cassandra.CassandraInterpreter,org.apache.zeppelin.geode.GeodeOqlInterpreter,org.apache.zeppelin.postgresql.PostgreSqlInterpreter,org.apache.zeppelin.phoenix.PhoenixInterpreter,org.apache.zeppelin.kylin.KylinInterpreter,org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter</value> <description>Comma separated interpreter configurations. 
First interpreter become a default</description> </property> http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-config.png ---------------------------------------------------------------------- diff --git a/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-config.png b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-config.png new file mode 100644 index 0000000..ce68836 Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-config.png differ http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-count-with-query.png ---------------------------------------------------------------------- diff --git a/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-count-with-query.png b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-count-with-query.png new file mode 100755 index 0000000..ca2f940 Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-count-with-query.png differ http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-count.png ---------------------------------------------------------------------- diff --git a/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-count.png b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-count.png new file mode 100644 index 0000000..7514e63 Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-count.png differ http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-get.png ---------------------------------------------------------------------- diff --git a/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-get.png b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-get.png new file mode 100644 index 0000000..a3b4107 Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-get.png differ http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-query-string.png ---------------------------------------------------------------------- diff --git a/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-query-string.png b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-query-string.png new file mode 100755 index 0000000..c351e8e Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-query-string.png differ http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-json-query-table.png ---------------------------------------------------------------------- diff --git a/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-json-query-table.png b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-json-query-table.png new file mode 100755 index 0000000..bcec2e5 Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-json-query-table.png differ http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-pie.png ---------------------------------------------------------------------- diff --git a/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-pie.png 
b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-pie.png new file mode 100644 index 0000000..81b711b Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-pie.png differ http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-table.png ---------------------------------------------------------------------- diff --git a/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-table.png b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-table.png new file mode 100644 index 0000000..fba9966 Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/elasticsearch-search-table.png differ http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/docs/interpreter/elasticsearch.md ---------------------------------------------------------------------- diff --git a/docs/interpreter/elasticsearch.md b/docs/interpreter/elasticsearch.md new file mode 100644 index 0000000..fb83799 --- /dev/null +++ b/docs/interpreter/elasticsearch.md @@ -0,0 +1,228 @@ +--- +layout: page +title: "Elasticsearch Interpreter" +description: "" +group: manual +--- +{% include JB/setup %} + + +## Elasticsearch Interpreter for Apache Zeppelin + +### 1. Configuration + +<br/> +<table class="table-configuration"> + <tr> + <th>Property</th> + <th>Default</th> + <th>Description</th> + </tr> + <tr> + <td>elasticsearch.cluster.name</td> + <td>elasticsearch</td> + <td>Cluster name</td> + </tr> + <tr> + <td>elasticsearch.host</td> + <td>localhost</td> + <td>Host of a node in the cluster</td> + </tr> + <tr> + <td>elasticsearch.port</td> + <td>9300</td> + <td>Connection port <b>(important: this is not the HTTP port, but the transport port)</b></td> + </tr> + <tr> + <td>elasticsearch.result.size</td> + <td>10</td> + <td>The size of the result set of a search query</td> + </tr> +</table> + +<center> + +</center> + + +> Note #1: you can add more properties to configure the Elasticsearch client. + +> Note #2: if you use Shield, you can add a property named `shield.user` with a value containing the name and the password (format: `username:password`). For more details about Shield configuration, consult the [Shield reference guide](https://www.elastic.co/guide/en/shield/current/_using_elasticsearch_java_clients_with_shield.html). Do not forget to copy the Shield client jar into the interpreter directory (`ZEPPELIN_HOME/interpreter/elasticsearch`). + + +<hr/> + +### 2. Enabling the Elasticsearch Interpreter + +In a notebook, to enable the **Elasticsearch** interpreter, click the **Gear** icon and select **Elasticsearch**. + + +<hr/> + + +### 3. Using the Elasticsearch Interpreter + +In a paragraph, use `%elasticsearch` to select the Elasticsearch interpreter and then enter your commands. To get the list of available commands, use `help`. + +```bash +| %elasticsearch +| help +Elasticsearch interpreter: +General format: <command> /<indices>/<types>/<id> <option> <JSON> + - indices: list of indices separated by commas (depends on the command) + - types: list of document types separated by commas (depends on the command) +Commands: + - search /indices/types <query> + . indices and types can be omitted (at least, you have to provide '/') + . a query is either a JSON-formatted query or a Lucene query + - size <value> + . defines the size of the result set (default value is in the config) + . 
if used, this command must be declared before a search command + - count /indices/types <query> + . same comments as for the search + - get /index/type/id + - delete /index/type/id + - index /index/type/id <json-formatted document> + . the id can be omitted, elasticsearch will generate one +``` + +> Tip: use (CTRL + .) for completion + + +#### get +With the `get` command, you can find a document by id. The result is a JSON document. + +```bash +| %elasticsearch +| get /index/type/id +``` + +Example: + + + +#### search +With the `search` command, you can send a search query to Elasticsearch. There are two query formats: +* You can provide a JSON-formatted query, which is exactly what you provide when you use the REST API of Elasticsearch. + * See [Elasticsearch search API reference document](https://www.elastic.co/guide/en/elasticsearch/reference/current/search.html) for more details about the content of the search queries. +* You can also provide the content of a `query_string`. + * This is a shortcut for a query like this: `{ "query": { "query_string": { "query": "__HERE YOUR QUERY__", "analyze_wildcard": true } } }` + * See [Elasticsearch query string syntax](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax) for more details about the content of such a query. + +```bash +| %elasticsearch +| search /index1,index2,.../type1,type2,... <JSON document containing the query or query_string elements> +``` + +If you want to change the size of the result set, add a line that sets the size before your search command. + +```bash +| %elasticsearch +| size 50 +| search /index1,index2,.../type1,type2,... <JSON document containing the query or query_string elements> +``` + + +Examples: +* With a JSON query: +```bash +| %elasticsearch +| search / { "query": { "match_all": {} } } + +| %elasticsearch +| search /logs { "query": { "query_string": { "query": "request.method:GET AND status:200" } } } +``` + +* With query_string elements: +```bash +| %elasticsearch +| search /logs request.method:GET AND status:200 + +| %elasticsearch +| search /logs (404 AND (POST OR DELETE)) +``` + +> **Important**: a document in Elasticsearch is a JSON document, so it is hierarchical, not flat like a row in a SQL table. +For the Elasticsearch interpreter, the result of a search query is flattened. + +Suppose we have a JSON document: +```json +{ + "date": "2015-12-08T21:03:13.588Z", + "request": { + "method": "GET", + "url": "/zeppelin/4cd001cd-c517-4fa9-b8e5-a06b8f4056c4", + "headers": [ "Accept: *.*", "Host: apache.org"] + }, + "status": "403" +} +``` + +The data will be flattened like this: + +date | request.headers[0] | request.headers[1] | request.method | request.url | status +-----|--------------------|--------------------|----------------|-------------|------- +2015-12-08T21:03:13.588Z | Accept: \*.\* | Host: apache.org | GET | /zeppelin/4cd001cd-c517-4fa9-b8e5-a06b8f4056c4 | 403 + + +Examples: +* With a table containing the results: + + + +* You can also use a predefined diagram: + + +* With a JSON query: + + +* With a query string: + + + +#### count +With the `count` command, you can count the documents available in some indices and types. You can also provide a query. + +```bash +| %elasticsearch +| count /index1,index2,.../type1,type2,... <JSON document containing the query OR a query string> +``` + +Examples: +* Without a query: + + +* With a query: + + + +#### index +With the `index` command, you can insert or update a document in Elasticsearch.
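For example, assuming a hypothetical `logs` index with an `http` type (the document fields below are only illustrative), a document could be indexed with an explicit id like this:

```bash
| %elasticsearch
| index /logs/http/1 { "status": "200", "request": { "method": "GET" } }
```

The general forms, with and without an explicit id, are: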
+```bash +| %elasticsearch +| index /index/type/id <JSON document> + +| %elasticsearch +| index /index/type <JSON document> +``` + +#### delete +With the `delete` command, you can delete a document. + +```bash +| %elasticsearch +| delete /index/type/id +``` + + + +#### Apply Zeppelin Dynamic Forms + +You can leverage [Zeppelin Dynamic Form]({{BASE_PATH}}/manual/dynamicform.html) inside your queries. You can use both the `text input` and `select form` parameterization features + +```bash +%elasticsearch +size ${limit=10} +search /index/type { "query": { "match_all": {} } } +``` + http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/elasticsearch/pom.xml ---------------------------------------------------------------------- diff --git a/elasticsearch/pom.xml b/elasticsearch/pom.xml new file mode 100644 index 0000000..3da4441 --- /dev/null +++ b/elasticsearch/pom.xml @@ -0,0 +1,147 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- + ~ Licensed to the Apache Software Foundation (ASF) under one or more + ~ contributor license agreements. See the NOTICE file distributed with + ~ this work for additional information regarding copyright ownership. + ~ The ASF licenses this file to You under the Apache License, Version 2.0 + ~ (the "License"); you may not use this file except in compliance with + ~ the License. You may obtain a copy of the License at + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, + ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + ~ See the License for the specific language governing permissions and + ~ limitations under the License. + --> + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> + <modelVersion>4.0.0</modelVersion> + + <parent> + <artifactId>zeppelin</artifactId> + <groupId>org.apache.zeppelin</groupId> + <version>0.6.0-incubating-SNAPSHOT</version> + <relativePath>..</relativePath> + </parent> + + <groupId>org.apache.zeppelin</groupId> + <artifactId>zeppelin-elasticsearch</artifactId> + <packaging>jar</packaging> + <version>0.6.0-incubating-SNAPSHOT</version> + <name>Zeppelin: Elasticsearch interpreter</name> + <url>http://www.apache.org</url> + + <properties> + <elasticsearch.version>2.1.0</elasticsearch.version> + <guava.version>18.0</guava.version> + <json-flattener.version>0.1.1</json-flattener.version> + </properties> + + <dependencies> + <dependency> + <groupId>org.apache.zeppelin</groupId> + <artifactId>zeppelin-interpreter</artifactId> + <version>${project.version}</version> + <scope>provided</scope> + </dependency> + + <dependency> + <groupId>org.elasticsearch</groupId> + <artifactId>elasticsearch</artifactId> + <version>${elasticsearch.version}</version> + </dependency> + + <dependency> + <groupId>com.google.guava</groupId> + <artifactId>guava</artifactId> + <version>${guava.version}</version> + </dependency> + + <dependency> + <groupId>com.github.wnameless</groupId> + <artifactId>json-flattener</artifactId> + <version>${json-flattener.version}</version> + </dependency> + + <dependency> + <groupId>org.slf4j</groupId> + <artifactId>slf4j-api</artifactId> + </dependency> + + <dependency> + <groupId>junit</groupId> + <artifactId>junit</artifactId> + <scope>test</scope> + </dependency> + </dependencies> + + <build> + 
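+    <!-- The plugins below skip deployment and enforcement for this module, and copy the interpreter jar plus its runtime dependencies into interpreter/elasticsearch of the Zeppelin build tree. -->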
<plugins> + <plugin> + <groupId>org.apache.maven.plugins</groupId> + <artifactId>maven-deploy-plugin</artifactId> + <version>2.7</version> + <configuration> + <skip>true</skip> + </configuration> + </plugin> + + <plugin> + <artifactId>maven-enforcer-plugin</artifactId> + <version>1.3.1</version> + <executions> + <execution> + <id>enforce</id> + <phase>none</phase> + </execution> + </executions> + </plugin> + + <plugin> + <artifactId>maven-dependency-plugin</artifactId> + <version>2.8</version> + <executions> + <execution> + <id>copy-dependencies</id> + <phase>package</phase> + <goals> + <goal>copy-dependencies</goal> + </goals> + <configuration> + <outputDirectory>${project.build.directory}/../../interpreter/elasticsearch</outputDirectory> + <overWriteReleases>false</overWriteReleases> + <overWriteSnapshots>false</overWriteSnapshots> + <overWriteIfNewer>true</overWriteIfNewer> + <includeScope>runtime</includeScope> + </configuration> + </execution> + <execution> + <id>copy-artifact</id> + <phase>package</phase> + <goals> + <goal>copy</goal> + </goals> + <configuration> + <outputDirectory>${project.build.directory}/../../interpreter/elasticsearch</outputDirectory> + <overWriteReleases>false</overWriteReleases> + <overWriteSnapshots>false</overWriteSnapshots> + <overWriteIfNewer>true</overWriteIfNewer> + <includeScope>runtime</includeScope> + <artifactItems> + <artifactItem> + <groupId>${project.groupId}</groupId> + <artifactId>${project.artifactId}</artifactId> + <version>${project.version}</version> + <type>${project.packaging}</type> + </artifactItem> + </artifactItems> + </configuration> + </execution> + </executions> + </plugin> + </plugins> + </build> + +</project> http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/elasticsearch/src/main/java/org/apache/zeppelin/elasticsearch/ElasticsearchInterpreter.java ---------------------------------------------------------------------- diff --git a/elasticsearch/src/main/java/org/apache/zeppelin/elasticsearch/ElasticsearchInterpreter.java b/elasticsearch/src/main/java/org/apache/zeppelin/elasticsearch/ElasticsearchInterpreter.java new file mode 100644 index 0000000..dba5b73 --- /dev/null +++ b/elasticsearch/src/main/java/org/apache/zeppelin/elasticsearch/ElasticsearchInterpreter.java @@ -0,0 +1,465 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.zeppelin.elasticsearch; + +import java.io.IOException; +import java.net.InetAddress; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.LinkedList; +import java.util.List; +import java.util.Map; +import java.util.Properties; +import java.util.Set; +import java.util.TreeSet; + +import org.apache.commons.lang.StringUtils; +import org.apache.zeppelin.interpreter.Interpreter; +import org.apache.zeppelin.interpreter.InterpreterContext; +import org.apache.zeppelin.interpreter.InterpreterPropertyBuilder; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.elasticsearch.action.delete.DeleteResponse; +import org.elasticsearch.action.get.GetResponse; +import org.elasticsearch.action.index.IndexResponse; +import org.elasticsearch.action.search.SearchAction; +import org.elasticsearch.action.search.SearchRequestBuilder; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.client.Client; +import org.elasticsearch.client.transport.TransportClient; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.transport.InetSocketTransportAddress; +import org.elasticsearch.index.query.QueryBuilders; +import org.elasticsearch.search.SearchHit; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import com.github.wnameless.json.flattener.JsonFlattener; + +import com.google.gson.Gson; +import com.google.gson.GsonBuilder; +import com.google.gson.JsonParseException; + + +/** + * Elasticsearch Interpreter for Zeppelin. + */ +public class ElasticsearchInterpreter extends Interpreter { + + private static Logger logger = LoggerFactory.getLogger(ElasticsearchInterpreter.class); + + private static final String HELP = "Elasticsearch interpreter:\n" + + "General format: <command> /<indices>/<types>/<id> <option> <JSON>\n" + + " - indices: list of indices separated by commas (depends on the command)\n" + + " - types: list of document types separated by commas (depends on the command)\n" + + "Commands:\n" + + " - search /indices/types <query>\n" + + " . indices and types can be omitted (at least, you have to provide '/')\n" + + " . a query is either a JSON-formatted query or a Lucene query\n" + + " - size <value>\n" + + " . defines the size of the result set (default value is in the config)\n" + + " . if used, this command must be declared before a search command\n" + + " - count /indices/types <query>\n" + + " . same comments as for the search\n" + + " - get /index/type/id\n" + + " - delete /index/type/id\n" + + " - index /index/type/id <json-formatted document>\n" + + " . 
the id can be omitted, elasticsearch will generate one"; + + private static final List<String> COMMANDS = Arrays.asList( + "count", "delete", "get", "help", "index", "search"); + + + public static final String ELASTICSEARCH_HOST = "elasticsearch.host"; + public static final String ELASTICSEARCH_PORT = "elasticsearch.port"; + public static final String ELASTICSEARCH_CLUSTER_NAME = "elasticsearch.cluster.name"; + public static final String ELASTICSEARCH_RESULT_SIZE = "elasticsearch.result.size"; + + static { + Interpreter.register( + "elasticsearch", + "elasticsearch", + ElasticsearchInterpreter.class.getName(), + new InterpreterPropertyBuilder() + .add(ELASTICSEARCH_HOST, "localhost", "The host for Elasticsearch") + .add(ELASTICSEARCH_PORT, "9300", "The port for Elasticsearch") + .add(ELASTICSEARCH_CLUSTER_NAME, "elasticsearch", "The cluster name for Elasticsearch") + .add(ELASTICSEARCH_RESULT_SIZE, "10", "The size of the result set of a search query") + .build()); + } + + private final Gson gson = new GsonBuilder().setPrettyPrinting().create(); + private Client client; + private String host = "localhost"; + private int port = 9300; + private String clusterName = "elasticsearch"; + private int resultSize = 10; + + public ElasticsearchInterpreter(Properties property) { + super(property); + this.host = getProperty(ELASTICSEARCH_HOST); + this.port = Integer.parseInt(getProperty(ELASTICSEARCH_PORT)); + this.clusterName = getProperty(ELASTICSEARCH_CLUSTER_NAME); + this.resultSize = Integer.parseInt(getProperty(ELASTICSEARCH_RESULT_SIZE)); + } + + @Override + public void open() { + try { + logger.info("prop={}", getProperty()); + final Settings settings = Settings.settingsBuilder() + .put("cluster.name", clusterName) + .put(getProperty()) + .build(); + client = TransportClient.builder().settings(settings).build() + .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port)); + } + catch (IOException e) { + logger.error("Open connection with Elasticsearch", e); + } + } + + @Override + public void close() { + if (client != null) { + client.close(); + } + } + + @Override + public InterpreterResult interpret(String cmd, InterpreterContext interpreterContext) { + logger.info("Run Elasticsearch command '" + cmd + "'"); + + int currentResultSize = resultSize; + + if (client == null) { + return new InterpreterResult(InterpreterResult.Code.ERROR, + "Problem with the Elasticsearch client, please check your configuration (host, port,...)"); + } + + String[] items = StringUtils.split(cmd.trim(), " ", 3); + + // Process some specific commands (help, size, ...) 
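+    //   e.g. "help", or a multi-line paragraph such as (illustrative values):
+    //     size 20
+    //     search /logs status:404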
+ if ("help".equalsIgnoreCase(items[0])) { + return processHelp(InterpreterResult.Code.SUCCESS, null); + } + + if ("size".equalsIgnoreCase(items[0])) { + // In this case, the line with size must be followed by a search, + // so we will continue with the next lines + final String[] lines = StringUtils.split(cmd.trim(), "\n", 2); + + if (lines.length < 2) { + return processHelp(InterpreterResult.Code.ERROR, + "Size cmd must be followed by a search"); + } + + final String[] sizeLine = StringUtils.split(lines[0], " ", 2); + if (sizeLine.length != 2) { + return processHelp(InterpreterResult.Code.ERROR, "Right format is : size <value>"); + } + currentResultSize = Integer.parseInt(sizeLine[1]); + + items = StringUtils.split(lines[1].trim(), " ", 3); + } + + if (items.length < 2) { + return processHelp(InterpreterResult.Code.ERROR, "Arguments missing"); + } + + final String method = items[0]; + final String url = items[1]; + final String data = items.length > 2 ? items[2].trim() : null; + + final String[] urlItems = StringUtils.split(url.trim(), "/"); + + try { + if ("get".equalsIgnoreCase(method)) { + return processGet(urlItems); + } + else if ("count".equalsIgnoreCase(method)) { + return processCount(urlItems, data); + } + else if ("search".equalsIgnoreCase(method)) { + return processSearch(urlItems, data, currentResultSize); + } + else if ("index".equalsIgnoreCase(method)) { + return processIndex(urlItems, data); + } + else if ("delete".equalsIgnoreCase(method)) { + return processDelete(urlItems); + } + + return processHelp(InterpreterResult.Code.ERROR, "Unknown command"); + } + catch (Exception e) { + return new InterpreterResult(InterpreterResult.Code.ERROR, "Error : " + e.getMessage()); + } + } + + @Override + public void cancel(InterpreterContext interpreterContext) { + // Nothing to do + } + + @Override + public FormType getFormType() { + return FormType.SIMPLE; + } + + @Override + public int getProgress(InterpreterContext interpreterContext) { + return 0; + } + + @Override + public List<String> completion(String s, int i) { + final List<String> suggestions = new ArrayList<>(); + + if (StringUtils.isEmpty(s)) { + suggestions.addAll(COMMANDS); + } + else { + for (String cmd : COMMANDS) { + if (cmd.toLowerCase().contains(s)) { + suggestions.add(cmd); + } + } + } + + return suggestions; + } + + private InterpreterResult processHelp(InterpreterResult.Code code, String additionalMessage) { + final StringBuffer buffer = new StringBuffer(); + + if (additionalMessage != null) { + buffer.append(additionalMessage).append("\n"); + } + + buffer.append(HELP).append("\n"); + + return new InterpreterResult(code, InterpreterResult.Type.TEXT, buffer.toString()); + } + + /** + * Processes a "get" request. 
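+   * Example of an accepted paragraph (illustrative): {@code get /logs/http/1}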
+ * + * @param urlItems Items of the URL + * @return Result of the get request, it contains a JSON-formatted string + */ + private InterpreterResult processGet(String[] urlItems) { + + if (urlItems.length != 3 + || StringUtils.isEmpty(urlItems[0]) + || StringUtils.isEmpty(urlItems[1]) + || StringUtils.isEmpty(urlItems[2])) { + return new InterpreterResult(InterpreterResult.Code.ERROR, + "Bad URL (it should be /index/type/id)"); + } + + final GetResponse response = client + .prepareGet(urlItems[0], urlItems[1], urlItems[2]) + .get(); + if (response.isExists()) { + final String json = gson.toJson(response.getSource()); + + return new InterpreterResult( + InterpreterResult.Code.SUCCESS, + InterpreterResult.Type.TEXT, + json); + } + + return new InterpreterResult(InterpreterResult.Code.ERROR, "Document not found"); + } + + /** + * Processes a "count" request. + * + * @param urlItems Items of the URL + * @param data May contains the JSON of the request + * @return Result of the count request, it contains the total hits + */ + private InterpreterResult processCount(String[] urlItems, String data) { + + if (urlItems.length > 2) { + return new InterpreterResult(InterpreterResult.Code.ERROR, + "Bad URL (it should be /index1,index2,.../type1,type2,...)"); + } + + final SearchResponse response = searchData(urlItems, data, 0); + + return new InterpreterResult( + InterpreterResult.Code.SUCCESS, + InterpreterResult.Type.TEXT, + "" + response.getHits().getTotalHits()); + } + + /** + * Processes a "search" request. + * + * @param urlItems Items of the URL + * @param data May contains the limit and the JSON of the request + * @return Result of the search request, it contains a tab-formatted string of the matching hits + */ + private InterpreterResult processSearch(String[] urlItems, String data, int size) { + + if (urlItems.length > 2) { + return new InterpreterResult(InterpreterResult.Code.ERROR, + "Bad URL (it should be /index1,index2,.../type1,type2,...)"); + } + + final SearchResponse response = searchData(urlItems, data, size); + + return new InterpreterResult( + InterpreterResult.Code.SUCCESS, + InterpreterResult.Type.TABLE, + buildResponseMessage(response.getHits().getHits())); + } + + /** + * Processes a "index" request. + * + * @param urlItems Items of the URL + * @param data JSON to be indexed + * @return Result of the index request, it contains the id of the document + */ + private InterpreterResult processIndex(String[] urlItems, String data) { + + if (urlItems.length < 2 || urlItems.length > 3) { + return new InterpreterResult(InterpreterResult.Code.ERROR, + "Bad URL (it should be /index/type or /index/type/id)"); + } + + final IndexResponse response = client + .prepareIndex(urlItems[0], urlItems[1], urlItems.length == 2 ? null : urlItems[2]) + .setSource(data) + .get(); + + return new InterpreterResult( + InterpreterResult.Code.SUCCESS, + InterpreterResult.Type.TEXT, + response.getId()); + } + + /** + * Processes a "delete" request. 
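+   * Example of an accepted paragraph (illustrative): {@code delete /logs/http/1}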
+ * + * @param urlItems Items of the URL + * @return Result of the delete request, it contains the id of the deleted document + */ + private InterpreterResult processDelete(String[] urlItems) { + + if (urlItems.length != 3 + || StringUtils.isEmpty(urlItems[0]) + || StringUtils.isEmpty(urlItems[1]) + || StringUtils.isEmpty(urlItems[2])) { + return new InterpreterResult(InterpreterResult.Code.ERROR, + "Bad URL (it should be /index/type/id)"); + } + + final DeleteResponse response = client + .prepareDelete(urlItems[0], urlItems[1], urlItems[2]) + .get(); + + if (response.isFound()) { + return new InterpreterResult( + InterpreterResult.Code.SUCCESS, + InterpreterResult.Type.TEXT, + response.getId()); + } + + return new InterpreterResult(InterpreterResult.Code.ERROR, "Document not found"); + } + + private SearchResponse searchData(String[] urlItems, String query, int size) { + + final SearchRequestBuilder reqBuilder = new SearchRequestBuilder( + client, SearchAction.INSTANCE); + reqBuilder.setIndices(); + + if (urlItems.length >= 1) { + reqBuilder.setIndices(StringUtils.split(urlItems[0], ",")); + } + if (urlItems.length > 1) { + reqBuilder.setTypes(StringUtils.split(urlItems[1], ",")); + } + + if (!StringUtils.isEmpty(query)) { + // The query can be either JSON-formatted, nor a Lucene query + // So, try to parse as a JSON => if there is an error, consider the query a Lucene one + try { + final Map source = gson.fromJson(query, Map.class); + reqBuilder.setExtraSource(source); + } + catch (JsonParseException e) { + // This is not a JSON (or maybe not well formatted...) + reqBuilder.setQuery(QueryBuilders.queryStringQuery(query).analyzeWildcard(true)); + } + } + + reqBuilder.setSize(size); + + final SearchResponse response = reqBuilder.get(); + + return response; + } + + private String buildResponseMessage(SearchHit[] hits) { + + if (hits == null || hits.length == 0) { + return ""; + } + + //First : get all the keys in order to build an ordered list of the values for each hit + // + final List<Map<String, Object>> flattenHits = new LinkedList<>(); + final Set<String> keys = new TreeSet<>(); + for (SearchHit hit : hits) { + final String json = hit.getSourceAsString(); + final Map<String, Object> flattenMap = JsonFlattener.flattenAsMap(json); + flattenHits.add(flattenMap); + + for (String key : flattenMap.keySet()) { + keys.add(key); + } + } + + // Next : build the header of the table + // + final StringBuffer buffer = new StringBuffer(); + for (String key : keys) { + buffer.append(key).append('\t'); + } + buffer.replace(buffer.lastIndexOf("\t"), buffer.lastIndexOf("\t") + 1, "\n"); + + // Finally : build the result by using the key set + // + for (Map<String, Object> hit : flattenHits) { + for (String key : keys) { + final Object val = hit.get(key); + if (val != null) { + buffer.append(val); + } + buffer.append('\t'); + } + buffer.replace(buffer.lastIndexOf("\t"), buffer.lastIndexOf("\t") + 1, "\n"); + } + + return buffer.toString(); + } +} http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/elasticsearch/src/test/java/org/apache/zeppelin/elasticsearch/ElasticsearchInterpreterTest.java ---------------------------------------------------------------------- diff --git a/elasticsearch/src/test/java/org/apache/zeppelin/elasticsearch/ElasticsearchInterpreterTest.java b/elasticsearch/src/test/java/org/apache/zeppelin/elasticsearch/ElasticsearchInterpreterTest.java new file mode 100644 index 0000000..839be66 --- /dev/null +++ 
b/elasticsearch/src/test/java/org/apache/zeppelin/elasticsearch/ElasticsearchInterpreterTest.java @@ -0,0 +1,171 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.zeppelin.elasticsearch; + +import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; +import static org.junit.Assert.assertEquals; + +import java.io.File; +import java.io.IOException; +import java.net.InetAddress; +import java.util.Arrays; +import java.util.Date; +import java.util.Properties; +import java.util.UUID; + +import org.apache.commons.lang.math.RandomUtils; +import org.apache.zeppelin.interpreter.InterpreterResult; +import org.apache.zeppelin.interpreter.InterpreterResult.Code; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest; +import org.elasticsearch.client.Client; +import org.elasticsearch.client.transport.TransportClient; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.transport.InetSocketTransportAddress; +import org.elasticsearch.node.Node; +import org.elasticsearch.node.NodeBuilder; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.Test; + +public class ElasticsearchInterpreterTest { + + private static Client elsClient; + private static Node elsNode; + private static ElasticsearchInterpreter interpreter; + + private static final String[] METHODS = { "GET", "PUT", "DELETE", "POST" }; + private static final String[] STATUS = { "200", "404", "500", "403" }; + + private static final String ELS_CLUSTER_NAME = "zeppelin-elasticsearch-interpreter-test"; + private static final String ELS_HOST = "localhost"; + private static final String ELS_TRANSPORT_PORT = "10300"; + private static final String ELS_HTTP_PORT = "10200"; + private static final String ELS_PATH = "/tmp/els"; + + + @BeforeClass + public static void populate() throws IOException { + + final Settings settings = Settings.settingsBuilder() + .put("cluster.name", ELS_CLUSTER_NAME) + .put("network.host", ELS_HOST) + .put("http.port", ELS_HTTP_PORT) + .put("transport.tcp.port", ELS_TRANSPORT_PORT) + .put("path.home", ELS_PATH) + .build(); + + elsNode = NodeBuilder.nodeBuilder().settings(settings).node(); + elsClient = elsNode.client(); + + for (int i = 0; i < 50; i++) { + elsClient.prepareIndex("logs", "http", "" + i) + .setRefresh(true) + .setSource(jsonBuilder() + .startObject() + .field("date", new Date()) + .startObject("request") + .field("method", METHODS[RandomUtils.nextInt(METHODS.length)]) + .field("url", "/zeppelin/" + UUID.randomUUID().toString()) + .field("headers", Arrays.asList("Accept: *.*", "Host: apache.org")) + .endObject() + .field("status", STATUS[RandomUtils.nextInt(STATUS.length)]) + ) + .get(); + } + + final Properties props = new Properties(); + 
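+    // The interpreter must point at the embedded test node: same cluster name, and the transport port (not the HTTP port).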
props.put(ElasticsearchInterpreter.ELASTICSEARCH_HOST, ELS_HOST); + props.put(ElasticsearchInterpreter.ELASTICSEARCH_PORT, ELS_TRANSPORT_PORT); + props.put(ElasticsearchInterpreter.ELASTICSEARCH_CLUSTER_NAME, ELS_CLUSTER_NAME); + interpreter = new ElasticsearchInterpreter(props); + interpreter.open(); + } + + @AfterClass + public static void clean() { + if (interpreter != null) { + interpreter.close(); + } + + if (elsClient != null) { + elsClient.admin().indices().delete(new DeleteIndexRequest("logs")).actionGet(); + elsClient.close(); + } + + if (elsNode != null) { + elsNode.close(); + } + } + + @Test + public void testCount() { + + InterpreterResult res = interpreter.interpret("count /unknown", null); + assertEquals(Code.ERROR, res.code()); + + res = interpreter.interpret("count /logs", null); + assertEquals("50", res.message()); + } + + @Test + public void testGet() { + + InterpreterResult res = interpreter.interpret("get /logs/http/unknown", null); + assertEquals(Code.ERROR, res.code()); + + res = interpreter.interpret("get /logs/http/10", null); + assertEquals(Code.SUCCESS, res.code()); + } + + @Test + public void testSearch() { + + InterpreterResult res = interpreter.interpret("size 10\nsearch /logs *", null); + assertEquals(Code.SUCCESS, res.code()); + + res = interpreter.interpret("search /logs {{{hello}}}", null); + assertEquals(Code.ERROR, res.code()); + + res = interpreter.interpret("search /logs { \"query\": { \"match\": { \"status\": 500 } } }", null); + assertEquals(Code.SUCCESS, res.code()); + + res = interpreter.interpret("search /logs status:404", null); + assertEquals(Code.SUCCESS, res.code()); + } + + @Test + public void testIndex() { + + InterpreterResult res = interpreter.interpret("index /logs { \"date\": \"" + new Date() + "\", \"method\": \"PUT\", \"status\": \"500\" }", null); + assertEquals(Code.ERROR, res.code()); + + res = interpreter.interpret("index /logs/http { \"date\": \"2015-12-06T14:54:23.368Z\", \"method\": \"PUT\", \"status\": \"500\" }", null); + assertEquals(Code.SUCCESS, res.code()); + } + + @Test + public void testDelete() { + + InterpreterResult res = interpreter.interpret("delete /logs/http/unknown", null); + assertEquals(Code.ERROR, res.code()); + + res = interpreter.interpret("delete /logs/http/11", null); + assertEquals("11", res.message()); + } + +} http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/pom.xml ---------------------------------------------------------------------- diff --git a/pom.xml b/pom.xml index 1984cca..5e492fa 100755 --- a/pom.xml +++ b/pom.xml @@ -100,6 +100,7 @@ <module>kylin</module> <module>lens</module> <module>cassandra</module> + <module>elasticsearch</module> <module>zeppelin-web</module> <module>zeppelin-server</module> <module>zeppelin-distribution</module> http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/zeppelin-distribution/src/bin_license/LICENSE ---------------------------------------------------------------------- diff --git a/zeppelin-distribution/src/bin_license/LICENSE b/zeppelin-distribution/src/bin_license/LICENSE index 76e5b23..1cce5dc 100644 --- a/zeppelin-distribution/src/bin_license/LICENSE +++ b/zeppelin-distribution/src/bin_license/LICENSE @@ -66,6 +66,31 @@ The following components are provided under Apache License. 
(Apache 2.0) lz4-java (net.jpountz.lz4:lz4:jar:1.3.0 - https://github.com/jpountz/lz4-java) (Apache 2.0) RoaringBitmap (org.roaringbitmap:RoaringBitmap:jar:0.4.5 - https://github.com/lemire/RoaringBitmap) (Apache 2.0) json4s (org.json4s:json4s-ast_2.10:jar:3.2.10 - https://github.com/json4s/json4s) + (Apache 2.0) HPPC Collections (com.carrotsearch:hppc:0.7.1 - http://labs.carrotsearch.com/hppc.html/hppc) + (Apache 2.0) Jackson-dataformat-CBOR (com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:2.6.2 - http://wiki.fasterxml.com/JacksonForCbor) + (Apache 2.0) Jackson-dataformat-Smile (com.fasterxml.jackson.dataformat:jackson-dataformat-smile:2.6.2 - http://wiki.fasterxml.com/JacksonForSmile) + (Apache 2.0) Jackson-dataformat-YAML (com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.6.2 - https://github.com/FasterXML/jackson) + (Apache 2.0) json-flattener (com.github.wnameless:json-flattener:0.1.1 - https://github.com/wnameless/json-flattener) + (Apache 2.0) Spatial4J (com.spatial4j:spatial4j:0.4.1 - https://github.com/spatial4j/spatial4j) + (Apache 2.0) T-Digest (com.tdunning:t-digest:3.0 - https://github.com/tdunning/t-digest) + (Apache 2.0) Netty (io.netty:netty:3.10.5.Final - http://netty.io/) + (Apache 2.0) Lucene Common Analyzers (org.apache.lucene:lucene-analyzers-common:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-analyzers-common) + (Apache 2.0) Lucene Memory (org.apache.lucene:lucene-backward-codecs:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-backward-codecs) + (Apache 2.0) Lucene Core (org.apache.lucene:lucene-core:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-core) + (Apache 2.0) Lucene Grouping (org.apache.lucene:lucene-grouping:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-grouping) + (Apache 2.0) Lucene Highlighter (org.apache.lucene:lucene-highlighter:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-highlighter) + (Apache 2.0) Lucene Join (org.apache.lucene:lucene-join:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-join) + (Apache 2.0) Lucene Memory (org.apache.lucene:lucene-memory:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-memory) + (Apache 2.0) Lucene Miscellaneous (org.apache.lucene:lucene-misc:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-misc) + (Apache 2.0) Lucene Queries (org.apache.lucene:lucene-queries:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-queries) + (Apache 2.0) Lucene QueryParsers (org.apache.lucene:lucene-queryparser:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-queryparser) + (Apache 2.0) Lucene Sandbox (org.apache.lucene:lucene-sandbox:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-sandbox) + (Apache 2.0) Lucene Spatial (org.apache.lucene:lucene-spatial:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-spatial) + (Apache 2.0) Lucene Spatial 3D (org.apache.lucene:lucene-spatial3d:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-spatial3d) + (Apache 2.0) Lucene Suggest (org.apache.lucene:lucene-suggest:5.3.1 - http://lucene.apache.org/lucene-parent/lucene-suggest) + (Apache 2.0) Elasticsearch: Core (org.elasticsearch:elasticsearch:2.1.0 - http://nexus.sonatype.org/oss-repository-hosting.html/parent/elasticsearch) + (Apache 2.0) Joda convert (org.joda:joda-convert:1.2 - http://joda-convert.sourceforge.net) + (Apache 2.0) SnakeYAML (org.yaml:snakeyaml:1.15 - http://www.snakeyaml.org) @@ -104,6 +129,7 @@ The following components are provided under the MIT License. 
(The MIT License) Objenesis (org.objenesis:objenesis:2.1 - https://github.com/easymock/objenesis) - Copyright (c) 2006-2015 the original author and authors (The MIT License) JCL 1.1.1 implemented over SLF4J (org.slf4j:jcl-over-slf4j:1.7.5 - http://www.slf4j.org) (The MIT License) JUL to SLF4J bridge (org.slf4j:jul-to-slf4j:1.7.5 - http://www.slf4j.org) + (The MIT License) minimal-json (com.eclipsesource.minimal-json:minimal-json:0.9.4 - https://github.com/ralfstx/minimal-json) @@ -178,3 +204,10 @@ See licenses/LICENSE-postgresql +======================================================================== +Creative Commons CC0 (http://creativecommons.org/publicdomain/zero/1.0/) +======================================================================== + + (CC0 1.0 Universal) JSR166e (com.twitter:jsr166e:1.1.0 - http://github.com/twitter/jsr166e) + (Public Domain, per Creative Commons CC0) HdrHistogram (org.hdrhistogram:HdrHistogram:2.1.6 - http://hdrhistogram.github.io/HdrHistogram/) + http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/7e902832/zeppelin-zengine/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java ---------------------------------------------------------------------- diff --git a/zeppelin-zengine/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java b/zeppelin-zengine/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java index 72b6a3c..a4d23e9 100755 --- a/zeppelin-zengine/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java +++ b/zeppelin-zengine/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java @@ -413,7 +413,8 @@ public class ZeppelinConfiguration extends XMLConfiguration { + "org.apache.zeppelin.cassandra.CassandraInterpreter," + "org.apache.zeppelin.geode.GeodeOqlInterpreter," + "org.apache.zeppelin.postgresql.PostgreSqlInterpreter," - + "org.apache.zeppelin.kylin.KylinInterpreter"), + + "org.apache.zeppelin.kylin.KylinInterpreter," + + "org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter"), ZEPPELIN_INTERPRETER_DIR("zeppelin.interpreter.dir", "interpreter"), ZEPPELIN_INTERPRETER_CONNECT_TIMEOUT("zeppelin.interpreter.connect.timeout", 30000), ZEPPELIN_ENCODING("zeppelin.encoding", "UTF-8"),