[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239536#comment-16239536
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user Vlad-Storona commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148957497
  
--- Diff: contrib/storage-opentsdb/README.md ---
@@ -0,0 +1,64 @@
+# drill-storage-openTSDB
+
+Implementation of the openTSDB storage plugin. The plugin uses the REST API to work with TSDB.
+
+For more information about openTSDB follow this link 
+
+The required params are:
+
+* metric - The name of a metric stored in the db.
+
+* start  - The start time for the query. This can be a relative or absolute timestamp.
+
+* aggregator - The name of an aggregation function to use.
+
+Optional params are:
+
+* downsample - An optional downsampling function to reduce the amount of data returned.
+
+* end - An end time for the query. If not supplied, the TSD will assume the local system time on the server.
+This may be a relative or absolute timestamp. If this param isn't specified, we send null to the db in this field, and the db will assume the local system time on the server.
+
+List of supported aggregators
+
+
+
+List of supported time 
+
+
+
+Params must be specified in the FROM clause of the query, separated by commas. For example:
+
+`openTSDB.(metric=metric_name, start=4d-ago, aggregator=sum)`
+
+Currently supported queries are listed below:
+
+```
+USE openTSDB
+```
+
+```
+SHOW tables
+```
+Will print available metrics. The maximum number of printed results is Integer.MAX_VALUE.
+
+```
+SELECT * FROM openTSDB.`(metric=warp.speed.test, start=47y-ago, aggregator=sum)`
+```
+Returns aggregated elements from the `warp.speed.test` table since 47y-ago
+
+```
+SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg, start=47y-ago)`
+```
+Returns aggregated elements from the `warp.speed.test` table
+
+```
+SELECT `timestamp`, sum(`aggregated value`) FROM openTSDB.`(metric=warp.speed.test, aggregator=avg, start=47y-ago)` GROUP BY `timestamp`
--- End diff --

Ok, I will add it.


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239537#comment-16239537
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user Vlad-Storona commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148957500
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/Constants.java
 ---
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+public interface Constants {
+  /**
+   * openTSDB required constants for API call
+   */
+  public static final String DEFAULT_TIME = "47y-ago";
--- End diff --

Yes, I remember that, but when we execute a query like `show tables;`, Drill must create the schema of the table. To create a schema we need to send a request to the db, and for this we need the required params. From this query, however, we can get only the metric name. That is why I use these default params.




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239541#comment-16239541
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user Vlad-Storona commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148957509
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/Schema.java
 ---
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client;
+
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.*;
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+import static 
org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.AGGREGATED_VALUE;
+import static 
org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.AGGREGATE_TAGS;
+import static 
org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.METRIC;
+import static 
org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.TIMESTAMP;
+
+/**
+ * Abstraction for representing structure of openTSDB table
+ */
+public class Schema {
+
+  private static final Logger log =
+      LoggerFactory.getLogger(Schema.class);
+
+  private final List<ColumnDTO> columns = new ArrayList<>();
+  private final Service db;
+  private final String name;
+
+  public Schema(Service db, String name) {
+    this.db = db;
+    this.name = name;
+    setupStructure();
+  }
+
+  private void setupStructure() {
+    columns.add(new ColumnDTO(METRIC.toString(), OpenTSDBTypes.STRING));
+    columns.add(new ColumnDTO(AGGREGATE_TAGS.toString(), OpenTSDBTypes.STRING));
+    columns.add(new ColumnDTO(TIMESTAMP.toString(), OpenTSDBTypes.TIMESTAMP));
+    columns.add(new ColumnDTO(AGGREGATED_VALUE.toString(), OpenTSDBTypes.DOUBLE));
+    columns.addAll(db.getUnfixedColumns(getParamsForQuery()));
+  }
+
+  /**
+   * Return a list with all column names and their types
+   *
+   * @return List of columns
+   */
+  public List<ColumnDTO> getColumns() {
+    return Collections.unmodifiableList(columns);
+  }
+
+  /**
+   * Number of columns in table
+   *
+   * @return number of table columns
+   */
+  public int getColumnCount() {
+    return columns.size();
+  }
+
+  /**
+   * @param columnIndex index of required column in table
+   * @return ColumnDTO
+   */
+  public ColumnDTO getColumnByIndex(int columnIndex) {
+    return columns.get(columnIndex);
+  }
+
+  // Create a map with the required params for querying metrics.
+  // Without these params, we cannot make an API request to the db.
+  private HashMap<String, String> getParamsForQuery() {
+    HashMap<String, String> params = new HashMap<>();
+    params.put(METRIC_PARAM, getValidTableName(name));
+    params.put(AGGREGATOR_PARAM, SUM_AGGREGATOR);
+    params.put(TIME_PARAM, DEFAULT_TIME);
--- End diff --

Explained this in the comment above.
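As context for reviewers, the fallback described in that comment can be sketched as a small stdlib-only helper. The constant values mirror the diff (`47y-ago`, `sum`), but the class and method names below are hypothetical, not the plugin's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: build the minimal param map the OpenTSDB REST API
// needs when Drill only knows the metric name (e.g. during SHOW TABLES).
// Constant values mirror the diff; the helper itself is hypothetical.
public class DefaultQueryParams {
  static final String METRIC_PARAM = "metric";
  static final String AGGREGATOR_PARAM = "aggregator";
  static final String TIME_PARAM = "start";
  static final String SUM_AGGREGATOR = "sum";
  static final String DEFAULT_TIME = "47y-ago";

  static Map<String, String> forMetric(String metricName) {
    Map<String, String> params = new HashMap<>();
    params.put(METRIC_PARAM, metricName);
    // start and aggregator are mandatory in the API, so fall back to
    // defaults wide enough to cover all stored data points.
    params.put(AGGREGATOR_PARAM, SUM_AGGREGATOR);
    params.put(TIME_PARAM, DEFAULT_TIME);
    return params;
  }

  public static void main(String[] args) {
    System.out.println(forMetric("warp.speed.test"));
  }
}
```

This makes the trade-off explicit: the defaults exist only because a `show tables;` request carries no start time or aggregator.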



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239543#comment-16239543
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user Vlad-Storona commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148957514
  
--- Diff: 
contrib/storage-opentsdb/src/test/java/org/apache/drill/store/openTSDB/TestOpenTSDBPlugin.java
 ---
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.store.openTSDB;
+
+import com.github.tomakehurst.wiremock.junit.WireMockRule;
+import org.apache.drill.PlanTestBase;
+import org.apache.drill.common.exceptions.UserRemoteException;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+
+import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
+import static com.github.tomakehurst.wiremock.client.WireMock.equalToJson;
+import static com.github.tomakehurst.wiremock.client.WireMock.get;
+import static com.github.tomakehurst.wiremock.client.WireMock.post;
+import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.DOWNSAMPLE_REQUEST_WITH_TAGS;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.DOWNSAMPLE_REQUEST_WTIHOUT_TAGS;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.POST_REQUEST_WITHOUT_TAGS;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.POST_REQUEST_WITH_TAGS;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.REQUEST_TO_NONEXISTENT_METRIC;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.SAMPLE_DATA_FOR_GET_TABLE_NAME_REQUEST;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.SAMPLE_DATA_FOR_GET_TABLE_REQUEST;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.SAMPLE_DATA_FOR_POST_DOWNSAMPLE_REQUEST_WITHOUT_TAGS;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.SAMPLE_DATA_FOR_POST_DOWNSAMPLE_REQUEST_WITH_TAGS;
+import static 
org.apache.drill.store.openTSDB.TestDataHolder.SAMPLE_DATA_FOR_POST_REQUEST_WITH_TAGS;
+
+public class TestOpenTSDBPlugin extends PlanTestBase {
+
+  protected static OpenTSDBStoragePlugin storagePlugin;
+  protected static OpenTSDBStoragePluginConfig storagePluginConfig;
+
+  @Rule
+  public WireMockRule wireMockRule = new WireMockRule(1);
+
+  @BeforeClass
+  public static void addTestDataToDB() throws Exception {
--- End diff --

Ok



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239539#comment-16239539
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user Vlad-Storona commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148957506
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/Util.java
 ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.base.Splitter;
+
+import java.util.Map;
+
+public class Util {
+
+  /**
+   * Parse FROM parameters to Map representation
+   *
+   * @param rowData with this syntax (metric=warp.speed.test)
+   * @return Map with params key: metric, value: warp.speed.test
+   */
+  public static Map<String, String> parseFromRowData(String rowData) {
+    String FROMRowData = rowData.replaceAll("[()]", "");
--- End diff --

I will rename it.
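For reference, what `parseFromRowData` does can be sketched with the stdlib alone. The real code uses Guava's `Splitter`; the class name and error text below are illustrative only:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stdlib-only sketch of the FROM-clause parsing: turn
// "(metric=warp.speed.test, start=4d-ago)" into a key/value map.
// The plugin's actual code uses Guava's Splitter.MapSplitter.
public class FromRowDataParser {
  static Map<String, String> parse(String rowData) {
    String stripped = rowData.replaceAll("[()]", "");
    Map<String, String> params = new LinkedHashMap<>();
    for (String chunk : stripped.split(",")) {
      String[] kv = chunk.trim().split("=", 2);
      if (kv.length != 2) {
        // Mirrors Guava's "Chunk [...] is not a valid entry" failure mode.
        throw new IllegalArgumentException(
            "Chunk [" + chunk.trim() + "] is not a valid entry");
      }
      params.put(kv[0].trim(), kv[1].trim());
    }
    return params;
  }

  public static void main(String[] args) {
    System.out.println(parse("(metric=warp.speed.test, start=4d-ago, aggregator=sum)"));
  }
}
```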




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239540#comment-16239540
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user Vlad-Storona commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148957508
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/Schema.java
 ---
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client;
+
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.*;
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+import static 
org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.AGGREGATED_VALUE;
+import static 
org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.AGGREGATE_TAGS;
+import static 
org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.METRIC;
+import static 
org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.TIMESTAMP;
+
+/**
+ * Abstraction for representing structure of openTSDB table
+ */
+public class Schema {
+
+  private static final Logger log =
+      LoggerFactory.getLogger(Schema.class);
+
+  private final List<ColumnDTO> columns = new ArrayList<>();
+  private final Service db;
+  private final String name;
+
+  public Schema(Service db, String name) {
+    this.db = db;
+    this.name = name;
+    setupStructure();
+  }
+
+  private void setupStructure() {
+    columns.add(new ColumnDTO(METRIC.toString(), OpenTSDBTypes.STRING));
+    columns.add(new ColumnDTO(AGGREGATE_TAGS.toString(), OpenTSDBTypes.STRING));
+    columns.add(new ColumnDTO(TIMESTAMP.toString(), OpenTSDBTypes.TIMESTAMP));
+    columns.add(new ColumnDTO(AGGREGATED_VALUE.toString(), OpenTSDBTypes.DOUBLE));
+    columns.addAll(db.getUnfixedColumns(getParamsForQuery()));
+  }
+
+  /**
+   * Return a list with all column names and their types
+   *
+   * @return List of columns
+   */
+  public List<ColumnDTO> getColumns() {
+    return Collections.unmodifiableList(columns);
+  }
+
+  /**
+   * Number of columns in table
+   *
+   * @return number of table columns
+   */
+  public int getColumnCount() {
+    return columns.size();
+  }
+
+  /**
+   * @param columnIndex index of required column in table
+   * @return ColumnDTO
+   */
+  public ColumnDTO getColumnByIndex(int columnIndex) {
+    return columns.get(columnIndex);
+  }
+
+  // Create a map with the required params for querying metrics.
+  // Without these params, we cannot make an API request to the db.
+  private HashMap<String, String> getParamsForQuery() {
--- End diff --

Ok. 



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239538#comment-16239538
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user Vlad-Storona commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148957504
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBSubScan.java
 ---
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.base.AbstractBase;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.PhysicalVisitor;
+import org.apache.drill.exec.physical.base.SubScan;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+
+@JsonTypeName("openTSDB-tablet-scan")
--- End diff --

Sure, I will rename it.




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239542#comment-16239542
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user Vlad-Storona commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148957511
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/query/DBQuery.java
 ---
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.query;
+
+import org.apache.drill.common.exceptions.UserException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * DBQuery is an abstraction of an openTSDB query
+ * that is used for extracting data from the storage system via a POST request to the DB.
+ *
+ * An OpenTSDB query requires at least one sub query,
+ * a means of selecting which time series should be included in the result set.
+ */
+public class DBQuery {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(DBQuery.class);
+  /**
+   * The start time for the query. This can be a relative or absolute timestamp.
+   */
+  private String start;
+  /**
+   * An end time for the query. If not supplied, the TSD will assume the local system time on the server.
+   * This may be a relative or absolute timestamp. If it isn't specified, we send null
+   * to the db in this field, and the db will assume the local system time on the server.
+   */
+  private String end;
+  /**
+   * One or more sub queries used to select the time series to return.
+   */
+  private Set queries;
+
+  private DBQuery(Builder builder) {
+    this.start = builder.start;
+    this.end = builder.end;
+    this.queries = builder.queries;
+  }
+
+  public String getStart() {
+    return start;
+  }
+
+  public String getEnd() {
+    return end;
+  }
+
+  public Set getQueries() {
+    return queries;
+  }
+
+  public static class Builder {
+
+    private String start;
+    private String end;
+    private Set queries = new HashSet<>();
+
+    public Builder() {
+    }
+
+    public Builder setStartTime(String startTime) {
+      if (startTime == null) {
+        throw UserException.validationError()
+            .message("start param must be specified")
+            .build(log);
+      }
+      this.start = startTime;
+      return this;
+    }
+
+    public Builder setEndTime(String endTime) {
+      this.end = endTime;
+      return this;
+    }
+
+    public Builder setQueries(Set queries) {
+      if (queries.isEmpty()) {
+        throw UserException.validationError()
+            .message("Required params such as metric, aggregator weren't specified. " +
+                "Add these params to the query")
--- End diff --

Ok
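The builder's validation contract above (start required, at least one sub query required, end optional) can be sketched in isolation. `IllegalStateException` stands in for Drill's `UserException` to keep the sketch self-contained, and the class and method names are illustrative:

```java
import java.util.HashSet;
import java.util.Set;

// Self-contained sketch of the DBQuery builder's validation contract.
public class DBQuerySketch {
  final String start;            // required: relative or absolute timestamp
  final String end;              // optional: TSD falls back to server time
  final Set<String> subQueries;  // at least one is required

  private DBQuerySketch(Builder b) {
    this.start = b.start;
    this.end = b.end;
    this.subQueries = b.subQueries;
  }

  public static class Builder {
    private String start;
    private String end;
    private final Set<String> subQueries = new HashSet<>();

    public Builder setStartTime(String startTime) {
      if (startTime == null) {
        // mirrors the diff: start is mandatory
        throw new IllegalStateException("start param must be specified");
      }
      this.start = startTime;
      return this;
    }

    public Builder setEndTime(String endTime) {
      this.end = endTime;  // may stay null
      return this;
    }

    public Builder addSubQuery(String subQuery) {
      subQueries.add(subQuery);
      return this;
    }

    public DBQuerySketch build() {
      if (subQueries.isEmpty()) {
        throw new IllegalStateException(
            "Required params such as metric and aggregator weren't specified");
      }
      return new DBQuerySketch(this);
    }
  }

  public static void main(String[] args) {
    DBQuerySketch q = new Builder()
        .setStartTime("47y-ago")
        .addSubQuery("sum:warp.speed.test")
        .build();
    System.out.println(q.start + " " + q.subQueries);
  }
}
```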



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239577#comment-16239577
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/774
  
@Vlad-Storona I have deployed your branch and noticed several more issues:
1. When I open the openTSDB storage plugin tab for the first time, its content shows as null.
2. When I indicate an incorrect connection, the following exception appears in the log:
```
 org.apache.drill.common.exceptions.ExecutionSetupException: Failure setting up new storage plugin configuration for config org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig@a837d5e8
```
3. When I try to query with incorrect syntax, the following error is 
displayed:
```
 0: jdbc:drill:drillbit=localhost> select * from `mymetric.stock`;
Error: SYSTEM ERROR: IllegalArgumentException: Chunk [mymetric.stock] is not a valid entry
 
 Caused by: java.lang.IllegalArgumentException: Chunk [mymetric.stock] is not a valid entry
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145) ~[guava-18.0.jar:na]
at com.google.common.base.Splitter$MapSplitter.split(Splitter.java:508) ~[guava-18.0.jar:na]
4. When I try to enable a storage plugin config with an incorrect connection, it fails. I think we should allow enabling the config even if we cannot connect to openTSDB, as other plugins do, and only fail when we try to query the data.
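The behavior suggested in point 4 can be sketched as follows: accept the config at enable time without any network I/O, and only attempt the connection when a query actually runs. The class and method names below are illustrative, not Drill's plugin API:

```java
// Hedged sketch of lazy connection validation: the constructor stores the
// connection string without touching the network, so enabling the config
// always succeeds; the failure surfaces only at query time.
class LazyPlugin {
    private final String connection;

    LazyPlugin(String connection) {
        this.connection = connection;      // no network I/O here
    }

    String query(String sql) {
        // Stand-in for a real connection attempt against openTSDB.
        if (!connection.startsWith("http")) {
            throw new IllegalStateException("cannot connect to " + connection);
        }
        return "ok";
    }
}
```

With this shape, a bad connection string is a query-time error rather than a config-time error, matching how the other storage plugins behave.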


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239578#comment-16239578
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r148959786
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/Schema.java
 ---
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client;
+
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.*;
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+import static org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.AGGREGATED_VALUE;
+import static org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.AGGREGATE_TAGS;
+import static org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.METRIC;
+import static org.apache.drill.exec.store.openTSDB.client.Schema.DefaultColumns.TIMESTAMP;
+
+/**
+ * Abstraction representing the structure of an openTSDB table
+ */
+public class Schema {
+
+  private static final Logger log = LoggerFactory.getLogger(Schema.class);
+
+  private final List<ColumnDTO> columns = new ArrayList<>();
+  private final Service db;
+  private final String name;
+
+  public Schema(Service db, String name) {
+this.db = db;
+this.name = name;
+setupStructure();
+  }
+
+  private void setupStructure() {
+columns.add(new ColumnDTO(METRIC.toString(), OpenTSDBTypes.STRING));
+columns.add(new ColumnDTO(AGGREGATE_TAGS.toString(), OpenTSDBTypes.STRING));
+columns.add(new ColumnDTO(TIMESTAMP.toString(), OpenTSDBTypes.TIMESTAMP));
+columns.add(new ColumnDTO(AGGREGATED_VALUE.toString(), OpenTSDBTypes.DOUBLE));
+columns.addAll(db.getUnfixedColumns(getParamsForQuery()));
+  }
+
+  /**
+   * Returns a list of all column names and their types
+   *
+   * @return unmodifiable list of ColumnDTO
+   */
+  public List<ColumnDTO> getColumns() {
+return Collections.unmodifiableList(columns);
+  }
+
+  /**
+   * Number of columns in table
+   *
+   * @return number of table columns
+   */
+  public int getColumnCount() {
+return columns.size();
+  }
+
+  /**
+   * @param columnIndex index of required column in table
+   * @return ColumnDTO
+   */
+  public ColumnDTO getColumnByIndex(int columnIndex) {
+return columns.get(columnIndex);
+  }
+
+  // Creates a map with the params required for querying metrics.
+  // Without these params, we cannot make an API request to the db.
+  private HashMap<String, String> getParamsForQuery() {
+HashMap<String, String> params = new HashMap<>();
+params.put(METRIC_PARAM, getValidTableName(name));
+params.put(AGGREGATOR_PARAM, SUM_AGGREGATOR);
+params.put(TIME_PARAM, DEFAULT_TIME);
--- End diff --

Please add a comment in the code explaining why the defaults are needed in this case.
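The defaults being discussed can be sketched as below. A query such as `SELECT * FROM openTSDB.\`metric.name\`` carries only a metric name, yet the REST request still needs an aggregator and a start time, so the schema fills them in. The constant values here are assumptions for illustration, taken from the README examples, not necessarily the plugin's exact defaults:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: when only a metric name appears in the FROM clause,
// the plugin must still send a complete request to OpenTSDB, so required
// params that the user did not supply are filled with defaults.
public class QueryDefaults {
    static final String METRIC_PARAM = "metric";
    static final String AGGREGATOR_PARAM = "aggregator";
    static final String TIME_PARAM = "start";
    static final String SUM_AGGREGATOR = "sum";       // assumed default aggregator
    static final String DEFAULT_TIME = "47y-ago";     // assumed "everything" start time

    static Map<String, String> paramsForMetric(String metricName) {
        Map<String, String> params = new HashMap<>();
        params.put(METRIC_PARAM, metricName);
        // Defaults are required because the OpenTSDB REST API rejects
        // requests that omit the aggregator or the start time.
        params.put(AGGREGATOR_PARAM, SUM_AGGREGATOR);
        params.put(TIME_PARAM, DEFAULT_TIME);
        return params;
    }
}
```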


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.

[jira] [Commented] (DRILL-5923) State of a successfully completed query shown as "COMPLETED"

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239607#comment-16239607
 ] 

ASF GitHub Bot commented on DRILL-5923:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1021#discussion_r148961643
  
--- Diff: exec/java-exec/src/main/resources/rest/profile/profile.ftl ---
@@ -135,6 +135,8 @@ table.sortable thead .sorting_desc { background-image: 
url("/static/img/black-de
 
   <#assign queueName = model.getProfile().getQueueName() />
   <#assign queued = queueName != "" && queueName != "-" />
+  <#assign queryStateDisplayName = ["Starting", "Running", "Succeeded", 
"Canceled", "Failed",
--- End diff --

Maybe you can create a common Freemarker function that list.ftl and profile.ftl will use to decode the state name; that way we won't duplicate the display names and can be sure they stay in sync.


> State of a successfully completed query shown as "COMPLETED"
> 
>
> Key: DRILL-5923
> URL: https://issues.apache.org/jira/browse/DRILL-5923
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: 1.12.0
>
>
> Drill UI currently lists a successfully completed query as "COMPLETED". 
> Successfully completed, failed and canceled queries are all grouped as 
> Completed queries. 
> It would be better to list the state of a successfully completed query as 
> "Succeeded" to avoid confusion.





[jira] [Commented] (DRILL-5923) State of a successfully completed query shown as "COMPLETED"

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239647#comment-16239647
 ] 

ASF GitHub Bot commented on DRILL-5923:
---

Github user prasadns14 commented on a diff in the pull request:

https://github.com/apache/drill/pull/1021#discussion_r148963451
  
--- Diff: exec/java-exec/src/main/resources/rest/profile/profile.ftl ---
@@ -135,6 +135,8 @@ table.sortable thead .sorting_desc { background-image: 
url("/static/img/black-de
 
   <#assign queueName = model.getProfile().getQueueName() />
   <#assign queued = queueName != "" && queueName != "-" />
+  <#assign queryStateDisplayName = ["Starting", "Running", "Succeeded", 
"Canceled", "Failed",
--- End diff --

Yes, I could create a common freemarker function. But if the changes are 
not made in ProfileResources.java then the REST API /profile will still show 
the query state as "COMPLETED". It won't be in sync with the UI.
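One way to realize the single-source-of-truth idea in this thread is an enum that carries the display names, readable from both the REST layer and the templates. This is a hedged sketch; the enum and its members are hypothetical, not Drill's actual code:

```java
// Illustrative sketch: one enum maps internal query states to their display
// names, so the Freemarker templates and the REST layer cannot drift apart.
public enum QueryStateDisplay {
    STARTING("Starting"),
    RUNNING("Running"),
    COMPLETED("Succeeded"),   // the rename this PR is about
    CANCELED("Canceled"),
    FAILED("Failed");

    private final String displayName;

    QueryStateDisplay(String displayName) {
        this.displayName = displayName;
    }

    public String getDisplayName() {
        return displayName;
    }
}
```

A REST handler could then return `QueryStateDisplay.valueOf(state).getDisplayName()` instead of the raw state name, addressing the sync concern raised above.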


> State of a successfully completed query shown as "COMPLETED"
> 
>
> Key: DRILL-5923
> URL: https://issues.apache.org/jira/browse/DRILL-5923
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: 1.12.0
>
>
> Drill UI currently lists a successfully completed query as "COMPLETED". 
> Successfully completed, failed and canceled queries are all grouped as 
> Completed queries. 
> It would be better to list the state of a successfully completed query as 
> "Succeeded" to avoid confusion.





[jira] [Updated] (DRILL-5834) Add Networking Functions

2017-11-05 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5834:

Labels: doc-impacting ready-to-commit  (was: doc-impacting)

> Add Networking Functions
> 
>
> Key: DRILL-5834
> URL: https://issues.apache.org/jira/browse/DRILL-5834
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Minor
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> On the heels of the PCAP plugin, this is a collection of functions that would 
> facilitate network analysis using Drill. 
> The functions include:
> inet_aton(): Converts an IPv4 address into an integer.
> inet_ntoa( ): Converts an integer IP into dotted decimal notation
> in_network( , ): Returns true if the IP address is in the given 
> CIDR block
> address_count(  ): Returns the number of IPs in a given CIDR block
> broadcast_address(  ): Returns the broadcast address for a given CIDR 
> block
> netmask( ): Returns the netmask for a given CIDR block.
> low_address(): Returns the first address in a given CIDR block.
> high_address(): Returns the last address in a given CIDR block.
> url_encode(  ): Returns a URL encoded string.
> url_decode(  ): Decodes a URL encoded string.
> is_valid_IP(): Returns true if the IP is a valid IP address
> is_private_ip(): Returns true if the IP is a private IPv4 address
> is_valid_IPv4(): Returns true if the IP is a valid IPv4 address
> is_valid_IPv6(): Returns true if the IP is a valid IPv6 address





[jira] [Commented] (DRILL-5923) State of a successfully completed query shown as "COMPLETED"

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239688#comment-16239688
 ] 

ASF GitHub Bot commented on DRILL-5923:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1021#discussion_r148967310
  
--- Diff: exec/java-exec/src/main/resources/rest/profile/profile.ftl ---
@@ -135,6 +135,8 @@ table.sortable thead .sorting_desc { background-image: 
url("/static/img/black-de
 
   <#assign queueName = model.getProfile().getQueueName() />
   <#assign queued = queueName != "" && queueName != "-" />
+  <#assign queryStateDisplayName = ["Starting", "Running", "Succeeded", 
"Canceled", "Failed",
--- End diff --

Prasad, maybe you can come up with a different way to avoid duplicating the display names?


> State of a successfully completed query shown as "COMPLETED"
> 
>
> Key: DRILL-5923
> URL: https://issues.apache.org/jira/browse/DRILL-5923
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: 1.12.0
>
>
> Drill UI currently lists a successfully completed query as "COMPLETED". 
> Successfully completed, failed and canceled queries are all grouped as 
> Completed queries. 
> It would be better to list the state of a successfully completed query as 
> "Succeeded" to avoid confusion.





[jira] [Commented] (DRILL-5878) TableNotFound exception is being reported for a wrong storage plugin.

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239703#comment-16239703
 ] 

ASF GitHub Bot commented on DRILL-5878:
---

Github user amansinha100 commented on the issue:

https://github.com/apache/drill/pull/996
  
Merged in 7a2fc87ee20f706d85cb5c90cc441e6b44b71592.  @HanumathRao  pls 
close the PR. 


> TableNotFound exception is being reported for a wrong storage plugin.
> -
>
> Key: DRILL-5878
> URL: https://issues.apache.org/jira/browse/DRILL-5878
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Hanumath Rao Maduri
>Assignee: Hanumath Rao Maduri
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Drill is reporting TableNotFound exception for a wrong storage plugin. 
> Consider the following query where employee.json is queried using cp plugin.
> {code}
> 0: jdbc:drill:zk=local> select * from cp.`employee.json` limit 10;
> +--++-++--+-+---++-++--++---+-+-++
> | employee_id  | full_name  | first_name  | last_name  | position_id  
> | position_title  | store_id  | department_id  | birth_date  |   
> hire_date|  salary  | supervisor_id  |  education_level  | 
> marital_status  | gender  |  management_role   |
> +--++-++--+-+---++-++--++---+-+-++
> | 1| Sheri Nowmer   | Sheri   | Nowmer | 1
> | President   | 0 | 1  | 1961-08-26  | 
> 1994-12-01 00:00:00.0  | 8.0  | 0  | Graduate Degree   | S
>| F   | Senior Management  |
> | 2| Derrick Whelply| Derrick | Whelply| 2
> | VP Country Manager  | 0 | 1  | 1915-07-03  | 
> 1994-12-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | M
>| M   | Senior Management  |
> | 4| Michael Spence | Michael | Spence | 2
> | VP Country Manager  | 0 | 1  | 1969-06-20  | 
> 1998-01-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | S
>| M   | Senior Management  |
> | 5| Maya Gutierrez | Maya| Gutierrez  | 2
> | VP Country Manager  | 0 | 1  | 1951-05-10  | 
> 1998-01-01 00:00:00.0  | 35000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 6| Roberta Damstra| Roberta | Damstra| 3
> | VP Information Systems  | 0 | 2  | 1942-10-08  | 
> 1994-12-01 00:00:00.0  | 25000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 7| Rebecca Kanagaki   | Rebecca | Kanagaki   | 4
> | VP Human Resources  | 0 | 3  | 1949-03-27  | 
> 1994-12-01 00:00:00.0  | 15000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 8| Kim Brunner| Kim | Brunner| 11   
> | Store Manager   | 9 | 11 | 1922-08-10  | 
> 1998-01-01 00:00:00.0  | 1.0  | 5  | Bachelors Degree  | S
>| F   | Store Management   |
> | 9| Brenda Blumberg| Brenda  | Blumberg   | 11   
> | Store Manager   | 21| 11 | 1979-06-23  | 
> 1998-01-01 00:00:00.0  | 17000.0  | 5  | Graduate Degree   | M
>| F   | Store Management   |
> | 10   | Darren Stanz   | Darren  | Stanz  | 5
> | VP Finance  | 0 | 5  | 1949-08-26  | 
> 1994-12-01 00:00:00.0  | 5.0  | 1  | Partial College   | M
>| M   | Senior Management  |
> | 11   | Jonathan Murraiin  | Jonathan| Murraiin   | 11   
> | Store Manager   | 1 | 11 | 1967-06-20  | 
> 1998-01-01 00:00:00.0  | 15000.0  | 5  | Graduate Degree   | S
>| M   | Store Management   |
> +--++-++--+-

[jira] [Updated] (DRILL-5864) Selecting a non-existing field from a MapR-DB JSON table fails with NPE

2017-11-05 Thread Aman Sinha (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Sinha updated DRILL-5864:
--
Fix Version/s: 1.12.0

> Selecting a non-existing field from a MapR-DB JSON table fails with NPE
> ---
>
> Key: DRILL-5864
> URL: https://issues.apache.org/jira/browse/DRILL-5864
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators, Storage - MapRDB
>Affects Versions: 1.12.0
>Reporter: Abhishek Girish
>Assignee: Hanumath Rao Maduri
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
> Attachments: OrderByNPE.log, OrderByNPE2.log
>
>
> Query 1
> {code}
> > select C_FIRST_NAME,C_BIRTH_COUNTRY,C_BIRTH_YEAR,C_BIRTH_MONTH,C_BIRTH_DAY 
> > from customer ORDER BY C_BIRTH_COUNTRY ASC, C_FIRST_NAME ASC LIMIT 10;
> Error: SYSTEM ERROR: NullPointerException
>   (java.lang.NullPointerException) null
> org.apache.drill.exec.record.SchemaUtil.coerceContainer():176
> 
> org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.convertBatch():124
> org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.add():90
> org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch():265
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch():421
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():357
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():302
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():115
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():115
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():134
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():422
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1149
> java.util.concurrent.ThreadPoolExecutor$Worker.run():624
> java.lang.Thread.run():748 (state=,code=0)
> {code}
> Plan
> {code}
> 00-00Screen
> 00-01  Project(C_FIRST_NAME=[$0], C_BIRTH_COUNTRY=[$1], 
> C_BIRTH_YEAR=[$2], C_BIRTH_MONTH=[$3], C_BIRTH_DAY=[$4])
> 00-02SelectionVectorRemover
> 00-03  Limit(fetch=[10])
> 00-04Limit(fetch=[10])
> 00-05  SelectionVectorRemover
> 00-06Sort(sort0=[$1], sort1=[$0], dir0=[ASC], dir1=[ASC])
> 00-07  Scan(groupscan=[JsonTableGroupScan 
> [ScanSpec=JsonScanSpec 
> [tableName=maprfs:///drill/testdata/tpch/sf1/maprdb/json/range/customer, 
> condition=null], columns=[`C_FIRST_N

[jira] [Commented] (DRILL-5864) Selecting a non-existing field from a MapR-DB JSON table fails with NPE

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239705#comment-16239705
 ] 

ASF GitHub Bot commented on DRILL-5864:
---

Github user amansinha100 commented on the issue:

https://github.com/apache/drill/pull/1007
  
Merged in 125a9271d7cf0dfb30aac8e62447507ea0a7d6c9.  @HanumathRao pls close 
the PR (for some reason I don't have permission).  


> Selecting a non-existing field from a MapR-DB JSON table fails with NPE
> ---
>
> Key: DRILL-5864
> URL: https://issues.apache.org/jira/browse/DRILL-5864
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators, Storage - MapRDB
>Affects Versions: 1.12.0
>Reporter: Abhishek Girish
>Assignee: Hanumath Rao Maduri
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
> Attachments: OrderByNPE.log, OrderByNPE2.log
>
>
> Query 1
> {code}
> > select C_FIRST_NAME,C_BIRTH_COUNTRY,C_BIRTH_YEAR,C_BIRTH_MONTH,C_BIRTH_DAY 
> > from customer ORDER BY C_BIRTH_COUNTRY ASC, C_FIRST_NAME ASC LIMIT 10;
> Error: SYSTEM ERROR: NullPointerException
>   (java.lang.NullPointerException) null
> org.apache.drill.exec.record.SchemaUtil.coerceContainer():176
> 
> org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.convertBatch():124
> org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.add():90
> org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch():265
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch():421
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():357
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():302
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():115
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():115
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():134
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():422
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1149
> java.util.concurrent.ThreadPoolExecutor$Worker.run():624
> java.lang.Thread.run():748 (state=,code=0)
> {code}
> Plan
> {code}
> 00-00Screen
> 00-01  Project(C_FIRST_NAME=[$0], C_BIRTH_COUNTRY=[$1], 
> C_BIRTH_YEAR=[$2], C_BIRTH_MONTH=[$3], C_BIRTH_DAY=[$4])
> 00-02SelectionVectorRemover
> 00-03  Limit(fetch=[10])
> 00-04Limit(fetch=[10])
> 00-05  SelectionVectorRemover
> 00

[jira] [Created] (DRILL-5933) Support spring-boot package

2017-11-05 Thread Yi Zhao (JIRA)
Yi Zhao created DRILL-5933:
--

 Summary: Support spring-boot package
 Key: DRILL-5933
 URL: https://issues.apache.org/jira/browse/DRILL-5933
 Project: Apache Drill
  Issue Type: New Feature
Reporter: Yi Zhao


I am using apache-drill in my application, which requires a web server (tomcat) 
and apache-drill. With the help of spring-boot, I can package all dependencies 
into one big jar file. It really helps to deploy on the production server. 
However, I couldn't package drill into my jar file. It would be great if drill 
could integrate with spring-boot, which would make CI/CD much easier. 





[jira] [Created] (DRILL-5934) How to deploy drill by sub-component?

2017-11-05 Thread Yi Zhao (JIRA)
Yi Zhao created DRILL-5934:
--

 Summary: How to deploy drill by sub-component?
 Key: DRILL-5934
 URL: https://issues.apache.org/jira/browse/DRILL-5934
 Project: Apache Drill
  Issue Type: New Feature
Reporter: Yi Zhao


I know drill is great for managing multiple data sources. But its size is quite 
big, more than 200 MB. If I only need to support `mongodb` and `mysql`, can I get 
a package that requires only the `mongodb`- and `mysql`-related dependencies? 
That would reduce the size of the deploy artifact.





[jira] [Commented] (DRILL-5934) How to deploy drill by sub-component?

2017-11-05 Thread Paul Rogers (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239793#comment-16239793
 ] 

Paul Rogers commented on DRILL-5934:


At present, Drill is a monolith and requires all components and jars to 
operate. This is one reason that the JDBC jar is so large when it includes the 
ability to start an embedded Drillbit.

> How to deploy drill by sub-component?
> -
>
> Key: DRILL-5934
> URL: https://issues.apache.org/jira/browse/DRILL-5934
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Yi Zhao
>
> I know drill is great to manage multiple data sources. But its size is quite 
> big, more than 200MB. If I need to support `mongodb` and `mysql`, whether I 
> can get a package only requires `mongodb` and `mysql` related dependencies? 
> That will reduce the size of the deploy artifact.





[jira] [Commented] (DRILL-5842) Refactor and simplify the fragment, operator contexts for testing

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239852#comment-16239852
 ] 

ASF GitHub Bot commented on DRILL-5842:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/978
  
@sohami, pushed a commit that addresses your PR comments.

Regarding the "one stop shop" comment. This PR removes the 
`OperExecContextImpl` class. That class was an earlier attempt to combine the 
three items into one. Additional experience showed that the existing operator 
context could do that task instead.

This PR did not change existing operators to avoid passing around multiple 
items. Instead, it allows new code (for the batch size limitation project) to 
pass just the operator context, and use that to obtain the other two items. The 
external sort had to be changed because it used the old `OperExecContextImpl` 
for that purpose, and so the code had to be updated when `OperExecContextImpl` 
was removed.
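The interface extraction described above can be sketched briefly: operators depend on a small context interface, so tests can supply a lightweight implementation instead of booting a full Drillbit. All names here are hypothetical, chosen only to show the shape of the refactoring:

```java
// Illustrative sketch of context interfaces with separate runtime and
// test implementations, the pattern this PR applies to Drill's contexts.
interface OperatorContext {
    long allocatedMemory();
}

// The production implementation would delegate to real Drillbit state.
class RuntimeOperatorContext implements OperatorContext {
    public long allocatedMemory() {
        return 0;  // would query the real buffer allocator here
    }
}

// The test implementation needs no server at all.
class TestOperatorContext implements OperatorContext {
    private final long memory;

    TestOperatorContext(long memory) {
        this.memory = memory;
    }

    public long allocatedMemory() {
        return memory;
    }
}
```

Because operators see only the interface, a unit test can construct `TestOperatorContext` directly, which is the barrier-lowering effect the ticket describes.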


> Refactor and simplify the fragment, operator contexts for testing
> -
>
> Key: DRILL-5842
> URL: https://issues.apache.org/jira/browse/DRILL-5842
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> Drill's execution engine has a "fragment context" that provides state for a 
> fragment as a whole, and an "operator context" which provides state for a 
> single operator. Historically, these have both been concrete classes that 
> make generous references to the Drillbit context, and hence need a full Drill 
> server in order to operate.
> Drill has historically made extensive use of system-level testing: build the 
> entire server and fire queries at it to test each component. Over time, we 
> are augmenting that approach with unit tests: the ability to test each 
> operator (or parts of an operator) in isolation.
> Since each operator requires access to both the operator and fragment 
> context, the fact that the contexts depend on the overall server creates a 
> large barrier to unit testing. An earlier checkin started down the path of 
> defining the contexts as interfaces that can have different run-time and 
> test-time implementations to enable testing.
> This ticket asks to refactor those interfaces: simplifying the operator 
> context and introducing an interface for the fragment context. New code will 
> use these new interfaces, while older code continues to use the concrete 
> implementations. Over time, as operators are enhanced, they can be modified 
> to allow unit-level testing.





[jira] [Commented] (DRILL-4990) Use new HDFS API access instead of listStatus to check if users have permissions to access workspace.

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239892#comment-16239892 ]

ASF GitHub Bot commented on DRILL-4990:
---

Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/652#discussion_r148991415
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java ---
@@ -151,17 +152,32 @@ public WorkspaceSchemaFactory(
    */
   public boolean accessible(final String userName) throws IOException {
     final FileSystem fs = ImpersonationUtil.createFileSystem(userName, fsConf);
+    boolean tryListStatus = false;
     try {
-      // We have to rely on the listStatus as a FileSystem can have complicated controls such as regular unix style
-      // permissions, Access Control Lists (ACLs) or Access Control Expressions (ACE). Hadoop 2.7 version of FileSystem
-      // has a limited private API (FileSystem.access) to check the permissions directly
-      // (see https://issues.apache.org/jira/browse/HDFS-6570). Drill currently relies on Hadoop 2.5.0 version of
-      // FileClient. TODO: Update this when DRILL-3749 is fixed.
-      fs.listStatus(wsPath);
+      // access API checks if a user has certain permissions on a file or directory.
+      // returns normally if requested permissions are granted and throws an exception
+      // if access is denied. This API was added in HDFS 2.6 (see HDFS-6570).
+      // It is less expensive (than listStatus which was being used before) and hides the
+      // complicated access control logic underneath.
+      fs.access(wsPath, FsAction.READ);
     } catch (final UnsupportedOperationException e) {
-      logger.trace("The filesystem for this workspace does not support this operation.", e);
+      logger.debug("The filesystem for this workspace does not support access operation.", e);
+      tryListStatus = true;
     } catch (final FileNotFoundException | AccessControlException e) {
-      return false;
+      logger.debug("file {} not found or cannot be accessed", wsPath.toString(), e);
+      tryListStatus = true;
--- End diff --

Is the access function never trustworthy for negative cases? Or is it a 
Windows-platform-specific issue?
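
The access-then-fallback flow under review can be sketched in isolation. This is a minimal sketch under stated assumptions, not Drill's actual code: the `Fs` interface below is a hypothetical stand-in for the two Hadoop `FileSystem` calls used in the patch (`access` and `listStatus`), so the real types and exception hierarchy differ.

```java
import java.io.IOException;

public class AccessCheckSketch {
  // Hypothetical stand-in for the two Hadoop FileSystem calls used in
  // the patch (FileSystem.access and FileSystem.listStatus).
  interface Fs {
    void access(String path) throws IOException;      // may throw UnsupportedOperationException
    void listStatus(String path) throws IOException;
  }

  // Prefer the cheap access() probe; fall back to listStatus() when
  // access() is unsupported or reports a failure, mirroring the
  // tryListStatus flag in the diff above.
  static boolean accessible(Fs fs, String path) {
    boolean tryListStatus = false;
    try {
      fs.access(path);
    } catch (UnsupportedOperationException e) {
      tryListStatus = true;   // filesystem has no access() support
    } catch (IOException e) {
      tryListStatus = true;   // not found / denied: double-check below
    }
    if (tryListStatus) {
      try {
        fs.listStatus(path);
      } catch (UnsupportedOperationException | IOException e) {
        return false;         // both probes failed: not accessible
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // access() unsupported, but listStatus() works: workspace is usable.
    Fs noAccessApi = new Fs() {
      public void access(String p) { throw new UnsupportedOperationException(); }
      public void listStatus(String p) { }
    };
    System.out.println(accessible(noAccessApi, "/ws")); // prints true
  }
}
```

Note that in this sketch a failure of `access` only marks the workspace for a re-check; only a failure of both probes reports it as inaccessible, which is the behavior the review question above probes.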


> Use new HDFS API access instead of listStatus to check if users have 
> permissions to access workspace.
> -
>
> Key: DRILL-4990
> URL: https://issues.apache.org/jira/browse/DRILL-4990
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.8.0
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>
> For every query, we build the schema tree 
> (runSQL->getPlan->getNewDefaultSchema->getRootSchema). All workspaces in all 
> storage plugins are checked and are added to the schema tree if they are 
> accessible by the user who initiated the query.  For file system plugin, 
> listStatus API is used to check if  the workspace is accessible or not 
> (WorkspaceSchemaFactory.accessible) by the user. The idea seems to be that if the 
> user does not have access to file(s) in the workspace, listStatus will 
> generate an exception and we return false. But, listStatus (which lists all 
> the entries of a directory) is an expensive operation when there is a large 
> number of files in the directory. A new API was added in Hadoop 2.6, called 
> access (HDFS-6570) which provides the ability to check if the user has 
> permissions on a file/directory.  Use this new API instead of listStatus. 





[jira] [Commented] (DRILL-4990) Use new HDFS API access instead of listStatus to check if users have permissions to access workspace.

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239893#comment-16239893 ]

ASF GitHub Bot commented on DRILL-4990:
---

Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/652#discussion_r148991210
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java ---
@@ -151,17 +152,32 @@ public WorkspaceSchemaFactory(
    */
   public boolean accessible(final String userName) throws IOException {
     final FileSystem fs = ImpersonationUtil.createFileSystem(userName, fsConf);
+    boolean tryListStatus = false;
     try {
-      // We have to rely on the listStatus as a FileSystem can have complicated controls such as regular unix style
-      // permissions, Access Control Lists (ACLs) or Access Control Expressions (ACE). Hadoop 2.7 version of FileSystem
-      // has a limited private API (FileSystem.access) to check the permissions directly
-      // (see https://issues.apache.org/jira/browse/HDFS-6570). Drill currently relies on Hadoop 2.5.0 version of
-      // FileClient. TODO: Update this when DRILL-3749 is fixed.
-      fs.listStatus(wsPath);
+      // access API checks if a user has certain permissions on a file or directory.
+      // returns normally if requested permissions are granted and throws an exception
+      // if access is denied. This API was added in HDFS 2.6 (see HDFS-6570).
+      // It is less expensive (than listStatus which was being used before) and hides the
+      // complicated access control logic underneath.
+      fs.access(wsPath, FsAction.READ);
     } catch (final UnsupportedOperationException e) {
-      logger.trace("The filesystem for this workspace does not support this operation.", e);
+      logger.debug("The filesystem for this workspace does not support access operation.", e);
+      tryListStatus = true;
     } catch (final FileNotFoundException | AccessControlException e) {
-      return false;
+      logger.debug("file {} not found or cannot be accessed", wsPath.toString(), e);
+      tryListStatus = true;
+    }
+
+    // if fs.access fails for some reason, fall back to listStatus.
+    if (tryListStatus) {
+      try {
+        fs.listStatus(wsPath);
+      } catch (final UnsupportedOperationException e) {
+        logger.debug("The filesystem for this workspace does not support listStatus operation.", e);
+      } catch (final FileNotFoundException | AccessControlException e) {
+        logger.debug("file {} not found or cannot be accessed", wsPath.toString(), e);
--- End diff --

Let's add `using listStatus` in the log statement to differentiate it from the 
previous case.


> Use new HDFS API access instead of listStatus to check if users have 
> permissions to access workspace.
> -
>
> Key: DRILL-4990
> URL: https://issues.apache.org/jira/browse/DRILL-4990
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.8.0
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>





[jira] [Commented] (DRILL-5832) Migrate OperatorFixture to use SystemOptionManager rather than mock

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239895#comment-16239895 ]

ASF GitHub Bot commented on DRILL-5832:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/970
  
Addressed comments. Rebased on master. Resolved merge conflicts. Squashed 
commits.

@ilooner, @sachouche please do a quick final review to check for loose ends.


> Migrate OperatorFixture to use SystemOptionManager rather than mock
> ---
>
> Key: DRILL-5832
> URL: https://issues.apache.org/jira/browse/DRILL-5832
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> The {{OperatorFixture}} provides structure for testing individual operators 
> and other "sub-operator" bits of code. To do that, the framework provides 
> mock network-free and server-free versions of the fragment context and 
> operator context.
> As part of the mock, the {{OperatorFixture}} provides a mock version of the 
> system option manager that provides a simple test-only implementation of an 
> option set.
> With the recent major changes to the system option manager, this mock 
> implementation has drifted out of sync with the system option manager. Rather 
> than upgrading the mock implementation, this ticket asks to use the system 
> option manager directly -- but configured for no ZK or file persistence of 
> options.
> The key reason for this change is that the system option manager has 
> implemented a sophisticated way to handle option defaults; it is better to 
> leverage that than to provide a mock implementation.





[jira] [Commented] (DRILL-5842) Refactor and simplify the fragment, operator contexts for testing

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239902#comment-16239902 ]

ASF GitHub Bot commented on DRILL-5842:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/978
  
Rebased and squashed commits.


> Refactor and simplify the fragment, operator contexts for testing
> -
>
> Key: DRILL-5842
> URL: https://issues.apache.org/jira/browse/DRILL-5842
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>





[jira] [Commented] (DRILL-5899) Simple pattern matchers can work with DrillBuf directly

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239919#comment-16239919 ]

ASF GitHub Bot commented on DRILL-5899:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/1015
  
Second long list of detailed comments sent privately to keep dev list 
traffic down.


> Simple pattern matchers can work with DrillBuf directly
> ---
>
> Key: DRILL-5899
> URL: https://issues.apache.org/jira/browse/DRILL-5899
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Flow
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>Priority: Critical
>
> For the 4 simple patterns we have, i.e. startsWith, endsWith, contains, and 
> constant, we do not need the overhead of charSequenceWrapper. We can work 
> with DrillBuf directly. This will save us from doing the isAscii check and UTF-8 
> decoding for each row.
> UTF-8 encoding ensures that no UTF-8 character is a prefix of any other valid 
> character. So, instead of decoding varChar from each row we are processing, 
> encode the patternString once during setup and do raw byte comparison. 
> Instead of bounds checking and reading one byte at a time, we get the whole 
> buffer in one shot and use that for comparison.
> This improved overall performance for filter operator by around 20%. 
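
The encode-once, compare-raw-bytes idea described above can be sketched without any Drill machinery. This is a hedged illustration, not Drill's implementation: `containsUtf8` is a hypothetical helper doing a naive byte scan, whereas the real operator reads from a DrillBuf.

```java
import java.nio.charset.StandardCharsets;

public class RawBytePatternSketch {
  // Naive substring search over raw UTF-8 bytes. The pattern is
  // encoded once; rows are never decoded. This is safe because no
  // UTF-8 character's encoding is a prefix of another valid
  // character's encoding, so a byte-level match is a character match.
  static boolean containsUtf8(byte[] row, byte[] pattern) {
    outer:
    for (int i = 0; i + pattern.length <= row.length; i++) {
      for (int j = 0; j < pattern.length; j++) {
        if (row[i + j] != pattern[j]) {
          continue outer;
        }
      }
      return true;
    }
    return false;
  }

  public static void main(String[] args) {
    // Encode the pattern string once during setup...
    byte[] pattern = "spéed".getBytes(StandardCharsets.UTF_8);
    // ...then compare raw bytes per row, skipping isAscii checks and decoding.
    byte[] row = "warp.spéed.test".getBytes(StandardCharsets.UTF_8);
    System.out.println(containsUtf8(row, pattern)); // prints true
  }
}
```

The savings come from moving the encoding cost out of the per-row loop: only the pattern is ever converted, and each row is a straight byte comparison.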





[jira] [Commented] (DRILL-5922) Intermittent Memory Leaks in the ROOT allocator

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239951#comment-16239951 ]

ASF GitHub Bot commented on DRILL-5922:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1023#discussion_r148997365
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/WorkManager.java ---
@@ -158,38 +165,49 @@ public DrillbitContext getContext() {
     return dContext;
   }
 
-  private ExtendedLatch exitLatch = null; // used to wait to exit when things are still running
-
   /**
    * Waits until it is safe to exit. Blocks until all currently running fragments have completed.
-   *
-   * This is intended to be used by {@link org.apache.drill.exec.server.Drillbit#close()}.
+   * This is intended to be used by {@link org.apache.drill.exec.server.Drillbit#close()}.
    */
   public void waitToExit() {
-    synchronized(this) {
-      if (queries.isEmpty() && runningFragments.isEmpty()) {
-        return;
+    final long startTime = System.currentTimeMillis();
+    exitLock.lock();
+
+    try {
+      long diff;
+      while ((diff = (System.currentTimeMillis() - startTime)) < EXIT_TIMEOUT) {
+        if (queries.isEmpty() && runningFragments.isEmpty()) {
+          break;
+        }
+
+        try {
+          final boolean success = exitCondition.await(EXIT_TIMEOUT - diff, TimeUnit.MILLISECONDS);
+
+          if (!success) {
+            break;
+          }
+        } catch (InterruptedException e) {
+          logger.error("Interrupted while waiting to exit");
+        }
       }
 
-      exitLatch = new ExtendedLatch();
-    }
+      if (!(queries.isEmpty() && runningFragments.isEmpty())) {
+        logger.warn("Timed out after %d millis. Shutting down before all fragments and foremen " +
+          "have completed.", EXIT_TIMEOUT);
--- End diff --

The logger supports formatting, but it does not use `String.format()` patterns. 
Instead, it uses `{}`:

```
logger.warn("Timed out after {} millis...", EXIT_TIMEOUT);
```
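
To make the distinction concrete, here is a tiny stand-in that mimics SLF4J's placeholder substitution. It is not the real SLF4J implementation (which also handles escaping and throwables); it only shows why `{}` is substituted while `%d` passes through unchanged.

```java
public class LoggerFormatSketch {
  // Mimics SLF4J-style substitution: each {} is replaced, in order,
  // by the next argument's toString(). No %d/%s conversions are
  // interpreted, which is why String.format patterns are wrong here.
  static String format(String pattern, Object... args) {
    StringBuilder out = new StringBuilder();
    int argIdx = 0;
    int i = 0;
    while (i < pattern.length()) {
      if (argIdx < args.length && pattern.startsWith("{}", i)) {
        out.append(args[argIdx++]);
        i += 2;
      } else {
        out.append(pattern.charAt(i++));
      }
    }
    return out.toString();
  }

  public static void main(String[] args) {
    // What logger.warn("Timed out after {} millis.", 30000L) renders:
    System.out.println(format("Timed out after {} millis.", 30000L));
    // A String.format-style %d is passed through untouched -- the bug
    // the review points out:
    System.out.println(format("Timed out after %d millis.", 30000L));
  }
}
```

With the `%d` pattern the argument is silently dropped and the literal `%d` ends up in the log line.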


> Intermittent Memory Leaks in the ROOT allocator  
> -
>
> Key: DRILL-5922
> URL: https://issues.apache.org/jira/browse/DRILL-5922
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Minor
>
> This issue was originally found by [~ben-zvi]. I am able to consistently 
> reproduce the error on my laptop by running the following unit test:
> org.apache.drill.exec.DrillSeparatePlanningTest#testMultiMinorFragmentComplexQuery
> {code}
> java.lang.IllegalStateException: Allocator[ROOT] closed with outstanding 
> child allocators.
> Allocator(ROOT) 0/1048576/10113536/3221225472 (res/actual/peak/limit)
>   child allocators: 1
> Allocator(query:26049b50-0cec-0a92-437c-bbe486e1fcbf) 
> 1048576/0/0/268435456 (res/actual/peak/limit)
>   child allocators: 0
>   ledgers: 0
>   reservations: 0
>   ledgers: 0
>   reservations: 0
>   at 
> org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:496) 
> ~[classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
> [classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
> [classes/:na]
>   at 
> org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:256)
>  ~[classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
> [classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
> [classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:205) 
> [classes/:na]
>   at org.apache.drill.BaseTestQuery.closeClient(BaseTestQuery.java:315) 
> [test-classes/:na]
>   at 
> org.apache.drill.BaseTestQuery.updateTestCluster(BaseTestQuery.java:157) 
> [test-classes/:na]
>   at 
> org.apache.drill.BaseTestQuery.updateTestCluster(BaseTestQuery.java:148) 
> [test-classes/:na]
>   at 
> org.apache.drill.exec.DrillSeparatePlanningTest.getFragmentsHelper(DrillSeparatePlanningTest.java:185)
>  [test-classes/:na]
>   at 
> org.apache.drill.exec.DrillSeparatePlanningTest.testMultiMinorFragmentComplexQuery(DrillSeparatePlanningTest.java:108)
>  [test-classes/:na]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_144]

[jira] [Commented] (DRILL-5922) Intermittent Memory Leaks in the ROOT allocator

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239954#comment-16239954 ]

ASF GitHub Bot commented on DRILL-5922:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1023#discussion_r148998119
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/WorkManager.java ---
@@ -158,38 +165,49 @@ public DrillbitContext getContext() {
     return dContext;
   }
 
-  private ExtendedLatch exitLatch = null; // used to wait to exit when things are still running
-
   /**
    * Waits until it is safe to exit. Blocks until all currently running fragments have completed.
-   *
-   * This is intended to be used by {@link org.apache.drill.exec.server.Drillbit#close()}.
+   * This is intended to be used by {@link org.apache.drill.exec.server.Drillbit#close()}.
    */
   public void waitToExit() {
-    synchronized(this) {
-      if (queries.isEmpty() && runningFragments.isEmpty()) {
-        return;
+    final long startTime = System.currentTimeMillis();
+    exitLock.lock();
+
+    try {
+      long diff;
+      while ((diff = (System.currentTimeMillis() - startTime)) < EXIT_TIMEOUT) {
+        if (queries.isEmpty() && runningFragments.isEmpty()) {
+          break;
+        }
+
+        try {
+          final boolean success = exitCondition.await(EXIT_TIMEOUT - diff, TimeUnit.MILLISECONDS);
+
+          if (!success) {
+            break;
+          }
--- End diff --

```
if (! exitCondition.await(...)) {
  break;
}
```


> Intermittent Memory Leaks in the ROOT allocator  
> -
>
> Key: DRILL-5922
> URL: https://issues.apache.org/jira/browse/DRILL-5922
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Minor
>

[jira] [Commented] (DRILL-5922) Intermittent Memory Leaks in the ROOT allocator

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239950#comment-16239950 ]

ASF GitHub Bot commented on DRILL-5922:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1023#discussion_r148997788
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/WorkManager.java ---
@@ -59,12 +61,14 @@
 public class WorkManager implements AutoCloseable {
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(WorkManager.class);
 
+  public static final long EXIT_TIMEOUT = 3L;
--- End diff --

`3L` --> `30_000L` for quicker reading.


> Intermittent Memory Leaks in the ROOT allocator  
> -
>
> Key: DRILL-5922
> URL: https://issues.apache.org/jira/browse/DRILL-5922
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Minor
>

[jira] [Commented] (DRILL-5922) Intermittent Memory Leaks in the ROOT allocator

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239949#comment-16239949 ]

ASF GitHub Bot commented on DRILL-5922:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1023#discussion_r148997565
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/foreman/Foreman.java ---
@@ -828,8 +828,13 @@ public void close() throws Exception {
       queryManager.writeFinalProfile(uex);
     }
 
-      // Remove the Foreman from the running query list.
-      bee.retireForeman(Foreman.this);
+      try {
+        queryContext.close();
+      } catch (Exception e) {
+        final String message = String.format("Unable to close query context for query {}",
+          QueryIdHelper.getQueryId(queryId));
+        logger.error(message, e);
--- End diff --

```
logger.error("Unable...query {}", QueryIdHelper...);
```

The `{}` is a logger format pattern, not a `String.format()` pattern.


> Intermittent Memory Leaks in the ROOT allocator  
> -
>
> Key: DRILL-5922
> URL: https://issues.apache.org/jira/browse/DRILL-5922
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Minor
>

[jira] [Commented] (DRILL-5922) Intermittent Memory Leaks in the ROOT allocator

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239953#comment-16239953 ]

ASF GitHub Bot commented on DRILL-5922:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1023#discussion_r148997959
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/WorkManager.java ---
@@ -158,38 +165,49 @@ public DrillbitContext getContext() {
 return dContext;
   }
 
-  private ExtendedLatch exitLatch = null; // used to wait to exit when things are still running
-
   /**
* Waits until it is safe to exit. Blocks until all currently running fragments have completed.
-   *
-   * This is intended to be used by {@link org.apache.drill.exec.server.Drillbit#close()}.
+   * This is intended to be used by {@link org.apache.drill.exec.server.Drillbit#close()}.
*/
   public void waitToExit() {
-synchronized(this) {
-  if (queries.isEmpty() && runningFragments.isEmpty()) {
-return;
+final long startTime = System.currentTimeMillis();
+exitLock.lock();
+
+try {
+  long diff;
+  while ((diff = (System.currentTimeMillis() - startTime)) < EXIT_TIMEOUT) {
--- End diff --

Simpler:
```
final long endTime = System.currentTimeMillis() + EXIT_TIMEOUT;
...
while (System.currentTimeMillis() < endTime) {
...
```
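The suggested rewrite can be sketched as a self-contained class. Note that `EXIT_TIMEOUT_MS`, `runningFragments`, and the lock/condition fields below are illustrative stand-ins, not Drill's actual `WorkManager` members: the point is only that computing the deadline once (`endTime`) makes the loop condition simpler than recomputing an elapsed-time diff each pass.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of a deadline-based wait-to-exit loop (assumed names,
// not Drill's real WorkManager): compute the end time once up front,
// then loop on the remaining time.
public class WaitToExitSketch {
  private static final long EXIT_TIMEOUT_MS = 100;  // hypothetical timeout

  private final ReentrantLock exitLock = new ReentrantLock();
  private final Condition exitCondition = exitLock.newCondition();
  private volatile int runningFragments = 0;  // stand-in for real state

  /** Returns true if it became safe to exit before the deadline. */
  public boolean waitToExit() {
    final long endTime = System.currentTimeMillis() + EXIT_TIMEOUT_MS;
    exitLock.lock();
    try {
      while (System.currentTimeMillis() < endTime) {
        if (runningFragments == 0) {
          return true;  // nothing left running; safe to exit
        }
        try {
          // Sleep at most the remaining time before rechecking.
          exitCondition.await(endTime - System.currentTimeMillis(),
              TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return false;
        }
      }
      return runningFragments == 0;  // deadline passed
    } finally {
      exitLock.unlock();
    }
  }

  public static void main(String[] args) {
    // With no fragments running, the wait returns immediately.
    System.out.println(new WaitToExitSketch().waitToExit());
  }
}
```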


> Intermittent Memory Leaks in the ROOT allocator  
> -
>
> Key: DRILL-5922
> URL: https://issues.apache.org/jira/browse/DRILL-5922
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Minor
>
> This issue was originally found by [~ben-zvi]. I am able to consistently 
> reproduce the error on my laptop by running the following unit test:
> org.apache.drill.exec.DrillSeparatePlanningTest#testMultiMinorFragmentComplexQuery
> {code}
> java.lang.IllegalStateException: Allocator[ROOT] closed with outstanding child allocators.
> Allocator(ROOT) 0/1048576/10113536/3221225472 (res/actual/peak/limit)
>   child allocators: 1
> Allocator(query:26049b50-0cec-0a92-437c-bbe486e1fcbf) 1048576/0/0/268435456 (res/actual/peak/limit)
>   child allocators: 0
>   ledgers: 0
>   reservations: 0
>   ledgers: 0
>   reservations: 0
>   at org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:496) ~[classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) [classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) [classes/:na]
>   at org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:256) ~[classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) [classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) [classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:205) [classes/:na]
>   at org.apache.drill.BaseTestQuery.closeClient(BaseTestQuery.java:315) [test-classes/:na]
>   at org.apache.drill.BaseTestQuery.updateTestCluster(BaseTestQuery.java:157) [test-classes/:na]
>   at org.apache.drill.BaseTestQuery.updateTestCluster(BaseTestQuery.java:148) [test-classes/:na]
>   at org.apache.drill.exec.DrillSeparatePlanningTest.getFragmentsHelper(DrillSeparatePlanningTest.java:185) [test-classes/:na]
>   at org.apache.drill.exec.DrillSeparatePlanningTest.testMultiMinorFragmentComplexQuery(DrillSeparatePlanningTest.java:108) [test-classes/:na]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_144]
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_144]
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_144]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_144]
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) [junit-4.11.jar:na]
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.11.jar:na]
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) [junit-4.11.jar:na]
>   at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:120) [jmockit-1.3.jar:na]
>   at mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:65) [jmockit-1.3.jar:na]
>   at

[jira] [Commented] (DRILL-5922) Intermittent Memory Leaks in the ROOT allocator

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239952#comment-16239952
 ] 

ASF GitHub Bot commented on DRILL-5922:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1023#discussion_r148997637
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/user/PlanSplitter.java ---
@@ -79,6 +80,15 @@ public QueryPlanFragments planFragments(DrillbitContext dContext, QueryId queryI
   responseBuilder.setStatus(QueryState.FAILED);
   responseBuilder.setError(error);
 }
+
+try {
+  queryContext.close();
+} catch (Exception e) {
+  final String message = String.format("Error closing QueryContext when getting plan fragments for query %s",
+QueryIdHelper.getQueryId(queryId));
+  logger.error(message, e);
--- End diff --

See above re log message formatting.
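The formatting comment being referenced is earlier in the review thread and not quoted in this excerpt. A common convention in such reviews (an assumption here, not confirmed by this thread) is slf4j-style parameterized messages, `logger.error("... {}", queryId, e)`, which defer string building to the logger instead of calling `String.format` eagerly. The `{}` substitution that idiom relies on can be sketched in a self-contained way:

```java
// Minimal stand-in for slf4j-style parameterized logging (hypothetical
// helper, not the real slf4j API): each "{}" in the template is replaced
// by the next argument, and no formatting work happens when the level is
// disabled.
public class LazyLogSketch {
  static boolean errorEnabled = true;

  static String error(String template, Object... args) {
    if (!errorEnabled) {
      return null;  // skip all formatting cost when logging is off
    }
    StringBuilder sb = new StringBuilder();
    int argIdx = 0;
    int from = 0;
    int at;
    // Substitute each "{}" placeholder with the next argument.
    while ((at = template.indexOf("{}", from)) >= 0 && argIdx < args.length) {
      sb.append(template, from, at).append(args[argIdx++]);
      from = at + 2;
    }
    sb.append(template.substring(from));
    return sb.toString();
  }

  public static void main(String[] args) {
    String queryId = "26049b50-0cec-0a92";  // hypothetical query id
    System.out.println(
        error("Error closing QueryContext for query {}", queryId));
  }
}
```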



[jira] [Commented] (DRILL-5872) Deserialization of profile JSON fails due to totalCost being reported as "NaN"

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239955#comment-16239955
 ] 

ASF GitHub Bot commented on DRILL-5872:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/990
  
Fixed in planner, so closing this.


> Deserialization of profile JSON fails due to totalCost being reported as "NaN"
> --
>
> Key: DRILL-5872
> URL: https://issues.apache.org/jira/browse/DRILL-5872
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Kunal Khatua
>Assignee: Paul Rogers
>Priority: Blocker
> Fix For: 1.12.0
>
>
> With DRILL-5716 , there is a change in the protobuf that introduces a new 
> attribute in the JSON document that Drill uses to interpret and render the 
> profile's details. 
> The totalCost attribute, used as a part of showing the query cost (to 
> understand how it was assigned to the small/large queue), sometimes returns a 
> non-numeric text value {{"NaN"}}. 
> This breaks the UI with the messages:
> {code}
> Failed to get profiles:
> unable to deserialize value at key 2620698f-295e-f8d3-3ab7-01792b0f2669
> {code}
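The planner-side fix itself is not shown in this thread; the sketch below only illustrates the failure mode and one hypothetical guard (`sanitizeCost` is an illustrative name, not Drill code). The root issue is that `Double.NaN` has no legal JSON number representation, so a naively serialized cost emits the bare token `NaN`, which strict JSON deserializers (e.g. Jackson, unless a non-standard feature such as `ALLOW_NON_NUMERIC_NUMBERS` is enabled) must reject.

```java
// Illustration of why a NaN totalCost breaks profile deserialization:
// JSON has no NaN literal, so the value must be made finite before it
// reaches the serialized profile.
public class NanCostSketch {
  // Hypothetical guard: replace NaN with a sentinel finite cost.
  static double sanitizeCost(double totalCost) {
    return Double.isNaN(totalCost) ? 0.0 : totalCost;
  }

  public static void main(String[] args) {
    // Java happily prints the token "NaN", but that token is not a
    // valid JSON number, hence the deserialization failure above.
    System.out.println(String.valueOf(Double.NaN));
    System.out.println(sanitizeCost(Double.NaN));
  }
}
```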



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5872) Deserialization of profile JSON fails due to totalCost being reported as "NaN"

2017-11-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239956#comment-16239956
 ] 

ASF GitHub Bot commented on DRILL-5872:
---

Github user paul-rogers closed the pull request at:

https://github.com/apache/drill/pull/990




