[jira] [Commented] (DRILL-5878) TableNotFound exception is being reported for a wrong storage plugin.

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16226265#comment-16226265
 ] 

ASF GitHub Bot commented on DRILL-5878:
---

Github user HanumathRao commented on a diff in the pull request:

https://github.com/apache/drill/pull/996#discussion_r147896927
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestFileSelection.java
 ---
@@ -63,4 +63,17 @@ public void testEmptyFolderThrowsTableNotFound() throws Exception {
 }
   }
 
--- End diff --

done


> TableNotFound exception is being reported for a wrong storage plugin.
> -
>
> Key: DRILL-5878
> URL: https://issues.apache.org/jira/browse/DRILL-5878
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Hanumath Rao Maduri
>Assignee: Hanumath Rao Maduri
>Priority: Minor
> Fix For: 1.12.0
>
>
> Drill is reporting TableNotFound exception for a wrong storage plugin. 
> Consider the following query where employee.json is queried using cp plugin.
> {code}
> 0: jdbc:drill:zk=local> select * from cp.`employee.json` limit 10;
> +--++-++--+-+---++-++--++---+-+-++
> | employee_id  | full_name  | first_name  | last_name  | position_id  
> | position_title  | store_id  | department_id  | birth_date  |   
> hire_date|  salary  | supervisor_id  |  education_level  | 
> marital_status  | gender  |  management_role   |
> +--++-++--+-+---++-++--++---+-+-++
> | 1| Sheri Nowmer   | Sheri   | Nowmer | 1
> | President   | 0 | 1  | 1961-08-26  | 
> 1994-12-01 00:00:00.0  | 8.0  | 0  | Graduate Degree   | S
>| F   | Senior Management  |
> | 2| Derrick Whelply| Derrick | Whelply| 2
> | VP Country Manager  | 0 | 1  | 1915-07-03  | 
> 1994-12-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | M
>| M   | Senior Management  |
> | 4| Michael Spence | Michael | Spence | 2
> | VP Country Manager  | 0 | 1  | 1969-06-20  | 
> 1998-01-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | S
>| M   | Senior Management  |
> | 5| Maya Gutierrez | Maya| Gutierrez  | 2
> | VP Country Manager  | 0 | 1  | 1951-05-10  | 
> 1998-01-01 00:00:00.0  | 35000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 6| Roberta Damstra| Roberta | Damstra| 3
> | VP Information Systems  | 0 | 2  | 1942-10-08  | 
> 1994-12-01 00:00:00.0  | 25000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 7| Rebecca Kanagaki   | Rebecca | Kanagaki   | 4
> | VP Human Resources  | 0 | 3  | 1949-03-27  | 
> 1994-12-01 00:00:00.0  | 15000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 8| Kim Brunner| Kim | Brunner| 11   
> | Store Manager   | 9 | 11 | 1922-08-10  | 
> 1998-01-01 00:00:00.0  | 1.0  | 5  | Bachelors Degree  | S
>| F   | Store Management   |
> | 9| Brenda Blumberg| Brenda  | Blumberg   | 11   
> | Store Manager   | 21| 11 | 1979-06-23  | 
> 1998-01-01 00:00:00.0  | 17000.0  | 5  | Graduate Degree   | M
>| F   | Store Management   |
> | 10   | Darren Stanz   | Darren  | Stanz  | 5
> | VP Finance  | 0 | 5  | 1949-08-26  | 
> 1994-12-01 00:00:00.0  | 5.0  | 1  | Partial College   | M
>| M   | Senior Management  |
> | 11   | Jonathan Murraiin  | Jonathan| Murraiin   | 11   
> | Store Manager   | 1 | 11 | 1967-06-20  | 
> 1998-01-01 00:00:00.0  | 15000.0  | 5  | Graduate Degree   | S
>

[jira] [Updated] (DRILL-5915) Streaming aggregate with limit query does not return

2017-10-30 Thread Boaz Ben-Zvi (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boaz Ben-Zvi updated DRILL-5915:

Description: 
Reading a 1M-row table, in embedded mode, using sort + streaming_aggr, followed 
by a LIMIT -- the work completes, but the query does not return (see the attached 
profile).

{code}
alter session set `planner.enable_hashagg` = false;

select b.g, b.s from (select gby_int32, gby_date g, gby_int32_rand, 
sum(int32_field) s from dfs.`/data/PARQUET-1M.parquet` group by gby_int32, 
gby_date, gby_int32_rand) b limit 30;
{code}

Without the LIMIT clause the query runs, completes and returns OK. 

  was:
Reading a 1M rows table, in embedded mode, using sort+streaming_aggr -- the 
work is complete, but the query does not return (see attached profile)

{code}
alter session set `planner.enable_hashagg` = false;

select b.g, b.s from (select gby_int32, gby_date g, gby_int32_rand, 
sum(int32_field) s from dfs.`/data/PARQUET-1M.parquet` group by gby_int32, 
gby_date, gby_int32_rand) b limit 30;
{code}




> Streaming aggregate with limit query does not return
> 
>
> Key: DRILL-5915
> URL: https://issues.apache.org/jira/browse/DRILL-5915
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Boaz Ben-Zvi
> Attachments: 26080f78-3fe8-7f2e-9815-9a2ae1a4bc90.sys.drill
>
>
> Reading a 1M-row table, in embedded mode, using sort + streaming_aggr, 
> followed by a LIMIT -- the work completes, but the query does not return 
> (see the attached profile).
> {code}
> alter session set `planner.enable_hashagg` = false;
> select b.g, b.s from (select gby_int32, gby_date g, gby_int32_rand, 
> sum(int32_field) s from dfs.`/data/PARQUET-1M.parquet` group by gby_int32, 
> gby_date, gby_int32_rand) b limit 30;
> {code}
> Without the LIMIT clause the query runs, completes and returns OK. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5915) Streaming aggregate with limit query does not return

2017-10-30 Thread Boaz Ben-Zvi (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boaz Ben-Zvi updated DRILL-5915:

Attachment: 26080f78-3fe8-7f2e-9815-9a2ae1a4bc90.sys.drill

> Streaming aggregate with limit query does not return
> 
>
> Key: DRILL-5915
> URL: https://issues.apache.org/jira/browse/DRILL-5915
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Boaz Ben-Zvi
> Attachments: 26080f78-3fe8-7f2e-9815-9a2ae1a4bc90.sys.drill
>
>
> Reading a 1M rows table, in embedded mode, using sort+streaming_aggr -- the 
> work is complete, but the query does not return (see attached profile)
> {code}
> alter session set `planner.enable_hashagg` = false;
> select b.g, b.s from (select gby_int32, gby_date g, gby_int32_rand, 
> sum(int32_field) s from dfs.`/data/PARQUET-1M.parquet` group by gby_int32, 
> gby_date, gby_int32_rand) b limit 30;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (DRILL-5915) Streaming aggregate with limit query does not return

2017-10-30 Thread Boaz Ben-Zvi (JIRA)
Boaz Ben-Zvi created DRILL-5915:
---

 Summary: Streaming aggregate with limit query does not return
 Key: DRILL-5915
 URL: https://issues.apache.org/jira/browse/DRILL-5915
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Affects Versions: 1.11.0
Reporter: Boaz Ben-Zvi


Reading a 1M rows table, in embedded mode, using sort+streaming_aggr -- the 
work is complete, but the query does not return (see attached profile)

{code}
alter session set `planner.enable_hashagg` = false;

select b.g, b.s from (select gby_int32, gby_date g, gby_int32_rand, 
sum(int32_field) s from dfs.`/data/PARQUET-1M.parquet` group by gby_int32, 
gby_date, gby_int32_rand) b limit 30;
{code}





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5914) CSV (text) reader fails to parse quoted newlines in trailing fields

2017-10-30 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-5914:
---
Description: 
Consider the existing `TestCsvHeader.testCountOnCsvWithHeader()` unit test. The 
input file is as follows:

{noformat}
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00
{noformat}

Note the newline inside the description in the last record.

If we do a `SELECT *` query, the file is parsed fine; we get 4 records.

If we do a `SELECT Year, Model` query, the CSV reader uses a special trick: it 
short-circuits reads on the three columns that are not wanted:

{code}
TextReader.parseRecord() {
...
if (earlyTerm) {
  if (ch != newLine) {
input.skipLines(1); // <-- skip lines
  }
  break;
}
{code}

This method skips forward in the file, discarding characters until it hits a 
newline:

{code}
  do {
nextChar();
  } while (lineCount < expectedLineCount);
{code}

Note that this code handles individual characters, it is not aware of per-field 
semantics. That is, unlike the higher-level parser methods, the `nextChar()` 
method does not consider newlines inside of quoted fields to be special.

This problem shows up acutely in a `SELECT COUNT\(*)` style query that skips 
all fields; the result is we count the input as five lines, not four.
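
To illustrate the direction of a fix, here is a simplified, standalone sketch (not Drill's actual TextReader API; the skipToNextRecord helper is hypothetical). The point is that a record-level skip must track quote state so that a newline inside a quoted field is not treated as a record boundary:

{code}
// Illustrative only: shows why the skip must be quote-aware when counting records.
public class QuoteAwareSkip {

  /** Advances past the rest of the current record, honoring RFC 4180 style quoting. */
  static int skipToNextRecord(String input, int pos) {
    boolean inQuotes = false;
    while (pos < input.length()) {
      char ch = input.charAt(pos++);
      if (ch == '"') {
        inQuotes = !inQuotes;     // a doubled quote ("") simply toggles twice
      } else if (ch == '\n' && !inQuotes) {
        break;                    // only an unquoted newline ends the record
      }
    }
    return pos;
  }

  public static void main(String[] args) {
    String csv = "1996,Jeep,Grand Cherokee,\"MUST SELL!\nair, moon roof, loaded\",4799.00\n"
               + "1997,Ford,E350,\"ac, abs, moon\",3000.00\n";
    int records = 0;
    for (int pos = 0; pos < csv.length(); records++) {
      pos = skipToNextRecord(csv, pos);
    }
    System.out.println(records);  // prints 2; a skip that stops at any newline would report 3
  }
}
{code}

With a skip like this, the sample file above counts as four records even when every field is short-circuited.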

  was:
Consider the existing `TestCsvHeader.testCountOnCsvWithHeader()` unit test. The 
input file is as follows:

{noformat}
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00
{noformat}

Note the newline inside the description in the last record.

If we do a `SELECT *` query, the file is parsed fine; we get 4 records.

If we do a `SELECT Year, Model` query, the CSV reader uses a special trick: it 
short-circuits reads on the three columns that are not wanted:

{code}
TextReader.parseRecord() {
...
if (earlyTerm) {
  if (ch != newLine) {
input.skipLines(1); // <-- skip lines
  }
  break;
}
{code}

This method skips forward in the file, discarding characters until it hits a 
newline:

{code}
  do {
nextChar();
  } while (lineCount < expectedLineCount);
{code}

Note that this code handles individual characters, it is not aware of per-field 
semantics. That is, unlike the higher-level parser methods, the `nextChar()` 
method does not consider newlines inside of quoted fields to be special.

This problem shows up acutely in a `SELECT COUNT(*)` style query that skips all 
fields; the result is we count the input as five lines, not four.


> CSV (text) reader fails to parse quoted newlines in trailing fields
> ---
>
> Key: DRILL-5914
> URL: https://issues.apache.org/jira/browse/DRILL-5914
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>
> Consider the existing `TestCsvHeader.testCountOnCsvWithHeader()` unit test. 
> The input file is as follows:
> {noformat}
> Year,Make,Model,Description,Price
> 1997,Ford,E350,"ac, abs, moon",3000.00
> 1999,Chevy,"Venture ""Extended Edition""","",4900.00
> 1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
> 1996,Jeep,Grand Cherokee,"MUST SELL!
> air, moon roof, loaded",4799.00
> {noformat}
> Note the newline inside the description in the last record.
> If we do a `SELECT *` query, the file is parsed fine; we get 4 records.
> If we do a `SELECT Year, Model` query, the CSV reader uses a special trick: 
> it short-circuits reads on the three columns that are not wanted:
> {code}
> TextReader.parseRecord() {
> ...
> if (earlyTerm) {
>   if (ch != newLine) {
> input.skipLines(1); // <-- skip lines
>   }
>   break;
> }
> {code}
> This method skips forward in the file, discarding characters until it hits a 
> newline:
> {code}
>   do {
> nextChar();
>   } while (lineCount < expectedLineCount);
> {code}
> Note that this code handles individual characters, it is not aware of 
> per-field semantics. That is, unlike the higher-level parser methods, the 
> `nextChar()` method does not consider newlines inside of quoted fields to be 
> special.
> This problem shows up acutely in a `SELECT COUNT\(*)` style query that skips 
> all fields; the result is we count the input as five lines, not four.



--
This message was sent by 

[jira] [Updated] (DRILL-5914) CSV (text) reader fails to parse quoted newlines in trailing fields

2017-10-30 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-5914:
---
Description: 
Consider the existing `TestCsvHeader.testCountOnCsvWithHeader()` unit test. The 
input file is as follows:

{noformat}
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00
{noformat}

Note the newline inside the description in the last record.

If we do a `SELECT *` query, the file is parsed fine; we get 4 records.

If we do a `SELECT Year, Model` query, the CSV reader uses a special trick: it 
short-circuits reads on the three columns that are not wanted:

{code}
TextReader.parseRecord() {
...
if (earlyTerm) {
  if (ch != newLine) {
input.skipLines(1); // <-- skip lines
  }
  break;
}
{code}

This method skips forward in the file, discarding characters until it hits a 
newline:

{code}
  do {
nextChar();
  } while (lineCount < expectedLineCount);
{code}

Note that this code handles individual characters, it is not aware of per-field 
semantics. That is, unlike the higher-level parser methods, the `nextChar()` 
method does not consider newlines inside of quoted fields to be special.

This problem shows up acutely in a `SELECT COUNT(*)` style query that skips all 
fields; the result is we count the input as five lines, not four.

  was:
Consider the existing `TestCsvHeader.testCountOnCsvWithHeader()` unit test. The 
input file is as follows:

```
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00
```

Note the newline inside the description in the last record.

If we do a `SELECT *` query, the file is parsed fine; we get 4 records.

If we do a `SELECT Year, Model` query, the CSV reader uses a special trick: it 
short-circuits reads on the three columns that are not wanted:

```
TextReader.parseRecord() {
...
if (earlyTerm) {
  if (ch != newLine) {
input.skipLines(1); // <-- skip lines
  }
  break;
}
```

This method skips forward in the file, discarding characters until it hits a 
newline:

```
  do {
nextChar();
  } while (lineCount < expectedLineCount);
```

Note that this code handles individual characters, it is not aware of per-field 
semantics. That is, unlike the higher-level parser methods, the `nextChar()` 
method does not consider newlines inside of quoted fields to be special.

This problem shows up acutely in a `SELECT COUNT(*)` style query that skips all 
fields; the result is we count the input as five lines, not four.


> CSV (text) reader fails to parse quoted newlines in trailing fields
> ---
>
> Key: DRILL-5914
> URL: https://issues.apache.org/jira/browse/DRILL-5914
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>
> Consider the existing `TestCsvHeader.testCountOnCsvWithHeader()` unit test. 
> The input file is as follows:
> {noformat}
> Year,Make,Model,Description,Price
> 1997,Ford,E350,"ac, abs, moon",3000.00
> 1999,Chevy,"Venture ""Extended Edition""","",4900.00
> 1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
> 1996,Jeep,Grand Cherokee,"MUST SELL!
> air, moon roof, loaded",4799.00
> {noformat}
> Note the newline inside the description in the last record.
> If we do a `SELECT *` query, the file is parsed fine; we get 4 records.
> If we do a `SELECT Year, Model` query, the CSV reader uses a special trick: 
> it short-circuits reads on the three columns that are not wanted:
> {code}
> TextReader.parseRecord() {
> ...
> if (earlyTerm) {
>   if (ch != newLine) {
> input.skipLines(1); // <-- skip lines
>   }
>   break;
> }
> {code}
> This method skips forward in the file, discarding characters until it hits a 
> newline:
> {code}
>   do {
> nextChar();
>   } while (lineCount < expectedLineCount);
> {code}
> Note that this code handles individual characters, it is not aware of 
> per-field semantics. That is, unlike the higher-level parser methods, the 
> `nextChar()` method does not consider newlines inside of quoted fields to be 
> special.
> This problem shows up acutely in a `SELECT COUNT(*)` style query that skips 
> all fields; the result is we count the input as five lines, not four.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (DRILL-5914) CSV (text) reader fails to parse quoted newlines in trailing fields

2017-10-30 Thread Paul Rogers (JIRA)
Paul Rogers created DRILL-5914:
--

 Summary: CSV (text) reader fails to parse quoted newlines in 
trailing fields
 Key: DRILL-5914
 URL: https://issues.apache.org/jira/browse/DRILL-5914
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.11.0
Reporter: Paul Rogers
Assignee: Paul Rogers


Consider the existing `TestCsvHeader.testCountOnCsvWithHeader()` unit test. The 
input file is as follows:

```
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00
```

Note the newline inside the description in the last record.

If we do a `SELECT *` query, the file is parsed fine; we get 4 records.

If we do a `SELECT Year, Model` query, the CSV reader uses a special trick: it 
short-circuits reads on the three columns that are not wanted:

```
TextReader.parseRecord() {
...
if (earlyTerm) {
  if (ch != newLine) {
input.skipLines(1); // <-- skip lines
  }
  break;
}
```

This method skips forward in the file, discarding characters until it hits a 
newline:

```
  do {
nextChar();
  } while (lineCount < expectedLineCount);
```

Note that this code handles individual characters, it is not aware of per-field 
semantics. That is, unlike the higher-level parser methods, the `nextChar()` 
method does not consider newlines inside of quoted fields to be special.

This problem shows up acutely in a `SELECT COUNT(*)` style query that skips all 
fields; the result is we count the input as five lines, not four.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5427) SQL Execution Syntax incorrect for Sybase RDBMS

2017-10-30 Thread David Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225849#comment-16225849
 ] 

David Lee commented on DRILL-5427:
--

Here's a real example, including the Drill query plan.

Table Setup:

use tempdb
go

create table my_table
(column_a int, column_b varchar(20))
go

insert my_table values (1, 'abc')
insert my_table values (2, 'xyz')
go

All the following statements work in Native Transact-SQL:

select * from my_table
go

select * from tempdb..my_table
go

select * from tempdb.guest.my_table
go

 column_a    column_b
 --------    --------
        1    abc
        2    xyz

Here's what happens if you run this in Drill: The JDBC SQL executed does not 
match any of the Transact-SQL statements above.

select * from Sybase.tempdb.my_table

00-00    Screen : rowType = RecordType(INTEGER column_a, VARCHAR(20) column_b): 
rowcount = 100.0, cumulative cost = {110.0 rows, 110.0 cpu, 0.0 io, 0.0 
network, 0.0 memory}, id = 2041
00-01  Project(column_a=[$0], column_b=[$1]) : rowType = RecordType(INTEGER 
column_a, VARCHAR(20) column_b): rowcount = 100.0, cumulative cost = {100.0 
rows, 100.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2040
00-02        Jdbc(sql=[SELECT *
FROM "tempdb"."my_table"]) : rowType = RecordType(INTEGER column_a, VARCHAR(20) 
column_b): rowcount = 100.0, cumulative cost = {100.0 rows, 100.0 cpu, 0.0 io, 
0.0 network, 0.0 memory}, id = 2007

This is not a valid SQL statement in Sybase:

SELECT * FROM "tempdb"."my_table"

You need to either a) omit the schema, b) add an extra "." for the schema owner 
or c) add the schema owner which is "guest" for tempdb.

> SQL Execution Syntax incorrect for Sybase RDBMS
> ---
>
> Key: DRILL-5427
> URL: https://issues.apache.org/jira/browse/DRILL-5427
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC, SQL Parser, Storage - JDBC
>Affects Versions: 1.10.0
> Environment: Windows Linux
>Reporter: David Lee
> Fix For: Future
>
>
> The Sybase table syntax should be "database"."owner"."tablename", but "owner" 
> is not being added which produces incorrect SQL when executed.
> The default owner is "DBO" for most tables. If DBO is omitted then the syntax 
> should be just "database".."tablename" 
> When SYBASE sees  "database".."tablename" it:
> A. Checks if there is a tablename owned by the logged in user.
> B. If there is no tablename owned by the logged in user it uses DBO by default
> This is what I'm seeing using a JDBC plug-in connection to Sybase ASE:
> The following statements work fine:
> A. show schemas
> SCHEMA_NAME
> cp.default
> dfs.default
> dfs.root
> dfs.tmp
> INFORMATION_SCHEMA
> SYB1U
> SYB1U.tempdb
> B. use SYB1U.tempdb
> ok    summary
> true  Default schema changed to [SYB1U.tempdb]
> C. show tables
> TABLE_SCHEMA  TABLE_NAME
> SYB1U.tempdb  sysalternates
> SYB1U.tempdb  sysattributes
> SYB1U.tempdb  syscolumns
> SYB1U.tempdb  syscomments
> SYB1U.tempdb  sysconstraints
> etc.. etc.. etc..
> D. SELECT * FROM INFORMATION_SCHEMA.`COLUMNS`
> where TABLE_SCHEMA = 'SYB1U.tempdb'
> and TABLE_NAME = 'syscolumns'
> TABLE_CATALOG  TABLE_SCHEMA  TABLE_NAME  COLUMN_NAME  ORDINAL_POSITION  COLUMN_DEFAULT  IS_NULLABLE  DATA_TYPE
> DRILL  SYB1U.tempdb  syscolumns  id        1   (null)  NO  INTEGER
> DRILL  SYB1U.tempdb  syscolumns  number    2   (null)  NO  SMALLINT
> DRILL  SYB1U.tempdb  syscolumns  colid     3   (null)  NO  SMALLINT
> DRILL  SYB1U.tempdb  syscolumns  status    4   (null)  NO  TINYINT
> DRILL  SYB1U.tempdb  syscolumns  type      5   (null)  NO  TINYINT
> DRILL  SYB1U.tempdb  syscolumns  length    6   (null)  NO  INTEGER
> DRILL  SYB1U.tempdb  syscolumns  offset    7   (null)  NO  SMALLINT
> DRILL  SYB1U.tempdb  syscolumns  usertype  8   (null)  NO  SMALLINT
> DRILL  SYB1U.tempdb  syscolumns  cdefault  9   (null)  NO  INTEGER
> DRILL  SYB1U.tempdb  syscolumns  domain    10  (null)  NO  INTEGER
> etc.. etc.. etc..
> However, the following statements fail:
> A. select * from SYB1U.tempdb.syscolumns
> DATA_READ ERROR: The JDBC storage plugin failed while trying setup the SQL 
> query. 
> sql SELECT *
> FROM "tempdb"."syscolumns"
> plugin SYB1U
> Fragment 0:0
> B. select * from SYB1U.tempdb.dbo.syscolumns
> VALIDATION ERROR: From line 1, column 15 to line 1, column 19: Table 
> 'SYB1U.tempdb.dbo.syscolumns' not found
> C. select * from SYB1U.tempdb..syscolumns
> PARSE ERROR: Encountered ".." at line 1, column 27.
> In A, the execution engine doesn't include the "owner" portion.
> In B, adding dbo fails validation
> In C, the default behavior in Sybase for ".." isn't recognized
> I'm not sure if this is a Drill issue or a 

[jira] [Commented] (DRILL-5582) [Threat Modeling] Drillbit may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Drillbit

2017-10-30 Thread Sorabh Hamirwasia (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225828#comment-16225828
 ] 

Sorabh Hamirwasia commented on DRILL-5582:
--

[~laurentgo] - This change exposed an issue with backward compatibility. In the 
absence of this change, the old server (1.10) was treating a new client (1.11) as 
a 1.9 client and completing authentication successfully, which is not correct 
behavior. The reason is that in 1.11 we added a new value (SASL_PRIVACY) for the 
SASL_SUPPORT field in the Handshake message. That value is not known to the 1.10 
server, so when it deserializes the field it evaluates to the default value 
([based on this 
post|http://androiddevblog.com/protocol-buffers-pitfall-adding-enum-values/]), 
which is UNKNOWN_SASL_SUPPORT. The [1.10 
UserServer|https://github.com/apache/drill/blob/1.10.0/exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserServer.java#L297]
 treats the presence of that default value as an indication of an older client, 
which is not correct. We should treat only the absence of the entire SASL_SUPPORT 
field as the signal that a client is old. The presence of the default value should 
indicate that the client still supports SASL, just at a level not known to this 
server. 

With this approach (see the sketch after this list): 
* Clients <= 1.9 will not set SASL_SUPPORT in the handshake message, and the server 
should detect that correctly and authenticate the way it was supported in 1.9.
* Clients > 1.9 will set the SASL_SUPPORT field with their supported value, and a 
server on the same or a different version will deserialize the value to either 
UNKNOWN_SASL_SUPPORT or one of SASL_AUTH/SASL_PRIVACY. In both cases the server 
should still conclude that the client supports SASL and require the SASL 
handshake.
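
Below is a minimal, self-contained sketch of that check. It is not the actual UserServer/UserProtos code; the enum and the Optional-based presence flag only model the proto2 behavior described above, where an unknown enum value decodes to the default:

{code}
import java.util.Optional;

enum SaslSupport { UNKNOWN_SASL_SUPPORT, SASL_AUTH, SASL_PRIVACY }

class HandshakeCheck {

  /** @param saslSupport empty when the handshake did not carry the field at all (<= 1.9 client) */
  static boolean clientSpeaksSasl(Optional<SaslSupport> saslSupport) {
    // Pre-fix logic (wrong): treated the default enum value as an old client.
    //   return saslSupport.isPresent() && saslSupport.get() != SaslSupport.UNKNOWN_SASL_SUPPORT;
    // Corrected logic: only the missing field indicates an old client.
    return saslSupport.isPresent();
  }

  public static void main(String[] args) {
    System.out.println(clientSpeaksSasl(Optional.empty()));                              // false: <= 1.9 client
    System.out.println(clientSpeaksSasl(Optional.of(SaslSupport.SASL_AUTH)));            // true
    System.out.println(clientSpeaksSasl(Optional.of(SaslSupport.UNKNOWN_SASL_SUPPORT))); // true: value newer than this server knows
  }
}
{code}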

I will start a thread in Drill-Dev group.

As far as the threat model is concerned, this won't change much for plain 
authentication, but for mechanisms that do mutual authentication, such as 
Kerberos, a MITM Drillbit won't be able to fake the authentication. Without 
this change, however, a MITM Drillbit can make the client skip authentication 
entirely.

> [Threat Modeling] Drillbit may be spoofed by an attacker and this may lead to 
> data being written to the attacker's target instead of Drillbit
> -
>
> Key: DRILL-5582
> URL: https://issues.apache.org/jira/browse/DRILL-5582
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rob Wu
>Assignee: Sorabh Hamirwasia
>Priority: Minor
>  Labels: doc-impacting
> Fix For: 1.12.0
>
>
> *Consider the scenario:*
> Alice has a drillbit (my.drillbit.co) with plain and kerberos authentication 
> enabled containing important data. Bob, the attacker, attempts to spoof the 
> connection and redirect it to his own drillbit (fake.drillbit.co) with no 
> authentication setup. 
> When Alice is under attack and attempts to connect to her secure drillbit, 
> she is actually authenticating against Bob's drillbit. At this point, the 
> connection should have failed due to unmatched configuration. However, the 
> current implementation will return SUCCESS as long as the (spoofing) drillbit 
> has no authentication requirement set.
> Currently, the drillbit <-  to  -> drill client connection accepts the lowest 
> authentication configuration set on the server. This leaves unsuspecting user 
> vulnerable to spoofing. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5797) Use more often the new parquet reader

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225786#comment-16225786
 ] 

ASF GitHub Bot commented on DRILL-5797:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/976
  
@dprofeta, I tried to commit this PR, but ran into multiple functional test 
failures:

```
Execution Failures:

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex12.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex8.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex56.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex274.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex7.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex57.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex102.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex5.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex10.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex9.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex203.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex101.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex275.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex6.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex205.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex11.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex58.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex153.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex202.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex151.q
```

The common failure stack trace seems to be:

```

org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleException():272

org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.setup():256
org.apache.drill.exec.physical.impl.ScanBatch.getNextReaderIfHas():241
org.apache.drill.exec.physical.impl.ScanBatch.next():167
...
``` 


> Use more often the new parquet reader
> -
>
> Key: DRILL-5797
> URL: https://issues.apache.org/jira/browse/DRILL-5797
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Reporter: Damien Profeta
>Assignee: Damien Profeta
> Fix For: 1.12.0
>
>
> The choice between the regular parquet reader and the optimized one is based 
> on what types of columns are in the file, but the columns actually read by the 
> query are not taken into account. We can slightly increase the cases where the 
> optimized reader is used by checking whether the projected columns are simple 
> or not.
> This is an optimization while waiting for the fast parquet reader to handle 
> complex structures.
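
As a rough illustration of that selection rule (not Drill's actual planner code; the column names and the canUseFastReader helper are made up for the example), the decision looks only at the columns the query projects, so a file that also contains complex columns can still use the fast reader when none of them are requested:

{code}
import java.util.List;
import java.util.Map;

class ParquetReaderChoice {

  static boolean canUseFastReader(Map<String, Boolean> columnIsComplex, List<String> projected) {
    for (String col : projected) {
      // A projected complex (nested) column forces the regular reader.
      if (columnIsComplex.getOrDefault(col, false)) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    Map<String, Boolean> schema = Map.of("id", false, "name", false, "address", true);
    System.out.println(canUseFastReader(schema, List.of("id", "name")));     // true: only simple columns projected
    System.out.println(canUseFastReader(schema, List.of("id", "address")));  // false: a complex column is projected
  }
}
{code}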



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5797) Use more often the new parquet reader

2017-10-30 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-5797:
---
Labels:   (was: ready-to-commit)

> Use more often the new parquet reader
> -
>
> Key: DRILL-5797
> URL: https://issues.apache.org/jira/browse/DRILL-5797
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Reporter: Damien Profeta
>Assignee: Damien Profeta
> Fix For: 1.12.0
>
>
> The choice between the regular parquet reader and the optimized one is based 
> on what types of columns are in the file, but the columns actually read by the 
> query are not taken into account. We can slightly increase the cases where the 
> optimized reader is used by checking whether the projected columns are simple 
> or not.
> This is an optimization while waiting for the fast parquet reader to handle 
> complex structures.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5842) Refactor and simplify the fragment, operator contexts for testing

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225780#comment-16225780
 ] 

ASF GitHub Bot commented on DRILL-5842:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/978#discussion_r147842797
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/MiniPlanUnitTestBase.java
 ---
@@ -360,7 +366,7 @@ public T columnsToRead(String ... columnsToRead) {
*/
   public class JsonScanBuilder extends ScanPopBuider {
 List<String> jsonBatches = null;
-List<String> inputPaths = Collections.EMPTY_LIST;
+List<String> inputPaths = Collections.emptyList();
--- End diff --

This is a subtle point. Using the constant creates the expression:

```
public static final List EMPTY_LIST = new EmptyList<>(); // Definition
List<String> inputPaths = EMPTY_LIST; // Original code
```

The above is not type-friendly: we are setting a typed list (`inputPaths`) 
to an untyped constant (`EMPTY_LIST`).

The revised code uses Java's parameterized methods to work around the type 
ambiguity:

```
public static final <T> List<T> emptyList() ... // Definition
List<String> inputPaths = Collections.emptyList(); // Type-safe assignment
```

Functionally, the two expressions are identical. But, the original was 
type-unsafe and generated a compiler warning. The new one is type-safe and 
resolves the warning.
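
A tiny standalone example of the same contrast (illustrative only, not part of the PR): both methods return the same empty list, but only the `emptyList()` form lets the compiler infer the element type.

```
import java.util.Collections;
import java.util.List;

class EmptyListDemo {

  @SuppressWarnings("unchecked")
  static List<String> rawConstant() {
    return Collections.EMPTY_LIST;   // raw type: unchecked-conversion warning without the suppression
  }

  static List<String> typedFactory() {
    return Collections.emptyList();  // <T> is inferred as String: no warning
  }

  public static void main(String[] args) {
    System.out.println(rawConstant().equals(typedFactory()));  // true: functionally identical
  }
}
```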


> Refactor and simplify the fragment, operator contexts for testing
> -
>
> Key: DRILL-5842
> URL: https://issues.apache.org/jira/browse/DRILL-5842
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> Drill's execution engine has a "fragment context" that provides state for a 
> fragment as a whole, and an "operator context" which provides state for a 
> single operator. Historically, these have both been concrete classes that 
> make generous references to the Drillbit context, and hence need a full Drill 
> server in order to operate.
> Drill has historically made extensive use of system-level testing: build the 
> entire server and fire queries at it to test each component. Over time, we 
> are augmenting that approach with unit tests: the ability to test each 
> operator (or parts of an operator) in isolation.
> Since each operator requires access to both the operator and fragment 
> context, the fact that the contexts depend on the overall server creates a 
> large barrier to unit testing. An earlier checkin started down the path of 
> defining the contexts as interfaces that can have different run-time and 
> test-time implementations to enable testing.
> This ticket asks to refactor those interfaces: simplifying the operator 
> context and introducing an interface for the fragment context. New code will 
> use these new interfaces, while older code continues to use the concrete 
> implementations. Over time, as operators are enhanced, they can be modified 
> to allow unit-level testing.
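
For illustration only (the names below are hypothetical, not Drill's actual context interfaces), the pattern the description refers to looks roughly like this: operators depend on a small context interface that has a server-backed implementation at run time and a lightweight one in unit tests.

{code}
// Hypothetical shape of the refactoring described above; names are invented for the sketch.
interface OperatorContextLike {
  long memoryBudget();
}

class ServerOperatorContext implements OperatorContextLike {
  // In a real Drillbit this would delegate to the server's allocator and option manager.
  public long memoryBudget() { return 10L * 1024 * 1024 * 1024; }
}

class TestOperatorContext implements OperatorContextLike {
  private final long budget;
  TestOperatorContext(long budget) { this.budget = budget; }
  // Fully controlled by the test: no Drillbit, no mocks required.
  public long memoryBudget() { return budget; }
}
{code}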



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5842) Refactor and simplify the fragment, operator contexts for testing

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225772#comment-16225772
 ] 

ASF GitHub Bot commented on DRILL-5842:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/978
  
@jinfengni, sorry my description was a bit ambiguous. This PR has two goals:

1. Clean up the contexts based on what has been learned recently.
2. Evolve the contexts to allow "sub-operator" unit testing without mocks.

Drill has many tests that use mocks and these are unaffected by the change. 
The PR simply allows new tests to do sub-operator testing without mocks, when 
convenient. 


> Refactor and simplify the fragment, operator contexts for testing
> -
>
> Key: DRILL-5842
> URL: https://issues.apache.org/jira/browse/DRILL-5842
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> Drill's execution engine has a "fragment context" that provides state for a 
> fragment as a whole, and an "operator context" which provides state for a 
> single operator. Historically, these have both been concrete classes that 
> make generous references to the Drillbit context, and hence need a full Drill 
> server in order to operate.
> Drill has historically made extensive use of system-level testing: build the 
> entire server and fire queries at it to test each component. Over time, we 
> are augmenting that approach with unit tests: the ability to test each 
> operator (or parts of an operator) in isolation.
> Since each operator requires access to both the operator and fragment 
> context, the fact that the contexts depend on the overall server creates a 
> large barrier to unit testing. An earlier checkin started down the path of 
> defining the contexts as interfaces that can have different run-time and 
> test-time implementations to enable testing.
> This ticket asks to refactor those interfaces: simplifying the operator 
> context and introducing an interface for the fragment context. New code will 
> use these new interfaces, while older code continues to use the concrete 
> implementations. Over time, as operators are enhanced, they can be modified 
> to allow unit-level testing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5911) Upgrade esri-geometry-api version to 2.0.0 to avoid dependency on org.json library

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225742#comment-16225742
 ] 

ASF GitHub Bot commented on DRILL-5911:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1012


> Upgrade esri-geometry-api version to 2.0.0 to avoid dependency on org.json 
> library
> --
>
> Key: DRILL-5911
> URL: https://issues.apache.org/jira/browse/DRILL-5911
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Currently, {{drill-gis}} module uses {{esri-geometry-api}} library version 
> 1.2.1. This version of the library has the dependency on {{org.json}} library:
> {noformat}
> [INFO] org.apache.drill.contrib:drill-gis:jar:1.12.0-SNAPSHOT
> [INFO] \- com.esri.geometry:esri-geometry-api:jar:1.2.1:compile
> [INFO]\- org.json:json:jar:20090211:compile
> {noformat}
> In {{esri-geometry-api}} v.2.0.0 this dependency on {{org.json}} was removed: 
> https://github.com/Esri/geometry-api-java/commit/9bedde397f2f61675bc687b95875893aa7cd7f2f.
>  
> So we also need to update the version of this library to avoid the transitive 
> dependency on {{org.json}} from the {{drill-gis}} module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5905) Exclude jdk-tools from project dependencies

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225738#comment-16225738
 ] 

ASF GitHub Bot commented on DRILL-5905:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1009


> Exclude jdk-tools from project dependencies
> ---
>
> Key: DRILL-5905
> URL: https://issues.apache.org/jira/browse/DRILL-5905
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Minor
>  Labels: ready-to-commit
>
> hadoop-annotations and hbase-annotations have a system-scope dependency on the 
> JDK's tools.jar. This dependency is provided by the JDK and should be excluded 
> from the project dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5910) Logging exception when custom AuthenticatorFactory not found

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225741#comment-16225741
 ] 

ASF GitHub Bot commented on DRILL-5910:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1013


> Logging exception when  custom AuthenticatorFactory not found
> -
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> We need to log the exception when any custom AuthenticatorFactory fails to 
> be instantiated in the ClientAuthenticatorProvider constructor. We do this 
> to allow Drill to use the other available AuthenticatorFactory implementations 
> (see the sketch after these steps).
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit
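
A minimal sketch of the intended logging behavior (this is not the actual ClientAuthenticatorProvider code; the FactoryLoader class and its method are illustrative): a factory class that cannot be loaded is logged and skipped so the remaining factories stay usable.

{code}
import java.util.ArrayList;
import java.util.List;

class FactoryLoader {

  static List<Class<?>> loadFactories(List<String> classNames) {
    List<Class<?>> loaded = new ArrayList<>();
    for (String name : classNames) {
      try {
        loaded.add(Class.forName(name));
      } catch (ReflectiveOperationException | LinkageError e) {
        // Log and keep going instead of failing the whole provider.
        System.err.println("Failed to load authenticator factory " + name + ": " + e);
      }
    }
    return loaded;
  }
}
{code}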



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225740#comment-16225740
 ] 

ASF GitHub Bot commented on DRILL-5895:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1006


> Fix unit tests for mongo storage plugin
> ---
>
> Key: DRILL-5895
> URL: https://issues.apache.org/jira/browse/DRILL-5895
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - MongoDB
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Mongo tests intermittently finish with the following exception. It happens 
> because the [timeout 
> value|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/blob/1.7/src/main/java/de/flapdoodle/embed/process/runtime/ProcessControl.java#L132]
>  in the de.flapdoodle.embed.process library is too low for the mongod process 
> to be stopped gracefully. 
> I have created an 
> [issue|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/issues/64]
>  suggesting that the timeout be made configurable. For now, as a temporary 
> solution, we will log the mongod exception instead of throwing it.
> {noformat}
> [mongod output] Exception in thread "Thread-25" 
> java.lang.IllegalStateException: Couldn't kill mongod process!
> 
> Something bad happend. We couldn't kill mongod process, and tried a lot.
> If you want this problem solved you can help us if you open a new issue.
> Follow this link:
> https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues
> Thank you:)
> 
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
>   at 
> de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
>   at 
> de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
>   at 
> de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
>   at java.lang.Thread.run(Thread.java:745)
> Exception in thread "Thread-13" java.lang.IllegalStateException: Couldn't 
> kill mongod process!
> 
> Something bad happend. We couldn't kill mongod process, and tried a lot.
> If you want this problem solved you can help us if you open a new issue.
> Follow this link:
> https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues
> Thank you:)
> 
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
>   at 
> de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
>   at 
> de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
>   at 
> de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
>   at java.lang.Thread.run(Thread.java:745)
> Results :
> Tests in error: 
>   MongoTestSuit.tearDownCluster:260 » IllegalState Couldn't kill mongod 
> process!...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5906) java.lang.NullPointerException while quering Hive ORC tables on MapR cluster.

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225739#comment-16225739
 ] 

ASF GitHub Bot commented on DRILL-5906:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1010


> java.lang.NullPointerException while quering Hive ORC tables on MapR cluster. 
> --
>
> Key: DRILL-5906
> URL: https://issues.apache.org/jira/browse/DRILL-5906
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Vitalii Diravka
>Assignee: Vitalii Diravka
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
> Attachments: bucketed_table.zip
>
>
> The record reader throws an exception when trying to read an empty split.
> To reproduce the issue, put a bucketed table with ORC files (one or more of 
> them empty) onto MapR-FS and 
> run the following Hive DDL: 
> {code}
> CREATE TABLE `orc_bucketed`(
>   `id` int,
>   `name` string)
> CLUSTERED BY (
>   id)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>   'maprfs:/tmp/bucketed_table/';
> {code}
> Possible fix: upgrade Drill's hive.version to 
> [1.2.0-mapr-1707|https://maprdocs.mapr.com/52/EcosystemRN/HiveRN-1.2.1-1707.html],
>  where this issue was fixed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-1499) Different column order could appear in the result set for a schema-less select * query, even there are no changing schemas.

2017-10-30 Thread Paul Rogers (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225680#comment-16225680
 ] 

Paul Rogers commented on DRILL-1499:


The work being done in the "batch size control" project revises the projection 
mechanism and resolves issues like this by ensuring schema (and vector) 
persistence whenever possible.

> Different column order could appear in the result set for a schema-less 
> select * query, even there are no changing schemas.
> ---
>
> Key: DRILL-1499
> URL: https://issues.apache.org/jira/browse/DRILL-1499
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Jinfeng Ni
>Assignee: Vitalii Diravka
> Fix For: 1.12.0
>
>
> For a select * query referring to a schema-less table, Drill could return a 
> different column order, depending on the physical operators the query involves:
> Q1:
> {code}
> select * from cp.`employee.json` limit 3;
> +-++++-+++---++++---+-+++-+
> | employee_id | full_name  | first_name | last_name  | position_id | 
> position_title |  store_id  | department_id | birth_date | hire_date  |   
> salary   | supervisor_id | education_level | marital_status |   gender   | 
> management_role |
> +-++++-+++---++++---+-+++-+
> {code}
> Q2:
> {code}
> select * from cp.`employee.json` order by last_name limit 3;
> ++---+-+-++++++-++-++++---+
> | birth_date | department_id | education_level | employee_id | first_name | 
> full_name  |   gender   | hire_date  | last_name  | management_role | 
> marital_status | position_id | position_title |   salary   |  store_id  | 
> supervisor_id |
> ++---+-+-++++++-++-++++---+
> {code}
> The difference between Q1 and Q2 is the ORDER BY clause. With the ORDER BY 
> clause in Q2, Drill sorts the column names alphabetically, while for Q1 
> the column names are in the same order as in the data source. 
> The underlying cause of this difference is that the sort or sort-based 
> merge operator requires canonicalization, since the incoming batches 
> could contain different schemas. 
> However, it would be better if such canonicalization were used only when the 
> incoming batches actually have changing schemas. If all the incoming batches 
> have identical schemas, there is no need to sort the column order. With this 
> fix, Drill will present the same column order in the result set for a 
> schema-less select * query, as long as there are no changing schemas from the 
> incoming data sources. 
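
A simplified sketch of the behavior the description asks for (not Drill's actual operator code; it works on plain column-name lists rather than record batches): keep the incoming column order and only fall back to a canonical, sorted order when the batches really arrive with different schemas.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class ColumnOrder {

  static List<String> outputColumns(List<List<String>> batchSchemas) {
    List<String> first = batchSchemas.get(0);
    boolean schemasDiffer = batchSchemas.stream().anyMatch(s -> !s.equals(first));
    if (!schemasDiffer) {
      return first;                   // identical schemas: preserve the source column order
    }
    List<String> canonical = new ArrayList<>(first);
    Collections.sort(canonical);      // changing schemas: canonicalize (here, simply sort the names)
    return canonical;
  }
}
{code}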



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5878) TableNotFound exception is being reported for a wrong storage plugin.

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225661#comment-16225661
 ] 

ASF GitHub Bot commented on DRILL-5878:
---

Github user HanumathRao commented on the issue:

https://github.com/apache/drill/pull/996
  
@arina-ielchiieva Thank you for the review comments. I have modified the 
code accordingly. Please let me know if anything needs to be changed.


> TableNotFound exception is being reported for a wrong storage plugin.
> -
>
> Key: DRILL-5878
> URL: https://issues.apache.org/jira/browse/DRILL-5878
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Hanumath Rao Maduri
>Assignee: Hanumath Rao Maduri
>Priority: Minor
> Fix For: 1.12.0
>
>
> Drill is reporting TableNotFound exception for a wrong storage plugin. 
> Consider the following query where employee.json is queried using cp plugin.
> {code}
> 0: jdbc:drill:zk=local> select * from cp.`employee.json` limit 10;
> +--++-++--+-+---++-++--++---+-+-++
> | employee_id  | full_name  | first_name  | last_name  | position_id  
> | position_title  | store_id  | department_id  | birth_date  |   
> hire_date|  salary  | supervisor_id  |  education_level  | 
> marital_status  | gender  |  management_role   |
> +--++-++--+-+---++-++--++---+-+-++
> | 1| Sheri Nowmer   | Sheri   | Nowmer | 1
> | President   | 0 | 1  | 1961-08-26  | 
> 1994-12-01 00:00:00.0  | 8.0  | 0  | Graduate Degree   | S
>| F   | Senior Management  |
> | 2| Derrick Whelply| Derrick | Whelply| 2
> | VP Country Manager  | 0 | 1  | 1915-07-03  | 
> 1994-12-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | M
>| M   | Senior Management  |
> | 4| Michael Spence | Michael | Spence | 2
> | VP Country Manager  | 0 | 1  | 1969-06-20  | 
> 1998-01-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | S
>| M   | Senior Management  |
> | 5| Maya Gutierrez | Maya| Gutierrez  | 2
> | VP Country Manager  | 0 | 1  | 1951-05-10  | 
> 1998-01-01 00:00:00.0  | 35000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 6| Roberta Damstra| Roberta | Damstra| 3
> | VP Information Systems  | 0 | 2  | 1942-10-08  | 
> 1994-12-01 00:00:00.0  | 25000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 7| Rebecca Kanagaki   | Rebecca | Kanagaki   | 4
> | VP Human Resources  | 0 | 3  | 1949-03-27  | 
> 1994-12-01 00:00:00.0  | 15000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 8| Kim Brunner| Kim | Brunner| 11   
> | Store Manager   | 9 | 11 | 1922-08-10  | 
> 1998-01-01 00:00:00.0  | 1.0  | 5  | Bachelors Degree  | S
>| F   | Store Management   |
> | 9| Brenda Blumberg| Brenda  | Blumberg   | 11   
> | Store Manager   | 21| 11 | 1979-06-23  | 
> 1998-01-01 00:00:00.0  | 17000.0  | 5  | Graduate Degree   | M
>| F   | Store Management   |
> | 10   | Darren Stanz   | Darren  | Stanz  | 5
> | VP Finance  | 0 | 5  | 1949-08-26  | 
> 1994-12-01 00:00:00.0  | 5.0  | 1  | Partial College   | M
>| M   | Senior Management  |
> | 11   | Jonathan Murraiin  | Jonathan| Murraiin   | 11   
> | Store Manager   | 1 | 11 | 1967-06-20  | 
> 1998-01-01 00:00:00.0  | 15000.0  | 5  | Graduate Degree   | S
>| M   | Store Management   |
> 

[jira] [Commented] (DRILL-5878) TableNotFound exception is being reported for a wrong storage plugin.

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225660#comment-16225660
 ] 

ASF GitHub Bot commented on DRILL-5878:
---

Github user HanumathRao commented on a diff in the pull request:

https://github.com/apache/drill/pull/996#discussion_r147818548
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/SchemaUtilites.java
 ---
@@ -77,6 +77,22 @@ public static SchemaPlus findSchema(final SchemaPlus 
defaultSchema, final String
 return findSchema(defaultSchema, schemaPathAsList);
   }
 
+  /**
+   * Utility function to get the commonPrefix schema between two supplied 
schemas.
--- End diff --

done.
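
For readers of this thread, an illustrative sketch of what a "common prefix of two schema paths" can mean (this is not the code under review in SchemaUtilites, just the idea the Javadoc names):

```
class SchemaPrefix {

  /** e.g. commonPrefix("dfs.tmp", "dfs.root") returns "dfs"; unrelated schemas return "". */
  static String commonPrefix(String schemaA, String schemaB) {
    String[] a = schemaA.split("\\.");
    String[] b = schemaB.split("\\.");
    StringBuilder prefix = new StringBuilder();
    for (int i = 0; i < Math.min(a.length, b.length) && a[i].equals(b[i]); i++) {
      if (prefix.length() > 0) {
        prefix.append('.');
      }
      prefix.append(a[i]);
    }
    return prefix.toString();
  }
}
```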


> TableNotFound exception is being reported for a wrong storage plugin.
> -
>
> Key: DRILL-5878
> URL: https://issues.apache.org/jira/browse/DRILL-5878
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Hanumath Rao Maduri
>Assignee: Hanumath Rao Maduri
>Priority: Minor
> Fix For: 1.12.0
>
>
> Drill is reporting TableNotFound exception for a wrong storage plugin. 
> Consider the following query where employee.json is queried using cp plugin.
> {code}
> 0: jdbc:drill:zk=local> select * from cp.`employee.json` limit 10;
> +--++-++--+-+---++-++--++---+-+-++
> | employee_id  | full_name  | first_name  | last_name  | position_id  
> | position_title  | store_id  | department_id  | birth_date  |   
> hire_date|  salary  | supervisor_id  |  education_level  | 
> marital_status  | gender  |  management_role   |
> +--++-++--+-+---++-++--++---+-+-++
> | 1| Sheri Nowmer   | Sheri   | Nowmer | 1
> | President   | 0 | 1  | 1961-08-26  | 
> 1994-12-01 00:00:00.0  | 8.0  | 0  | Graduate Degree   | S
>| F   | Senior Management  |
> | 2| Derrick Whelply| Derrick | Whelply| 2
> | VP Country Manager  | 0 | 1  | 1915-07-03  | 
> 1994-12-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | M
>| M   | Senior Management  |
> | 4| Michael Spence | Michael | Spence | 2
> | VP Country Manager  | 0 | 1  | 1969-06-20  | 
> 1998-01-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | S
>| M   | Senior Management  |
> | 5| Maya Gutierrez | Maya| Gutierrez  | 2
> | VP Country Manager  | 0 | 1  | 1951-05-10  | 
> 1998-01-01 00:00:00.0  | 35000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 6| Roberta Damstra| Roberta | Damstra| 3
> | VP Information Systems  | 0 | 2  | 1942-10-08  | 
> 1994-12-01 00:00:00.0  | 25000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 7| Rebecca Kanagaki   | Rebecca | Kanagaki   | 4
> | VP Human Resources  | 0 | 3  | 1949-03-27  | 
> 1994-12-01 00:00:00.0  | 15000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 8| Kim Brunner| Kim | Brunner| 11   
> | Store Manager   | 9 | 11 | 1922-08-10  | 
> 1998-01-01 00:00:00.0  | 1.0  | 5  | Bachelors Degree  | S
>| F   | Store Management   |
> | 9| Brenda Blumberg| Brenda  | Blumberg   | 11   
> | Store Manager   | 21| 11 | 1979-06-23  | 
> 1998-01-01 00:00:00.0  | 17000.0  | 5  | Graduate Degree   | M
>| F   | Store Management   |
> | 10   | Darren Stanz   | Darren  | Stanz  | 5
> | VP Finance  | 0 | 5  | 1949-08-26  | 
> 1994-12-01 00:00:00.0  | 5.0  | 1  | Partial College   | M
>| M   | Senior Management  |
> | 11   | Jonathan Murraiin  | Jonathan| Murraiin   | 11 

[jira] [Commented] (DRILL-5874) NPE in AnonWebUserConnection.cleanupSession()

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225655#comment-16225655
 ] 

ASF GitHub Bot commented on DRILL-5874:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/993


> NPE in AnonWebUserConnection.cleanupSession()
> -
>
> Key: DRILL-5874
> URL: https://issues.apache.org/jira/browse/DRILL-5874
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Sorabh Hamirwasia
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> When debugging another issue, I tried to use the Web UI to run the example 
> query:
> {code}
> SELECT * FROM cp.`employee.json` LIMIT 20
> {code}
> The query failed with this error:
> {noformat}
> Query Failed: An Error Occurred
> java.lang.NullPointerException
> {noformat}
> No stack trace was provided in the log, even at DEBUG level.
> Debugging, the problem appears to be deep inside 
> {{AnonWebUserConnection.cleanupSession()}}:
> {code}
> package io.netty.channel;
> public class DefaultChannelPromise ...
>     protected EventExecutor executor() {
>         EventExecutor e = super.executor();
>         if (e == null) {
>             return channel().eventLoop();
>         } else {
>             return e;
>         }
>     }
> {code}
> In the above, {{channel()}} is null. The {{channel}} field is also null.
> This may indicate that some part of the Web UI was not set up correctly. This 
> is a recent change, as this code worked several days ago.
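
For illustration only, a null-tolerant cleanup along these lines would avoid the NPE (a sketch with hypothetical names, not the actual Drill fix):

{code}
// Sketch: guard the close path when the web connection was never fully initialized.
import io.netty.channel.ChannelFuture;

final class SessionCleanup {
  static void closeQuietly(ChannelFuture closeFuture) {
    // channel() may be null when setup failed part-way, which is what triggers the NPE above.
    if (closeFuture == null || closeFuture.channel() == null) {
      return;
    }
    closeFuture.channel().close();
  }
}
{code}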



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-3101) Setting "slice_target" to 1 changes the order of the columns in a "select *" query with order by

2017-10-30 Thread Vitalii Diravka (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225607#comment-16225607
 ] 

Vitalii Diravka commented on DRILL-3101:


The issue is raised in the context of DRILL-5822 and DRILL-5845 again.

> Setting "slice_target" to 1 changes the order of the columns in a "select *" 
> query with order by
> 
>
> Key: DRILL-3101
> URL: https://issues.apache.org/jira/browse/DRILL-3101
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Reporter: Rahul Challapalli
>Assignee: Chris Westin
> Fix For: 1.0.0
>
>
> git.commit.id.abbrev=d8b1975
> With Default Settings :
> {code}
> select * from region order by length(r_name);
> +-++---+
> | r_regionkey | r_name | r_comment |
> +-++---+
> | 2 | ASIA | ges. thinly even pinto beans ca |
> | 0 | AFRICA | lar deposits. blithely final packages cajole. regular waters 
> are final requests. regular accounts are according to  |
> | 3 | EUROPE | ly final courts cajole furiously final excuse |
> | 1 | AMERICA | hs use ironic, even requests. s |
> | 4 | MIDDLE EAST | uickly special accounts cajole carefully blithely close 
> requests. carefully final asymptotes haggle furiousl |
> {code}
> Now after setting the slice target to 1, the order of the columns changed
> {code}
> 0: jdbc:drill:schema=dfs_eea> alter session set `planner.slice_target` = 1;
> +---++
> |  ok   |summary |
> +---++
> | true  | planner.slice_target updated.  |
> +---++
> 1 row selected (0.11 seconds)
> 0: jdbc:drill:schema=dfs_eea> select * from region order by length(r_name);
> +---++-+
> | r_comment | r_name | r_regionkey |
> +---++-+
> | ges. thinly even pinto beans ca | ASIA | 2 |
> | lar deposits. blithely final packages cajole. regular waters are final 
> requests. regular accounts are according to  | AFRICA | 0 |
> | ly final courts cajole furiously final excuse | EUROPE | 3 |
> | hs use ironic, even requests. s | AMERICA | 1 |
> | uickly special accounts cajole carefully blithely close requests. carefully 
> final asymptotes haggle furiousl | MIDDLE EAST | 4 |
> +---++-+
> 5 rows selected (0.796 seconds)
> {code}
> This does not happen when we do not use an "order by" in query



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (DRILL-1499) Different column order could appear in the result set for a schema-less select * query, even there are no changing schemas.

2017-10-30 Thread Vitalii Diravka (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitalii Diravka resolved DRILL-1499.

   Resolution: Resolved
 Assignee: Vitalii Diravka  (was: Steven Phillips)
Fix Version/s: (was: Future)
   1.12.0

There is no need to canonicalize the batch or container since RecordBatchLoader 
swallows the Schema Change for now if two batches have different column ordering.
Resolved in the context of DRILL-5845

> Different column order could appear in the result set for a schema-less 
> select * query, even there are no changing schemas.
> ---
>
> Key: DRILL-1499
> URL: https://issues.apache.org/jira/browse/DRILL-1499
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Jinfeng Ni
>Assignee: Vitalii Diravka
> Fix For: 1.12.0
>
>
> For a select * query referring to a schema-less table, Drill could return 
> different column, depending on the physical operators the query involves:
> Q1:
> {code}
> select * from cp.`employee.json` limit 3;
> +-++++-+++---++++---+-+++-+
> | employee_id | full_name  | first_name | last_name  | position_id | 
> position_title |  store_id  | department_id | birth_date | hire_date  |   
> salary   | supervisor_id | education_level | marital_status |   gender   | 
> management_role |
> +-++++-+++---++++---+-+++-+
> {code}
> Q2:
> {code}
> select * from cp.`employee.json` order by last_name limit 3;
> ++---+-+-++++++-++-++++---+
> | birth_date | department_id | education_level | employee_id | first_name | 
> full_name  |   gender   | hire_date  | last_name  | management_role | 
> marital_status | position_id | position_title |   salary   |  store_id  | 
> supervisor_id |
> ++---+-+-++++++-++-++++---+
> {code}
> The difference between Q1 and Q2 is the order by clause.  With the order by 
> clause in Q2, Drill will sort the column names alphabetically, while for Q1 
> the column names are in the same order as in the data source. 
> The underlying cause of this difference is that the sort or sort-based 
> merge operator requires canonicalization, since the incoming batches 
> could contain different schemas. 
>  However, it would be better if such canonicalization were used only when the 
> incoming batches have changing schemas. If all the incoming batches have 
> identical schemas, there is no need to sort the column order.  With this fix, Drill 
> will present the same column order in the result set for a schema-less 
> select * query, if there are no changing schemas in the incoming data sources. 
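
As a rough illustration of the proposed behaviour (a hypothetical helper, not Drill's actual code), the decision could hinge on a simple comparison of the incoming column orders:

{code}
// Sketch: canonicalize (re-sort columns) only when the incoming batches really differ.
import java.util.List;

final class SchemaOrderCheck {
  static boolean needsCanonicalization(List<String> previousColumns, List<String> incomingColumns) {
    // Identical name order means the sort/merge operator can keep the original column order.
    return !previousColumns.equals(incomingColumns);
  }
}
{code}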



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5582) [Threat Modeling] Drillbit may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Drillbit

2017-10-30 Thread Laurent Goujon (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225474#comment-16225474
 ] 

Laurent Goujon commented on DRILL-5582:
---

It seems this change is backward incompatible (a newer JDBC/C++ connector cannot 
be used with an older server). I'm not sure I follow the threat model 
either: since there's no server authentication (something TLS would solve), 
what prevents a MITM drillbit from just faking authentication?

> [Threat Modeling] Drillbit may be spoofed by an attacker and this may lead to 
> data being written to the attacker's target instead of Drillbit
> -
>
> Key: DRILL-5582
> URL: https://issues.apache.org/jira/browse/DRILL-5582
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rob Wu
>Assignee: Sorabh Hamirwasia
>Priority: Minor
>  Labels: doc-impacting
> Fix For: 1.12.0
>
>
> *Consider the scenario:*
> Alice has a drillbit (my.drillbit.co) with plain and kerberos authentication 
> enabled containing important data. Bob, the attacker, attempts to spoof the 
> connection and redirect it to his own drillbit (fake.drillbit.co) with no 
> authentication setup. 
> When Alice is under attack and attempts to connect to her secure drillbit, 
> she is actually authenticating against Bob's drillbit. At this point, the 
> connection should have failed due to unmatched configuration. However, the 
> current implementation will return SUCCESS as long as the (spoofing) drillbit 
> has no authentication requirement set.
> Currently, the drillbit <-> drill client connection accepts the lowest 
> authentication configuration set on the server. This leaves unsuspecting users 
> vulnerable to spoofing. 
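
A client-side guard along the following lines illustrates the concern (a hypothetical sketch, not Drill's handshake code):

{code}
// Sketch: refuse to silently downgrade when the client expects authentication
// but the (possibly spoofed) server advertises none.
import java.util.Set;

final class HandshakeCheck {
  static void verifyServerAuth(boolean clientRequiresAuth, Set<String> serverMechanisms) {
    if (clientRequiresAuth && (serverMechanisms == null || serverMechanisms.isEmpty())) {
      throw new IllegalStateException("Server offered no authentication but the client requires it");
    }
  }
}
{code}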



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5896) Handle vector creation in HbaseRecordReader to avoid NullableInt vectors later

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225365#comment-16225365
 ] 

ASF GitHub Bot commented on DRILL-5896:
---

Github user prasadns14 commented on the issue:

https://github.com/apache/drill/pull/1005
  
@paul-rogers please review the changes


> Handle vector creation in HbaseRecordReader to avoid NullableInt vectors later
> --
>
> Key: DRILL-5896
> URL: https://issues.apache.org/jira/browse/DRILL-5896
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - HBase
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: 1.12.0
>
>
> When an HBase query projects both a column family and a column in that column 
> family, the vector for the column is not created in the HbaseRecordReader.
> So, in cases where the scan batch is empty, we create a NullableInt vector for 
> this column. We need to handle column creation in the reader.
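
A simplified sketch of the idea (hypothetical; the real reader builds the column inside a MapVector for the column family, so this is only an approximation):

{code}
// Sketch: create the projected column's vector up front so an empty scan batch
// does not fall back to a NullableInt vector.
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.common.types.Types;
import org.apache.drill.exec.exception.SchemaChangeException;
import org.apache.drill.exec.physical.impl.OutputMutator;
import org.apache.drill.exec.record.MaterializedField;
import org.apache.drill.exec.vector.NullableVarBinaryVector;

final class ProjectedColumnSetup {
  static void createVector(OutputMutator output, String columnName) throws SchemaChangeException {
    MaterializedField field = MaterializedField.create(columnName, Types.optional(MinorType.VARBINARY));
    output.addField(field, NullableVarBinaryVector.class);
  }
}
{code}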



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5834) Add Networking Functions

2017-10-30 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-5834:
---
Labels: doc-impacting  (was: doc-impacting ready-to-commit)

> Add Networking Functions
> 
>
> Key: DRILL-5834
> URL: https://issues.apache.org/jira/browse/DRILL-5834
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Minor
>  Labels: doc-impacting
> Fix For: 1.12.0
>
>
> On the heels of the PCAP plugin, this is a collection of functions that would 
> facilitate network analysis using Drill. 
> The functions include:
> inet_aton(<ip>): Converts an IPv4 address into an integer.
> inet_ntoa(<int>): Converts an integer IP into dotted decimal notation
> in_network(<ip>, <cidr>): Returns true if the IP address is in the given 
> CIDR block
> address_count(<cidr>): Returns the number of IPs in a given CIDR block
> broadcast_address(<cidr>): Returns the broadcast address for a given CIDR 
> block
> netmask(<cidr>): Returns the netmask for a given CIDR block.
> low_address(<cidr>): Returns the first address in a given CIDR block.
> high_address(<cidr>): Returns the last address in a given CIDR block.
> url_encode(<url>): Returns a URL encoded string.
> url_decode(<url>): Decodes a URL encoded string.
> is_valid_IP(<ip>): Returns true if the IP is a valid IP address
> is_private_ip(<ip>): Returns true if the IP is a private IPv4 address
> is_valid_IPv4(<ip>): Returns true if the IP is a valid IPv4 address
> is_valid_IPv6(<ip>): Returns true if the IP is a valid IPv6 address
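
As a plain-Java illustration of the inet_aton()/in_network() semantics listed above (not the code from this PR):

{code}
// Sketch of the IPv4 arithmetic behind inet_aton() and in_network().
final class Ipv4Math {
  static long inetAton(String dotted) {
    long value = 0;
    for (String octet : dotted.split("\\.")) {
      value = (value << 8) | Integer.parseInt(octet);
    }
    return value;
  }

  // e.g. inNetwork("192.168.1.7", "192.168.1.0/24") == true
  static boolean inNetwork(String ip, String cidr) {
    String[] parts = cidr.split("/");
    int prefix = Integer.parseInt(parts[1]);
    long mask = prefix == 0 ? 0 : (0xFFFFFFFFL << (32 - prefix)) & 0xFFFFFFFFL;
    return (inetAton(ip) & mask) == (inetAton(parts[0]) & mask);
  }
}
{code}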



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5910) Logging exception when custom AuthenticatorFactory not found

2017-10-30 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5910:

Reviewer: Vlad Rozov  (was: Arina Ielchiieva)

> Logging exception when  custom AuthenticatorFactory not found
> -
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> We need to log the exception when any of the custom AuthenticatorFactory classes fails to 
> be instantiated in the ClientAuthenticatorProvider constructor. We are doing this 
> to allow Drill to use the other available AuthenticatorFactory implementations.
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit
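
A minimal sketch of the requested log-and-continue behaviour (hypothetical names, not the actual ClientAuthenticatorProvider code):

{code}
// Sketch: instantiate each configured factory reflectively; log and skip failures
// so the remaining factories stay usable.
import java.util.HashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class CustomFactoryLoader {
  private static final Logger logger = LoggerFactory.getLogger(CustomFactoryLoader.class);

  static Map<String, Object> load(String[] factoryClassNames) {
    Map<String, Object> factories = new HashMap<>();
    for (String className : factoryClassNames) {
      try {
        factories.put(className, Class.forName(className).getDeclaredConstructor().newInstance());
      } catch (ReflectiveOperationException e) {
        // Log the failure and keep going so other authenticator factories remain available.
        logger.warn("Failed to instantiate custom authenticator factory {}", className, e);
      }
    }
    return factories;
  }
}
{code}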



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5910) Logging exception when custom AuthenticatorFactory not found

2017-10-30 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5910:

Labels: ready-to-commit  (was: )

> Logging exception when  custom AuthenticatorFactory not found
> -
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> We need to log the exception when any of the custom AuthenticatorFactory classes fails to 
> be instantiated in the ClientAuthenticatorProvider constructor. We are doing this 
> to allow Drill to use the other available AuthenticatorFactory implementations.
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5910) Logging exception when custom AuthenticatorFactory not found

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16225120#comment-16225120
 ] 

ASF GitHub Bot commented on DRILL-5910:
---

Github user vrozov commented on the issue:

https://github.com/apache/drill/pull/1013
  
LGTM


> Logging exception when  custom AuthenticatorFactory not found
> -
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
> Fix For: 1.12.0
>
>
> We need to log the exception when any of the custom AuthenticatorFactory classes fails to 
> be instantiated in the ClientAuthenticatorProvider constructor. We are doing this 
> to allow Drill to use the other available AuthenticatorFactory implementations.
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (DRILL-5768) Drill planner should not allow select * with group by clause

2017-10-30 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224886#comment-16224886
 ] 

Arina Ielchiieva edited comment on DRILL-5768 at 10/30/17 1:32 PM:
---

Issue to ban star in queries with group by was resolved in Calcite 1.5 
(https://issues.apache.org/jira/browse/CALCITE-546).

The root cause is that 
[AggChecker|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/validate/AggChecker.java#L93]
 does not do full validation and exits too early.
The key change was made in how star in represented  in 
[SqlIdentifier|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/SqlIdentifier.java#L359].

I suggest we wait for Calcite upgrade.


was (Author: arina):
Issue to ban star in queries with group by was resolved in Calcite 1.5 
(https://issues.apache.org/jira/browse/CALCITE-546).

The root cause is that 
[AggChecker|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/validate/AggChecker.java#L93]
 does not do full validation and exits, too early.
The key change was made in how star in represented  in 
[SqlIdentifier|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/SqlIdentifier.java#L359].

I suggest we wait for Calcite upgrade.

> Drill planner should not allow select * with group by clause
> ---
>
> Key: DRILL-5768
> URL: https://issues.apache.org/jira/browse/DRILL-5768
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Jinfeng Ni
>Assignee: Roman Kulyk
>
> The following query should not be allowed in Drill planner.
> {code}
> select * from cp.`tpch/nation.parquet` group by n_regionkey;
> ++
> | *  |
> ++
> | 0  |
> | 1  |
> | 4  |
> | 3  |
> | 2  |
> ++
> {code}
> However, Drill allows such a query to run and, even worse, the result is 
> incorrect.  It would make sense to block this type of query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (DRILL-5768) Drill planner should not allow select * with group by clause

2017-10-30 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224886#comment-16224886
 ] 

Arina Ielchiieva edited comment on DRILL-5768 at 10/30/17 1:32 PM:
---

Issue to ban star in queries with group by was resolved in Calcite 1.5 
(https://issues.apache.org/jira/browse/CALCITE-546).

The root cause is that 
[AggChecker|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/validate/AggChecker.java#L93]
 does not do full validation and exits, too early.
The key change was made in how star in represented  in 
[SqlIdentifier|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/SqlIdentifier.java#L359].

I suggest we wait for Calcite upgrade.


was (Author: arina):
Issue to ban star in queries with group by was resolved in Calcite 1.5 
(https://issues.apache.org/jira/browse/CALCITE-546).

The root cause is that 
[AggChecker|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/validate/AggChecker.java#L93]
 does not do full validation and exists, too early.
The key change was made in how star in represented  in 
[SqlIdentifier|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/SqlIdentifier.java#L359].

I suggest we wait for Calcite upgrade.

> Drill planner should not allow select * with group by clause
> ---
>
> Key: DRILL-5768
> URL: https://issues.apache.org/jira/browse/DRILL-5768
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Jinfeng Ni
>Assignee: Roman Kulyk
>
> The following query should not be allowed in Drill planner.
> {code}
> select * from cp.`tpch/nation.parquet` group by n_regionkey;
> ++
> | *  |
> ++
> | 0  |
> | 1  |
> | 4  |
> | 3  |
> | 2  |
> ++
> {code}
> However, Drill allows such a query to run and, even worse, the result is 
> incorrect.  It would make sense to block this type of query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224952#comment-16224952
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583993
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map<String, String> queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set<MetricDTO> getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set<String> getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List<ColumnDTO> getUnfixedColumnsToSchema() {
+Set<MetricDTO> tables = getAllMetricsByTags();
+List<ColumnDTO> unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map<String, String> params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set<MetricDTO> getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
+  return Collections.emptySet();
+}
+  }
+
+  private Set<String> getTableNames() {
+try {
+  return client.getAllTablesName().execute().body();
+} catch (IOException e) {
+  e.printStackTrace();
+  return Collections.emptySet();
+}
+  }
+
+  private Set<MetricDTO> getAllTablesWithSpecialTag(DBQuery base) throws 
IOException {
+return 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224944#comment-16224944
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582484
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/OpenTSDBTypes.java
 ---
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client;
+
+/**
+ * Types in openTSDB records,
+ * used for converting openTSDB data to Sql representation
+ */
+public enum OpenTSDBTypes {
+  STRING,
+  DOUBLE,
+  TIMESTAMP,
--- End diff --

Comma can be removed.


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224950#comment-16224950
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583640
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map<String, String> queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set<MetricDTO> getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set<String> getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List<ColumnDTO> getUnfixedColumnsToSchema() {
+Set<MetricDTO> tables = getAllMetricsByTags();
+List<ColumnDTO> unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map<String, String> params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set<MetricDTO> getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
--- End diff --

Why do we log the error rather than fail? Is the failure non-critical?
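
For comparison, a fail-fast variant of the quoted method could look like this (a sketch using Drill's UserException builder, assuming the class's existing imports and logger; not the submitted code):

{code}
// Sketch: surface the IOException to the user instead of swallowing it.
private Set<MetricDTO> getAllMetricsByTags() {
  try {
    return getAllMetricsFromDBByTags();
  } catch (IOException e) {
    throw UserException.dataReadError(e)
        .message("Failure while fetching metrics from openTSDB")
        .build(log);
  }
}
{code}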


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224953#comment-16224953
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584453
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
+
+  public OpenTSDBSchemaFactory(OpenTSDBStoragePlugin plugin, String 
schemaName) {
+this.plugin = plugin;
+this.schemaName = schemaName;
+  }
+
+  @Override
+  public void registerSchemas(SchemaConfig schemaConfig, SchemaPlus 
parent) throws IOException {
+OpenTSDBTables schema = new OpenTSDBTables(schemaName);
+parent.add(schemaName, schema);
+  }
+
+  class OpenTSDBTables extends AbstractSchema {
+private final Map<String, OpenTSDBDatabaseSchema> schemaMap = 
Maps.newHashMap();
+
+OpenTSDBTables(String name) {
+  super(Collections.emptyList(), name);
+}
+
+@Override
+public AbstractSchema getSubSchema(String name) {
+  Set<String> tables;
+  if (!schemaMap.containsKey(name)) {
+tables = plugin.getClient().getAllTableNames();
+schemaMap.put(name, new OpenTSDBDatabaseSchema(tables, this, 
name));
+  }
+  return schemaMap.get(name);
+}
+
+@Override
+public Set<String> getSubSchemaNames() {
+  return Collections.emptySet();
+}
+
+@Override
+public Table getTable(String name) {
+  OpenTSDBScanSpec scanSpec = new OpenTSDBScanSpec(name);
+  name = getValidTableName(name);
+  try {
+return new DrillOpenTSDBTable(schemaName, plugin, new 
Schema(plugin.getClient(), name), scanSpec);
+  } catch (Exception e) {
+log.warn("Failure while retrieving openTSDB table {}", name, e);
+return null;
+  }
+}
+
+@Override
+public Set<String> getTableNames() {
+  return plugin.getClient().getAllTableNames();
+}
+
+@Override
+public CreateTableEntry createNewTable(final String tableName, 
List<String> partitionColumns) {
+  return null;
+}
+
+@Override
+public void dropTable(String tableName) {
--- End diff --

Parent behavior is to fail indication that table dropping is not 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224947#comment-16224947
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584018
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map<String, String> queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set<MetricDTO> getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set<String> getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List<ColumnDTO> getUnfixedColumnsToSchema() {
+Set<MetricDTO> tables = getAllMetricsByTags();
+List<ColumnDTO> unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map<String, String> params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set<MetricDTO> getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
+  return Collections.emptySet();
+}
+  }
+
+  private Set<String> getTableNames() {
+try {
+  return client.getAllTablesName().execute().body();
+} catch (IOException e) {
+  e.printStackTrace();
+  return Collections.emptySet();
+}
+  }
+
+  private Set<MetricDTO> getAllTablesWithSpecialTag(DBQuery base) throws 
IOException {
+return 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224938#comment-16224938
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582692
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map<OpenTSDBTypes, MinorType> TYPES;
+
+  private Service db;
+
+  private Iterator<MetricDTO> tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List<SchemaPath> projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set<MetricDTO> tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
+}
+this.tableIterator = tables.iterator();
+  }
+
+  @Override
+  public int next() {
+try {
+  return processOpenTSDBTablesData();
+} catch (SchemaChangeException e) {
+  log.info(e.toString());
+  return 0;
+}
+  }
+
+  @Override
+  protected boolean isSkipQuery() {
--- End diff --

Do you need to override this method as you 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224943#comment-16224943
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583417
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/query/DBQuery.java
 ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.query;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.DEFAULT_TIME;
+
+/**
+ * DBQuery is an abstraction of an openTSDB query,
+ * that used for extracting data from the storage system by POST request 
to DB.
+ * 
+ * An OpenTSDB query requires at least one sub query,
+ * a means of selecting which time series should be included in the result 
set.
+ */
+public class DBQuery {
+
+  /**
+   * The start time for the query. This can be a relative or absolute 
timestamp.
+   */
+  private String start;
+  /**
+   * One or more sub subQueries used to select the time series to return.
+   */
+  private Set<Query> queries;
+
+  private DBQuery(Builder builder) {
+this.start = builder.start;
+this.queries = builder.queries;
+  }
+
+  public String getStart() {
+return start;
+  }
+
+  public Set<Query> getQueries() {
+return queries;
+  }
+
+  public static class Builder {
+
+private String start = DEFAULT_TIME;
+private Set<Query> queries = new HashSet<>();
--- End diff --

So what will happen if the queries object is empty? If we will eventually 
fail, maybe it's better to fail in the `Builder`?
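
For illustration, failing fast in the builder might look like this (hypothetical; the quoted diff does not show a build() method, so its shape is assumed):

{code}
// Sketch: enforce "at least one sub query" at build time rather than at query time.
public DBQuery build() {
  if (queries.isEmpty()) {
    throw new IllegalStateException("An openTSDB query requires at least one sub query");
  }
  return new DBQuery(this);
}
{code}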


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224949#comment-16224949
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584411
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
--- End diff --

final?


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224946#comment-16224946
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582817
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBStoragePlugin.java
 ---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.drill.common.JSONOptions;
+import org.apache.drill.exec.server.DrillbitContext;
+import org.apache.drill.exec.store.AbstractStoragePlugin;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.openTSDB.client.services.ServiceImpl;
+import org.apache.drill.exec.store.openTSDB.schema.OpenTSDBSchemaFactory;
+
+import java.io.IOException;
+
+public class OpenTSDBStoragePlugin extends AbstractStoragePlugin {
+
+  private final DrillbitContext context;
+
+  private final OpenTSDBStoragePluginConfig engineConfig;
+  private final OpenTSDBSchemaFactory schemaFactory;
+
+  private final ServiceImpl db;
+
+  public OpenTSDBStoragePlugin(OpenTSDBStoragePluginConfig configuration, 
DrillbitContext context, String name) throws IOException {
+this.context = context;
+this.schemaFactory = new OpenTSDBSchemaFactory(this, name);
+this.engineConfig = configuration;
+this.db = new ServiceImpl("http://; + configuration.getConnection());
+  }
+
+  @Override
+  public void start() throws IOException {
--- End diff --

There is no need to override `start` and `close` methods since they have 
the same behavior as parent...


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224928#comment-16224928
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134172
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
--- End diff --

1. If we didn't find the table, it's fine to fail, but why do we return an error 
if the table is empty? Shouldn't we just return an empty result? What does 
`db.getTablesFromDB()` actually return? All tables from the DB?
2. I incorrectly specified the connection to OpenTSDB, and when I ran the query 
`SELECT * FROM openTSDB.(metric=mymetric.stock)` I got a validation error, 
though the real root cause was 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224956#comment-16224956
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147585085
  
--- Diff: 
contrib/storage-opentsdb/src/test/java/org/apache/drill/store/openTSDB/TestDataHolder.java
 ---
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.store.openTSDB;
+
+class TestDataHolder {
--- End diff --

public




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224918#comment-16224918
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133090
  
--- Diff: exec/java-exec/src/test/resources/drill-module.conf ---
@@ -10,7 +10,7 @@ drill: {
 packages += "org.apache.drill.exec.testing",
 packages += "org.apache.drill.exec.rpc.user.security.testing"
   }
-  test.query.printing.silent : false, 
+  test.query.printing.silent : false,
--- End diff --

I guess the change in this file is not required...




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224919#comment-16224919
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133055
  
--- Diff: distribution/pom.xml ---
@@ -190,11 +190,21 @@
 
 
   org.apache.drill.contrib
+  drill-opentsdb-storage
--- End diff --

Why have we added `drill-opentsdb-storage` twice in this pom.xml?




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224921#comment-16224921
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133289
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/OpenTSDB.java
 ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client;
+
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import retrofit2.Call;
+import retrofit2.http.Body;
+import retrofit2.http.GET;
+import retrofit2.http.POST;
+import retrofit2.http.Query;
+
+import java.util.Set;
+
+/**
+ * Client for API requests to openTSDB
+ */
+public interface OpenTSDB {
+
+  /**
+   * Used for getting all metrics names from openTSDB
+   *
+   * @return Set with all tables names
+   */
+  @GET("api/suggest?type=metrics=999")
--- End diff --

Why is the max value 999? What if we have more metrics? If it is required, can 
we make it configurable?
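
One way to make the limit configurable (a sketch only: the `maxMetrics` parameter and the idea of sourcing it from the plugin config are assumptions, not part of the pull request) is to pass it as a Retrofit query parameter instead of hard-coding it in the path:

```java
import retrofit2.Call;
import retrofit2.http.GET;
import retrofit2.http.Query;

import java.util.Set;

public interface OpenTSDB {

  /**
   * Suggest metric names; the result cap is supplied by the caller
   * instead of being hard-coded to 999.
   */
  @GET("api/suggest")
  Call<Set<String>> getAllTablesName(@Query("type") String type,
                                     @Query("max") int maxMetrics);
}
```

The call site would then become something like `client.getAllTablesName("metrics", engineConfig.getMaxMetrics()).execute().body()`, where `getMaxMetrics()` is a hypothetical config getter.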




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224951#comment-16224951
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582647
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
--- End diff --

public?



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224942#comment-16224942
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583241
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/Util.java
 ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.base.Splitter;
+
+import java.util.Map;
+
+public class Util {
+
+  /**
+   * Parse FROM parameters to Map representation
+   *
+   * @param rowData with this syntax (metric=warp.speed.test)
+   * @return Map with params key: metric, value: warp.speed.test
+   */
+  public static Map parseFROMRowData(String rowData) {
--- End diff --

`parseFromRowData`




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224941#comment-16224941
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583175
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBSubScan.java
 ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.StoragePluginConfig;
+import org.apache.drill.exec.physical.base.AbstractBase;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.PhysicalVisitor;
+import org.apache.drill.exec.physical.base.SubScan;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+
+@JsonTypeName("openTSDB-tablet-scan")
+public class OpenTSDBSubScan extends AbstractBase implements SubScan {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(OpenTSDBSubScan.class);
+
+  @JsonProperty
+  public final OpenTSDBStoragePluginConfig storage;
+
+  private final List columns;
+  private final OpenTSDBStoragePlugin openTSDBStoragePlugin;
+  private final List tabletScanSpecList;
+
+  @JsonCreator
+  public OpenTSDBSubScan(@JacksonInject StoragePluginRegistry registry,
+ @JsonProperty("storage") StoragePluginConfig 
storage,
+ @JsonProperty("tabletScanSpecList") 
LinkedList tabletScanSpecList,
+ @JsonProperty("columns") List 
columns) throws ExecutionSetupException {
+super((String) null);
+openTSDBStoragePlugin = (OpenTSDBStoragePlugin) 
registry.getPlugin(storage);
+this.tabletScanSpecList = tabletScanSpecList;
+this.storage = (OpenTSDBStoragePluginConfig) storage;
+this.columns = columns;
+  }
+
+  public OpenTSDBSubScan(OpenTSDBStoragePlugin plugin, 
OpenTSDBStoragePluginConfig config,
+ List tabletInfoList, 
List columns) {
+super((String) null);
+openTSDBStoragePlugin = plugin;
+storage = config;
+this.tabletScanSpecList = tabletInfoList;
+this.columns = columns;
+  }
+
+  @Override
+  public int getOperatorType() {
+return 0;
+  }
+
+  @Override
+  public boolean isExecutable() {
+return false;
+  }
+
+  @Override
+  public PhysicalOperator getNewWithChildren(List 
children) throws ExecutionSetupException {
+Preconditions.checkArgument(children.isEmpty());
+return new OpenTSDBSubScan(openTSDBStoragePlugin, storage, 
tabletScanSpecList, columns);
+  }
+
+  @Override
+  public Iterator iterator() {
+return Collections.emptyIterator();
+  }
+
+  @Override
+  public  T accept(PhysicalVisitor 
physicalVisitor, X value) throws E {
+return physicalVisitor.visitSubScan(this, value);
+  }
+
+  public List getColumns() {
+return columns;
+  }
+
+  public List getTabletScanSpecList() {
+return tabletScanSpecList;
+  }
+
+  @JsonIgnore

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224954#comment-16224954
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584562
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
+
+  public OpenTSDBSchemaFactory(OpenTSDBStoragePlugin plugin, String 
schemaName) {
+this.plugin = plugin;
+this.schemaName = schemaName;
+  }
+
+  @Override
+  public void registerSchemas(SchemaConfig schemaConfig, SchemaPlus 
parent) throws IOException {
+OpenTSDBTables schema = new OpenTSDBTables(schemaName);
+parent.add(schemaName, schema);
+  }
+
+  class OpenTSDBTables extends AbstractSchema {
+private final Map schemaMap = 
Maps.newHashMap();
+
+OpenTSDBTables(String name) {
+  super(Collections.emptyList(), name);
+}
+
+@Override
+public AbstractSchema getSubSchema(String name) {
+  Set tables;
+  if (!schemaMap.containsKey(name)) {
+tables = plugin.getClient().getAllTableNames();
+schemaMap.put(name, new OpenTSDBDatabaseSchema(tables, this, 
name));
+  }
+  return schemaMap.get(name);
+}
+
+@Override
+public Set getSubSchemaNames() {
+  return Collections.emptySet();
+}
+
+@Override
+public Table getTable(String name) {
+  OpenTSDBScanSpec scanSpec = new OpenTSDBScanSpec(name);
+  name = getValidTableName(name);
+  try {
+return new DrillOpenTSDBTable(schemaName, plugin, new 
Schema(plugin.getClient(), name), scanSpec);
+  } catch (Exception e) {
+log.warn("Failure while retrieving openTSDB table {}", name, e);
+return null;
+  }
+}
+
+@Override
+public Set getTableNames() {
+  return plugin.getClient().getAllTableNames();
+}
+
+@Override
+public CreateTableEntry createNewTable(final String tableName, 
List partitionColumns) {
+  return null;
--- End diff --

The parent's behavior is to fail, indicating that table creation is not allowed. I 
guess we can stick to that rather than returning null.
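
A minimal sketch of failing explicitly instead of returning null (the exact exception type and message used by the parent `AbstractSchema`, and the `List<String>` generic elided in the diff, are assumptions; this only illustrates the intent):

```java
@Override
public CreateTableEntry createNewTable(final String tableName, List<String> partitionColumns) {
  // Fail loudly instead of returning null: table creation is not supported
  // by the openTSDB storage plugin.
  throw new UnsupportedOperationException(
      String.format("Table '%s' cannot be created: the openTSDB storage plugin does not support CTAS", tableName));
}
```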



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224948#comment-16224948
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583562
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/query/Query.java
 ---
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.query;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static 
org.apache.drill.exec.store.openTSDB.Constants.SUM_AGGREGATOR;
+
+/**
+ * Query is an abstraction of openTSDB subQuery
+ * and it is integral part of DBQuery
+ * 
+ * Each sub query can retrieve individual or groups of timeseries data,
+ * performing aggregation on each set.
+ */
+public class Query {
+
+  /**
+   * The name of an aggregation function to use.
+   */
+  private String aggregator;
+  /**
+   * The name of a metric stored in the system
+   */
+  private String metric;
+  /**
+   * Whether or not the data should be converted into deltas before 
returning.
+   * This is useful if the metric is a continuously incrementing counter
+   * and you want to view the rate of change between data points.
+   */
+  private String rate;
+  /**
+   * An optional downsampling function to reduce the amount of data 
returned.
+   */
+  private String downsample;
+  /**
+   * To drill down to specific timeseries or group results by tag,
+   * supply one or more map values in the same format as the query string.
+   */
+  private Map tags;
+
+  private Query(Builder builder) {
+this.aggregator = builder.aggregator;
+this.metric = builder.metric;
+this.rate = builder.rate;
+this.downsample = builder.downsample;
+this.tags = builder.tags;
+  }
+
+  public String getAggregator() {
+return aggregator;
+  }
+
+  public String getMetric() {
+return metric;
+  }
+
+  public String getRate() {
+return rate;
+  }
+
+  public String getDownsample() {
+return downsample;
+  }
+
+  public Map getTags() {
+return tags;
+  }
+
+  public static class Builder {
+
+private String aggregator = SUM_AGGREGATOR;
--- End diff --

Aggregation is a required parameter [1], so we should fail and ask the user to 
provide one, or even suggest the list of available aggregators, rather than using 
a made-up default. Taking OpenTSDB specifics into account, it's rather confusing 
to allow the following queries:
```
SELECT * FROM openTSDB.`warp.speed.test`
```

[1] http://opentsdb.net/docs/build/html/api_http/query/index.html
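
A sketch of how the `Builder` could enforce the parameter instead of silently defaulting (the `build()` body, the `AVAILABLE_AGGREGATORS` constant, the slf4j `log` field, and the use of `org.apache.drill.common.exceptions.UserException` here are illustrative assumptions):

```java
public Query build() {
  if (aggregator == null || aggregator.isEmpty()) {
    // Fail fast and tell the user which aggregators OpenTSDB accepts
    throw UserException.validationError()
        .message("Required parameter 'aggregator' is missing. Available aggregators: %s",
            AVAILABLE_AGGREGATORS) // e.g. "sum, avg, min, max, ..."
        .build(log);
  }
  return new Query(this);
}
```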



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224929#comment-16224929
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133035
  
--- Diff: contrib/storage-opentsdb/src/test/resources/logback.xml ---
@@ -0,0 +1,64 @@
+
--- End diff --

Now in Drill we have a common logback-test.xml for all modules; please use it 
instead of creating a separate one.




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224932#comment-16224932
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134712
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
+}
+this.tableIterator = tables.iterator();
+  }
+
+  @Override
+  public int next() {
+try {
+  return processOpenTSDBTablesData();
+} catch (SchemaChangeException e) {
+  log.info(e.toString());
+  return 0;
+}
+  }
+
+  @Override
+  protected boolean isSkipQuery() {
+return super.isSkipQuery();
+  }
+
+  @Override
+  

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224934#comment-16224934
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134878
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
+
+  public OpenTSDBSchemaFactory(OpenTSDBStoragePlugin plugin, String 
schemaName) {
+this.plugin = plugin;
+this.schemaName = schemaName;
+  }
+
+  @Override
+  public void registerSchemas(SchemaConfig schemaConfig, SchemaPlus 
parent) throws IOException {
+OpenTSDBTables schema = new OpenTSDBTables(schemaName);
+parent.add(schemaName, schema);
+  }
+
+  class OpenTSDBTables extends AbstractSchema {
+private final Map schemaMap = 
Maps.newHashMap();
+
+OpenTSDBTables(String name) {
+  super(Collections.emptyList(), name);
+}
+
+@Override
+public AbstractSchema getSubSchema(String name) {
+  Set tables;
+  if (!schemaMap.containsKey(name)) {
+tables = plugin.getClient().getAllTableNames();
+schemaMap.put(name, new OpenTSDBDatabaseSchema(tables, this, 
name));
+  }
+  return schemaMap.get(name);
+}
+
+@Override
+public Set getSubSchemaNames() {
+  return Collections.emptySet();
+}
+
+@Override
+public Table getTable(String name) {
+  OpenTSDBScanSpec scanSpec = new OpenTSDBScanSpec(name);
+  name = getValidTableName(name);
+  try {
+return new DrillOpenTSDBTable(schemaName, plugin, new 
Schema(plugin.getClient(), name), scanSpec);
+  } catch (Exception e) {
+log.warn("Failure while retrieving openTSDB table {}", name, e);
--- End diff --

What will happen if we return null? Will the user get a meaningful error? Maybe 
we should fail here, or at least log this as an error rather than a warning?
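
As an illustration of the "fail here" option (a sketch only; whether failing is preferable to returning null is exactly the open question, and the use of Drill's `UserException` builder here is an assumption):

```java
@Override
public Table getTable(String name) {
  OpenTSDBScanSpec scanSpec = new OpenTSDBScanSpec(name);
  String tableName = getValidTableName(name);
  try {
    return new DrillOpenTSDBTable(schemaName, plugin, new Schema(plugin.getClient(), tableName), scanSpec);
  } catch (Exception e) {
    // Surface the root cause instead of hiding it behind a null table
    throw UserException.dataReadError(e)
        .message("Failure while retrieving openTSDB table %s", tableName)
        .build(log);
  }
}
```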



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224923#comment-16224923
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134226
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List getUnfixedColumnsToSchema() {
+Set tables = getAllMetricsByTags();
+List unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
+  return Collections.emptySet();
+}
+  }
+
+  private Set getTableNames() {
+try {
+  return client.getAllTablesName().execute().body();
+} catch (IOException e) {
+  e.printStackTrace();
+  return Collections.emptySet();
--- End diff --

1. Why do we return an empty collection on error?
2. Why `e.printStackTrace();`?
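
A sketch of the alternative being hinted at: propagate the failure instead of swallowing it with `printStackTrace()` (the choice of `UserException.connectionError` as the category, and the availability of an slf4j `log` field here, are assumptions):

```java
private Set<String> getTableNames() {
  try {
    return client.getAllTablesName().execute().body();
  } catch (IOException e) {
    // Do not swallow the failure; surface it to the user with context
    throw UserException.connectionError(e)
        .message("Cannot connect to the openTSDB server to fetch metric names")
        .build(log);
  }
}
```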



[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224917#comment-16224917
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133068
  
--- Diff: distribution/src/assemble/bin.xml ---
@@ -99,7 +99,9 @@
 org.apache.drill.contrib:drill-format-mapr
 org.apache.drill.contrib:drill-jdbc-storage
 org.apache.drill.contrib:drill-kudu-storage
+org.apache.drill.contrib:drill-opentsdb-storage
--- End diff --

Again, we have `org.apache.drill.contrib:drill-opentsdb-storage` twice in here...




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224935#comment-16224935
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135253
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBStoragePlugin.java
 ---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.drill.common.JSONOptions;
+import org.apache.drill.exec.server.DrillbitContext;
+import org.apache.drill.exec.store.AbstractStoragePlugin;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.openTSDB.client.services.ServiceImpl;
+import org.apache.drill.exec.store.openTSDB.schema.OpenTSDBSchemaFactory;
+
+import java.io.IOException;
+
+public class OpenTSDBStoragePlugin extends AbstractStoragePlugin {
+
+  private final DrillbitContext context;
+
+  private final OpenTSDBStoragePluginConfig engineConfig;
+  private final OpenTSDBSchemaFactory schemaFactory;
+
+  private final ServiceImpl db;
+
+  public OpenTSDBStoragePlugin(OpenTSDBStoragePluginConfig configuration, 
DrillbitContext context, String name) throws IOException {
+this.context = context;
+this.schemaFactory = new OpenTSDBSchemaFactory(this, name);
+this.engineConfig = configuration;
+this.db = new ServiceImpl("http://" + configuration.getConnection());
--- End diff --

Can it be `https`? Maybe the user should be responsible for setting this in the 
config?
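
A sketch of one way to let the user control the scheme (the `withScheme` helper is illustrative): only prepend `http://` when the connection string carries no scheme, so an explicit `https://...` is respected:

```java
public OpenTSDBStoragePlugin(OpenTSDBStoragePluginConfig configuration,
                             DrillbitContext context, String name) throws IOException {
  this.context = context;
  this.schemaFactory = new OpenTSDBSchemaFactory(this, name);
  this.engineConfig = configuration;
  this.db = new ServiceImpl(withScheme(configuration.getConnection()));
}

// Illustrative helper: respect an explicit scheme, default to plain http otherwise.
private static String withScheme(String connection) {
  return connection.startsWith("http://") || connection.startsWith("https://")
      ? connection
      : "http://" + connection;
}
```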




[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224939#comment-16224939
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582988
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBSubScan.java
 ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.StoragePluginConfig;
+import org.apache.drill.exec.physical.base.AbstractBase;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.PhysicalVisitor;
+import org.apache.drill.exec.physical.base.SubScan;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+
+@JsonTypeName("openTSDB-tablet-scan")
+public class OpenTSDBSubScan extends AbstractBase implements SubScan {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(OpenTSDBSubScan.class);
+
+  @JsonProperty
+  public final OpenTSDBStoragePluginConfig storage;
+
+  private final List columns;
+  private final OpenTSDBStoragePlugin openTSDBStoragePlugin;
+  private final List tabletScanSpecList;
+
+  @JsonCreator
+  public OpenTSDBSubScan(@JacksonInject StoragePluginRegistry registry,
+ @JsonProperty("storage") StoragePluginConfig 
storage,
+ @JsonProperty("tabletScanSpecList") 
LinkedList tabletScanSpecList,
+ @JsonProperty("columns") List 
columns) throws ExecutionSetupException {
+super((String) null);
+openTSDBStoragePlugin = (OpenTSDBStoragePlugin) 
registry.getPlugin(storage);
+this.tabletScanSpecList = tabletScanSpecList;
+this.storage = (OpenTSDBStoragePluginConfig) storage;
--- End diff --

Consider using `@JsonProperty("storage") OpenTSDBStoragePluginConfig storage` 
instead of casting.
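
A sketch of the suggested signature (the generic type parameters are assumptions filled in from the surrounding imports, since the quoted diff lost them):

```
// Sketch only: bind the concrete config type directly so no cast is needed.
@JsonCreator
public OpenTSDBSubScan(@JacksonInject StoragePluginRegistry registry,
                       @JsonProperty("storage") OpenTSDBStoragePluginConfig storage,
                       @JsonProperty("tabletScanSpecList") LinkedList<OpenTSDBSubScanSpec> tabletScanSpecList,
                       @JsonProperty("columns") List<SchemaPath> columns) throws ExecutionSetupException {
  super((String) null);
  openTSDBStoragePlugin = (OpenTSDBStoragePlugin) registry.getPlugin(storage);
  this.tabletScanSpecList = tabletScanSpecList;
  this.storage = storage;   // no cast required any more
  this.columns = columns;   // remaining assignments as in the PR
}
```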


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224925#comment-16224925
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133808
  
--- Diff: 
contrib/storage-opentsdb/src/main/resources/bootstrap-storage-plugins.json ---
@@ -0,0 +1,9 @@
+{
+  "storage": {
+openTSDB: {
+  type: "openTSDB",
+  connection: "1.2.3.4:4242",
+  enabled: true
--- End diff --

1. Can the user have a secure connection?
2. I guess it would be better if this storage plugin were disabled by default.
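
A sketch of what the bootstrap entry could look like with the plugin disabled by default and the scheme made explicit (host and port are placeholders):

```
{
  "storage": {
    "openTSDB": {
      "type": "openTSDB",
      "connection": "http://localhost:4242",
      "enabled": false
    }
  }
}
```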


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224937#comment-16224937
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135479
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/DrillOpenTSDBTable.java
 ---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.Lists;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.sql.type.SqlTypeName;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+
+import java.util.List;
+
+public class DrillOpenTSDBTable extends DynamicDrillTable {
+
+  private final Schema schema;
+
+  public DrillOpenTSDBTable(String storageEngineName, 
OpenTSDBStoragePlugin plugin, Schema schema, OpenTSDBScanSpec scanSpec) {
+super(plugin, storageEngineName, scanSpec);
+this.schema = schema;
+  }
+
+  @Override
+  public RelDataType getRowType(final RelDataTypeFactory typeFactory) {
+List names = Lists.newArrayList();
+List types = Lists.newArrayList();
+convertToRelDataType(typeFactory, names, types);
+return typeFactory.createStructType(types, names);
+  }
+
+  private void convertToRelDataType(RelDataTypeFactory typeFactory, 
List names, List types) {
+for (ColumnDTO column : schema.getColumns()) {
+  names.add(column.getColumnName());
+  RelDataType type = getSqlTypeFromOpenTSDBType(typeFactory, 
column.getColumnType());
+  type = typeFactory.createTypeWithNullability(type, 
column.isNullable());
+  types.add(type);
+}
+  }
+
+  private RelDataType getSqlTypeFromOpenTSDBType(RelDataTypeFactory 
typeFactory, OpenTSDBTypes type) {
+switch (type) {
+  case STRING:
+return typeFactory.createSqlType(SqlTypeName.VARCHAR, 
Integer.MAX_VALUE);
+  case DOUBLE:
+return typeFactory.createSqlType(SqlTypeName.DOUBLE);
+  case TIMESTAMP:
+return typeFactory.createSqlType(SqlTypeName.TIMESTAMP);
+  default:
+throw new UnsupportedOperationException("Unsupported type.");
--- End diff --

I guess it's better to add an error message indicating which type is 
unsupported and which types are currently supported.
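
For example, a sketch of the same method with a more descriptive default branch (the supported types listed are the ones handled in the switch above):

```
private RelDataType getSqlTypeFromOpenTSDBType(RelDataTypeFactory typeFactory, OpenTSDBTypes type) {
  switch (type) {
    case STRING:
      return typeFactory.createSqlType(SqlTypeName.VARCHAR, Integer.MAX_VALUE);
    case DOUBLE:
      return typeFactory.createSqlType(SqlTypeName.DOUBLE);
    case TIMESTAMP:
      return typeFactory.createSqlType(SqlTypeName.TIMESTAMP);
    default:
      // name the offending type and the ones the reader can handle
      throw new UnsupportedOperationException(String.format(
          "Unsupported OpenTSDB type: %s. Supported types are: STRING, DOUBLE, TIMESTAMP.", type));
  }
}
```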


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224933#comment-16224933
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135112
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBGroupScan.java
 ---
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ListMultimap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.EndpointAffinity;
+import org.apache.drill.exec.physical.base.AbstractGroupScan;
+import org.apache.drill.exec.physical.base.GroupScan;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.ScanStats;
+import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import 
org.apache.drill.exec.store.openTSDB.OpenTSDBSubScan.OpenTSDBSubScanSpec;
+import org.apache.drill.exec.store.schedule.AffinityCreator;
+import org.apache.drill.exec.store.schedule.AssignmentCreator;
+import org.apache.drill.exec.store.schedule.CompleteWork;
+import org.apache.drill.exec.store.schedule.EndpointByteMap;
+import org.apache.drill.exec.store.schedule.EndpointByteMapImpl;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+@JsonTypeName("openTSDB-scan")
+public class OpenTSDBGroupScan extends AbstractGroupScan {
+
+  private static final long DEFAULT_TABLET_SIZE = 1000;
+
+  private OpenTSDBStoragePluginConfig storagePluginConfig;
+  private OpenTSDBScanSpec openTSDBScanSpec;
+  private OpenTSDBStoragePlugin storagePlugin;
+
+  private ListMultimap assignments;
+  private List columns;
+  private List openTSDBWorkList = Lists.newArrayList();
+  private List affinities;
+
+  private boolean filterPushedDown = false;
+
+  @JsonCreator
+  public OpenTSDBGroupScan(@JsonProperty("openTSDBScanSpec") 
OpenTSDBScanSpec openTSDBScanSpec,
+   @JsonProperty("storage") 
OpenTSDBStoragePluginConfig openTSDBStoragePluginConfig,
+   @JsonProperty("columns") List 
columns,
+   @JacksonInject StoragePluginRegistry 
pluginRegistry) throws IOException, ExecutionSetupException {
+this((OpenTSDBStoragePlugin) 
pluginRegistry.getPlugin(openTSDBStoragePluginConfig), openTSDBScanSpec, 
columns);
+  }
+
+  public OpenTSDBGroupScan(OpenTSDBStoragePlugin storagePlugin,
+   OpenTSDBScanSpec scanSpec, List 
columns) {
+super((String) null);
+this.storagePlugin = storagePlugin;
+this.storagePluginConfig = storagePlugin.getConfig();
+this.openTSDBScanSpec = scanSpec;
+this.columns = columns == null || columns.size() == 0 ? ALL_COLUMNS : 
columns;
+init();
+  }
+
+  /**
+   * Private constructor, used for cloning.
+   *
+   * @param that The OpenTSDBGroupScan to clone
+   */
+  private OpenTSDBGroupScan(OpenTSDBGroupScan that) {
+super((String) null);
+this.columns = that.columns;
 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224936#comment-16224936
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582378
  
--- Diff: contrib/storage-opentsdb/README.md ---
@@ -0,0 +1 @@
+# drill-storage-openTSDB
--- End diff --

It would be nice if you added some information on how to query the data, etc., 
similar to what is in the pull request description.


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224930#comment-16224930
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134207
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List getUnfixedColumnsToSchema() {
+Set tables = getAllMetricsByTags();
+List unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
+  return Collections.emptySet();
+}
+  }
+
+  private Set getTableNames() {
+try {
+  return client.getAllTablesName().execute().body();
+} catch (IOException e) {
+  e.printStackTrace();
+  return Collections.emptySet();
+}
+  }
+
+  private Set getAllTablesWithSpecialTag(DBQuery base) throws 
IOException {
+return 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224931#comment-16224931
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135440
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBGroupScan.java
 ---
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ListMultimap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.EndpointAffinity;
+import org.apache.drill.exec.physical.base.AbstractGroupScan;
+import org.apache.drill.exec.physical.base.GroupScan;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.ScanStats;
+import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import 
org.apache.drill.exec.store.openTSDB.OpenTSDBSubScan.OpenTSDBSubScanSpec;
+import org.apache.drill.exec.store.schedule.AffinityCreator;
+import org.apache.drill.exec.store.schedule.AssignmentCreator;
+import org.apache.drill.exec.store.schedule.CompleteWork;
+import org.apache.drill.exec.store.schedule.EndpointByteMap;
+import org.apache.drill.exec.store.schedule.EndpointByteMapImpl;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+@JsonTypeName("openTSDB-scan")
+public class OpenTSDBGroupScan extends AbstractGroupScan {
+
+  private static final long DEFAULT_TABLET_SIZE = 1000;
+
+  private OpenTSDBStoragePluginConfig storagePluginConfig;
+  private OpenTSDBScanSpec openTSDBScanSpec;
+  private OpenTSDBStoragePlugin storagePlugin;
+
+  private ListMultimap assignments;
+  private List columns;
+  private List openTSDBWorkList = Lists.newArrayList();
+  private List affinities;
+
+  private boolean filterPushedDown = false;
+
+  @JsonCreator
+  public OpenTSDBGroupScan(@JsonProperty("openTSDBScanSpec") 
OpenTSDBScanSpec openTSDBScanSpec,
+   @JsonProperty("storage") 
OpenTSDBStoragePluginConfig openTSDBStoragePluginConfig,
+   @JsonProperty("columns") List 
columns,
--- End diff --

How is this class deserialized if you don't have getters for `storage` and 
`columns`?
One option to test serde is to create a physical plan for your query and then 
submit that physical plan (a similar approach was used in 
https://github.com/apache/drill/pull/1014).
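
A sketch of the getters Jackson would need for serialization (names and annotations are assumptions, following the usual Drill group-scan pattern):

```
@JsonProperty("storage")
public OpenTSDBStoragePluginConfig getStorageConfig() {
  return storagePluginConfig;
}

@JsonProperty("columns")
public List<SchemaPath> getColumns() {
  return columns;
}
```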


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224920#comment-16224920
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134773
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
+}
+this.tableIterator = tables.iterator();
+  }
+
+  @Override
+  public int next() {
+try {
+  return processOpenTSDBTablesData();
+} catch (SchemaChangeException e) {
+  log.info(e.toString());
--- End diff --

I guess we should fail on `SchemaChangeException`, indicating we don't 
support it, rather than returning 0?
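
A sketch of what failing could look like (`UserException` is already imported in this reader; the exact builder and wording are assumptions):

```
@Override
public int next() {
  try {
    return processOpenTSDBTablesData();
  } catch (SchemaChangeException e) {
    // fail the query instead of silently returning 0 rows
    throw UserException.unsupportedError(e)
        .message("Schema changes are not supported by the openTSDB record reader")
        .build(log);
  }
}
```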


> OpenTSDB storage plugin
> 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224924#comment-16224924
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133164
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/Constants.java
 ---
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+public class Constants {
--- End diff --

interface?
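
A sketch of the suggestion (constant names are taken from the static imports elsewhere in this PR; the values shown are placeholders):

```
// Fields of an interface are implicitly public static final, so the
// constants holder needs no constructor and cannot be instantiated.
public interface Constants {
  String METRIC = "metric";
  String AGGREGATOR = "aggregator";
  String DOWNSAMPLE = "downsample";
  String DEFAULT_TIME = "47y-ago";   // placeholder default, not taken from the PR
}
```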


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224945#comment-16224945
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583342
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/query/DBQuery.java
 ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.query;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.DEFAULT_TIME;
+
+/**
+ * DBQuery is an abstraction of an openTSDB query,
+ * that used for extracting data from the storage system by POST request 
to DB.
+ * 
+ * An OpenTSDB query requires at least one sub query,
+ * a means of selecting which time series should be included in the result 
set.
+ */
+public class DBQuery {
+
+  /**
+   * The start time for the query. This can be a relative or absolute 
timestamp.
+   */
+  private String start;
+  /**
+   * One or more sub subQueries used to select the time series to return.
+   */
+  private Set queries;
+
+  private DBQuery(Builder builder) {
+this.start = builder.start;
+this.queries = builder.queries;
+  }
+
+  public String getStart() {
+return start;
+  }
+
+  public Set getQueries() {
+return queries;
+  }
+
+  public static class Builder {
+
+private String start = DEFAULT_TIME;
--- End diff --

According to the documentation [1], the start time is a required parameter, so 
we should not use a made-up default but rather fail and ask the user to provide 
a start time.

[1] http://opentsdb.net/docs/build/html/api_http/query/index.html
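
A sketch of failing fast in the builder when no start time was supplied (the `build()` method and exception type are assumptions; only the `start` field comes from the quoted diff):

```
public DBQuery build() {
  if (start == null || start.isEmpty()) {
    throw new IllegalArgumentException(
        "'start' is required by the openTSDB /api/query endpoint; please provide a start time");
  }
  return new DBQuery(this);
}
```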


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224927#comment-16224927
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135322
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBGroupScan.java
 ---
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ListMultimap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.EndpointAffinity;
+import org.apache.drill.exec.physical.base.AbstractGroupScan;
+import org.apache.drill.exec.physical.base.GroupScan;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.ScanStats;
+import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import 
org.apache.drill.exec.store.openTSDB.OpenTSDBSubScan.OpenTSDBSubScanSpec;
+import org.apache.drill.exec.store.schedule.AffinityCreator;
+import org.apache.drill.exec.store.schedule.AssignmentCreator;
+import org.apache.drill.exec.store.schedule.CompleteWork;
+import org.apache.drill.exec.store.schedule.EndpointByteMap;
+import org.apache.drill.exec.store.schedule.EndpointByteMapImpl;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+@JsonTypeName("openTSDB-scan")
+public class OpenTSDBGroupScan extends AbstractGroupScan {
+
+  private static final long DEFAULT_TABLET_SIZE = 1000;
+
+  private OpenTSDBStoragePluginConfig storagePluginConfig;
+  private OpenTSDBScanSpec openTSDBScanSpec;
+  private OpenTSDBStoragePlugin storagePlugin;
+
+  private ListMultimap assignments;
+  private List columns;
+  private List openTSDBWorkList = Lists.newArrayList();
+  private List affinities;
+
+  private boolean filterPushedDown = false;
+
+  @JsonCreator
+  public OpenTSDBGroupScan(@JsonProperty("openTSDBScanSpec") 
OpenTSDBScanSpec openTSDBScanSpec,
+   @JsonProperty("storage") 
OpenTSDBStoragePluginConfig openTSDBStoragePluginConfig,
+   @JsonProperty("columns") List 
columns,
+   @JacksonInject StoragePluginRegistry 
pluginRegistry) throws IOException, ExecutionSetupException {
+this((OpenTSDBStoragePlugin) 
pluginRegistry.getPlugin(openTSDBStoragePluginConfig), openTSDBScanSpec, 
columns);
+  }
+
+  public OpenTSDBGroupScan(OpenTSDBStoragePlugin storagePlugin,
+   OpenTSDBScanSpec scanSpec, List 
columns) {
+super((String) null);
+this.storagePlugin = storagePlugin;
+this.storagePluginConfig = storagePlugin.getConfig();
+this.openTSDBScanSpec = scanSpec;
+this.columns = columns == null || columns.size() == 0 ? ALL_COLUMNS : 
columns;
+init();
+  }
+
+  /**
+   * Private constructor, used for cloning.
+   *
+   * @param that The OpenTSDBGroupScan to clone
+   */
+  private OpenTSDBGroupScan(OpenTSDBGroupScan that) {
+super((String) null);
+this.columns = that.columns;
 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224940#comment-16224940
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135774
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBScanSpec.java
 ---
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class OpenTSDBScanSpec {
+
+  private final String tableName;
+
+  @JsonCreator
+  public OpenTSDBScanSpec(@JsonProperty("tableName") String tableName) {
+this.tableName = tableName;
+  }
+
+  public String getTableName() {
+return tableName;
+  }
--- End diff --

If you do an explain plan for the query:
```
0: jdbc:drill:drillbit=localhost> explain plan for SELECT * FROM 
openTSDB.`(metric=mymetric.stock)`;
+--+--+
| text | json |
+--+--+
| 00-00Screen
00-01  Project(metric=[$0], aggregate tags=[$1], timestamp=[$2], 
aggregated value=[$3], symbol=[$4])
00-02Scan(groupscan=[OpenTSDBGroupScan 
[OpenTSDBScanSpec=org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec@57979642]])
 | {
  "head" : {
"version" : 1,
"generator" : {
  "type" : "ExplainHandler",
  "info" : ""
},
"type" : "APACHE_DRILL_PHYSICAL",
"options" : [ ],
"queue" : 0,
"hasResourcePlan" : false,
"resultMode" : "EXEC"
  },
  "graph" : [ {
"pop" : "openTSDB-scan",
"@id" : 2,
"openTSDBScanSpec" : {
  "tableName" : "(metric=mymetric.stock)"
},
"cost" : 0.0
  }, {
"pop" : "project",
"@id" : 1,
"exprs" : [ {
  "ref" : "`metric`",
  "expr" : "`metric`"
}, {
  "ref" : "`aggregate tags`",
  "expr" : "`aggregate tags`"
}, {
  "ref" : "`timestamp`",
  "expr" : "`timestamp`"
}, {
  "ref" : "`aggregated value`",
  "expr" : "`aggregated value`"
}, {
  "ref" : "`symbol`",
  "expr" : "`symbol`"
} ],
"child" : 2,
"outputProj" : true,
"initialAllocation" : 100,
"maxAllocation" : 100,
"cost" : 1.0
  }, {
"pop" : "screen",
"@id" : 0,
"child" : 1,
"initialAllocation" : 100,
"maxAllocation" : 100,
"cost" : 1.0
  } ]
} |
+--+--+
```
you'll see 
`OpenTSDBScanSpec=org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec@57979642`.
I guess we might want to add a `toString()` method to this class.
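
A sketch of such a `toString()` inside `OpenTSDBScanSpec`, so the EXPLAIN output shows the table name instead of the object hash:

```
@Override
public String toString() {
  return "OpenTSDBScanSpec [tableName=" + tableName + "]";
}
```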


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224955#comment-16224955
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584976
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
+
+  public OpenTSDBSchemaFactory(OpenTSDBStoragePlugin plugin, String 
schemaName) {
+this.plugin = plugin;
+this.schemaName = schemaName;
+  }
+
+  @Override
+  public void registerSchemas(SchemaConfig schemaConfig, SchemaPlus 
parent) throws IOException {
+OpenTSDBTables schema = new OpenTSDBTables(schemaName);
+parent.add(schemaName, schema);
+  }
+
+  class OpenTSDBTables extends AbstractSchema {
+private final Map schemaMap = 
Maps.newHashMap();
+
+OpenTSDBTables(String name) {
+  super(Collections.emptyList(), name);
+}
+
+@Override
+public AbstractSchema getSubSchema(String name) {
+  Set tables;
--- End diff --

I don't think OpenTSDB has subschemas. For example, in `dfs.tmp.table`, `tmp` is 
a subschema. In OpenTSDB we refer to a metric as a table, not as a subschema. 
This method should return an empty collection, the same as 
`getSubSchemaNames()`.
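
A sketch of the suggested contract (`Collections` and `Set` are already imported in the quoted file):

```
@Override
public AbstractSchema getSubSchema(String name) {
  return null;                       // OpenTSDB exposes metrics as tables, not subschemas
}

@Override
public Set<String> getSubSchemaNames() {
  return Collections.emptySet();     // keep both methods consistent
}
```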


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> 

[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224922#comment-16224922
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134351
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
--- End diff --

final?


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: features
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from (metric=warp.speed.test) table as a previous query, 
> but with alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224926#comment-16224926
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135302
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map<OpenTSDBTypes, MinorType> TYPES;
+
+  private Service db;
+
+  private Iterator<MetricDTO> tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List<SchemaPath> projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set<MetricDTO> tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
+}
+this.tableIterator = tables.iterator();
+  }
+
+  @Override
+  public int next() {
+try {
+  return processOpenTSDBTablesData();
+} catch (SchemaChangeException e) {
+  log.info(e.toString());
+  return 0;
+}
+  }
+
+  @Override
+  protected boolean isSkipQuery() {
+return super.isSkipQuery();
+  }
+
+  @Override
+  

[jira] [Updated] (DRILL-5768) Drill planner should not allow select * with group by clause

2017-10-30 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5768:

Affects Version/s: 1.11.0

> Drill planner should not allow select * with group by clause
> ---
>
> Key: DRILL-5768
> URL: https://issues.apache.org/jira/browse/DRILL-5768
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Jinfeng Ni
>Assignee: Roman Kulyk
>
> The following query should not be allowed by the Drill planner.
> {code}
> select * from cp.`tpch/nation.parquet` group by n_regionkey;
> ++
> | *  |
> ++
> | 0  |
> | 1  |
> | 4  |
> | 3  |
> | 2  |
> ++
> {code}
> However, Drill allows such a query to run and, even worse, the result is 
> incorrect. It would make sense to block this type of query.





[jira] [Assigned] (DRILL-5768) Drill planner should not allow select * with group by clause

2017-10-30 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva reassigned DRILL-5768:
---

Assignee: Roman Kulyk  (was: Jinfeng Ni)

> Drill planner should not allow select * with group by clause
> ---
>
> Key: DRILL-5768
> URL: https://issues.apache.org/jira/browse/DRILL-5768
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Jinfeng Ni
>Assignee: Roman Kulyk
>
> The following query should not be allowed by the Drill planner.
> {code}
> select * from cp.`tpch/nation.parquet` group by n_regionkey;
> ++
> | *  |
> ++
> | 0  |
> | 1  |
> | 4  |
> | 3  |
> | 2  |
> ++
> {code}
> However, Drill allows such a query to run and, even worse, the result is 
> incorrect. It would make sense to block this type of query.





[jira] [Commented] (DRILL-5768) Drill planner should not allow select * with group by clause

2017-10-30 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224886#comment-16224886
 ] 

Arina Ielchiieva commented on DRILL-5768:
-

The issue of banning star in queries with GROUP BY was resolved in Calcite 1.5 
(https://issues.apache.org/jira/browse/CALCITE-546).

The root cause is that 
[AggChecker|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/validate/AggChecker.java#L93]
 does not do full validation and exits too early.
The key change was made in how star is represented in 
[SqlIdentifier|https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql/SqlIdentifier.java#L359].

I suggest we wait for the Calcite upgrade.
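
For illustration only, the check below shows the kind of rule being discussed: reject a 
star select item when a GROUP BY clause is present. It is not the Calcite fix referenced 
above (that change lives in Calcite's validator); the StarWithGroupByCheck class is 
invented for this sketch, and it only assumes Calcite's public SqlSelect/SqlIdentifier 
API (getSelectList, getGroup, isStar).

{code}
import org.apache.calcite.sql.SqlIdentifier;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.SqlSelect;

// Illustrative sketch, not the actual Calcite/Drill fix.
public class StarWithGroupByCheck {

  /** Returns true if the SELECT list contains a star while a GROUP BY clause is present. */
  public static boolean hasStarWithGroupBy(SqlSelect select) {
    if (select.getGroup() == null || select.getGroup().size() == 0) {
      return false;  // no GROUP BY clause, nothing to reject
    }
    for (SqlNode item : select.getSelectList()) {
      if (item instanceof SqlIdentifier && ((SqlIdentifier) item).isStar()) {
        return true;  // "*" (or "t.*") appears alongside GROUP BY
      }
    }
    return false;
  }
}
{code}

A query such as the one in the description (select * from cp.`tpch/nation.parquet` group 
by n_regionkey) would be flagged by a check of this shape instead of silently returning 
only the grouping column.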

> Drill planner should not allow select * with group by clause
> ---
>
> Key: DRILL-5768
> URL: https://issues.apache.org/jira/browse/DRILL-5768
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Jinfeng Ni
>Assignee: Jinfeng Ni
>
> The following query should not be allowed by the Drill planner.
> {code}
> select * from cp.`tpch/nation.parquet` group by n_regionkey;
> ++
> | *  |
> ++
> | 0  |
> | 1  |
> | 4  |
> | 3  |
> | 2  |
> ++
> {code}
> However, Drill allows such a query to run and, even worse, the result is 
> incorrect. It would make sense to block this type of query.





[jira] [Commented] (DRILL-5910) Logging exception when custom AuthenticatorFactory not found

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224799#comment-16224799
 ] 

ASF GitHub Bot commented on DRILL-5910:
---

Github user vladimirtkach commented on the issue:

https://github.com/apache/drill/pull/1013
  
@vrozov made the changes, please review.


> Logging exception when  custom AuthenticatorFactory not found
> -
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
> Fix For: 1.12.0
>
>
> We need to log the exception when any of the custom AuthenticatorFactory classes fails to 
> be instantiated in the ClientAuthenticatorProvider constructor. We are doing this 
> to allow Drill to use the other available AuthenticatorFactory implementations.
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit





[jira] [Updated] (DRILL-5910) Logging exception when custom AuthenticatorFactory not found

2017-10-30 Thread Volodymyr Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Volodymyr Tkach updated DRILL-5910:
---
Summary: Logging exception when  custom AuthenticatorFactory not found  
(was: Logging exception when  AuthenticatorFactory not found)

> Logging exception when  custom AuthenticatorFactory not found
> -
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
> Fix For: 1.12.0
>
>
> We need to log the exception when any of the custom AuthenticatorFactory classes fails to 
> be instantiated in the ClientAuthenticatorProvider constructor. We are doing this 
> to allow Drill to use the other available AuthenticatorFactory implementations.
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit





[jira] [Commented] (DRILL-5910) Logging exception when AuthenticatorFactory not found

2017-10-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16224571#comment-16224571
 ] 

ASF GitHub Bot commented on DRILL-5910:
---

Github user vladimirtkach commented on a diff in the pull request:

https://github.com/apache/drill/pull/1013#discussion_r147655905
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/ClientAuthenticatorProvider.java
 ---
@@ -57,17 +57,17 @@ private ClientAuthenticatorProvider() {
 
 // then, custom factories
 if (customFactories != null) {
-  try {
-final String[] factories = customFactories.split(",");
-for (final String factory : factories) {
+  final String[] factories = customFactories.split(",");
+  for (final String factory : factories) {
+try {
   final Class<?> clazz = Class.forName(factory);
   if (AuthenticatorFactory.class.isAssignableFrom(clazz)) {
 final AuthenticatorFactory instance = (AuthenticatorFactory) 
clazz.newInstance();
 authFactories.put(instance.getSimpleName(), instance);
   }
+} catch (final ClassNotFoundException | IllegalAccessException | 
InstantiationException e) {
--- End diff --

A moot point; I think that catching three specific exceptions is better than one 
general exception.
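
The behaviour under review is easier to see in isolation. The sketch below is 
illustrative only: the LenientFactoryLoader class and its Registrable stand-in for 
AuthenticatorFactory are invented here, but the per-class-name try/catch mirrors the 
structure in the diff, so that one factory that cannot be loaded is logged and skipped 
while the rest stay usable.

{code}
import java.util.HashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch of the log-and-continue loading loop; not Drill's actual class.
public class LenientFactoryLoader {
  private static final Logger logger = LoggerFactory.getLogger(LenientFactoryLoader.class);

  /** Stand-in for AuthenticatorFactory in this sketch. */
  public interface Registrable {
    String name();
  }

  public static Map<String, Registrable> load(String commaSeparatedClassNames) {
    final Map<String, Registrable> loaded = new HashMap<>();
    for (final String className : commaSeparatedClassNames.split(",")) {
      try {
        final Class<?> clazz = Class.forName(className.trim());
        if (Registrable.class.isAssignableFrom(clazz)) {
          final Registrable instance = (Registrable) clazz.getDeclaredConstructor().newInstance();
          loaded.put(instance.name(), instance);
        }
      } catch (final ReflectiveOperationException e) {
        // A bad entry (for example a class missing from the classpath, as in the
        // reproduction steps) is logged and skipped, so the remaining factories load.
        logger.warn("Skipping custom factory '{}': {}", className, e.toString());
      }
    }
    return loaded;
  }
}
{code}

With -Ddrill.customAuthFactories set as in the reproduction steps, the property value 
would be fed to such a loop, e.g. load(System.getProperty("drill.customAuthFactories")). 
The diff keeps three specific exception types in the catch clause rather than the 
broader ReflectiveOperationException used above; either way the log-and-continue 
behaviour is preserved.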


> Logging exception when  AuthenticatorFactory not found
> --
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
> Fix For: 1.12.0
>
>
> We need to log the exception when any of the custom AuthenticatorFactory classes fails to 
> be instantiated in the ClientAuthenticatorProvider constructor. We are doing this 
> to allow Drill to use the other available AuthenticatorFactory implementations.
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit





[jira] [Updated] (DRILL-5910) Logging exception when AuthenticatorFactory not found

2017-10-30 Thread Volodymyr Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Volodymyr Tkach updated DRILL-5910:
---
Description: 
We need to log the exception when any of the custom AuthenticatorFactory classes fails to 
be instantiated in the ClientAuthenticatorProvider constructor. We are doing this 
to allow Drill to use the other available AuthenticatorFactory implementations.
Steps to reproduce:
1) Configure plain authentication
2) Add 
-Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
 (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
3) Run sqlline and connect to drillbit

  was:
We need to log the exception when any of the custom *AuthenticatorFactory*s fails 
to be instantiated in the ClientAuthenticatorProvider constructor. We are doing 
this to allow Drill to use the other available *AuthenticatorFactory*(s).
Steps to reproduce:
1) Configure plain authentication
2) Add 
-Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
 (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
3) Run sqlline and connect to drillbit


> Logging exception when  AuthenticatorFactory not found
> --
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
> Fix For: 1.12.0
>
>
> We need to log the exception when any of the custom AuthenticatorFactory classes fails to 
> be instantiated in the ClientAuthenticatorProvider constructor. We are doing this 
> to allow Drill to use the other available AuthenticatorFactory implementations.
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit





[jira] [Updated] (DRILL-5910) Logging exception when AuthenticatorFactory not found

2017-10-30 Thread Volodymyr Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Volodymyr Tkach updated DRILL-5910:
---
Description: 
We need to log the exception when any of the custom *AuthenticatorFactory*s fails 
to be instantiated in the ClientAuthenticatorProvider constructor. We are doing 
this to allow Drill to use the other available *AuthenticatorFactory*(s).
Steps to reproduce:
1) Configure plain authentication
2) Add 
-Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
 (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
3) Run sqlline and connect to drillbit

  was:
We need to add the factory name to the exception message when ClassNotFoundException 
is caught and a DrillRuntimeException is then re-thrown in the 
ClientAuthenticatorProvider constructor.
Steps to reproduce:
1) Configure plain authentication
2) Add 
-Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
 (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
3) Run sqlline and connect to drillbit

Summary: Logging exception when  AuthenticatorFactory not found  (was: 
ClassNotFoundException message enhancement )

> Logging exception when  AuthenticatorFactory not found
> --
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
> Fix For: 1.12.0
>
>
> We need to log the exception when any of the custom *AuthenticatorFactory*s fails 
> to be instantiated in the ClientAuthenticatorProvider constructor. We are doing 
> this to allow Drill to use the other available *AuthenticatorFactory*(s).
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit





[jira] [Updated] (DRILL-5910) ClassNotFoundException message enhancement

2017-10-30 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5910:

Labels:   (was: ready-to-commit)

> ClassNotFoundException message enhancement 
> ---
>
> Key: DRILL-5910
> URL: https://issues.apache.org/jira/browse/DRILL-5910
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>Priority: Minor
> Fix For: 1.12.0
>
>
> We need to add the factory name to the exception message when ClassNotFoundException 
> is caught and a DrillRuntimeException is then re-thrown in the 
> ClientAuthenticatorProvider constructor.
> Steps to reproduce:
> 1) Configure plain authentication
> 2) Add 
> -Ddrill.customAuthFactories=org.apache.drill.exec.rpc.security.maprsasl.MapRSaslFactory
>  (or another class that is not present in the classpath) to SQLLINE_JAVA_OPTS.
> 3) Run sqlline and connect to drillbit


