[GitHub] drill pull request #996: DRILL-5878: TableNotFound exception is being report...

2017-10-30 Thread HanumathRao
Github user HanumathRao commented on a diff in the pull request:

https://github.com/apache/drill/pull/996#discussion_r147896927
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestFileSelection.java
 ---
@@ -63,4 +63,17 @@ public void testEmptyFolderThrowsTableNotFound() throws 
Exception {
 }
   }
 
--- End diff --

done


---


[jira] [Created] (DRILL-5915) Streaming aggregate with limit query does not return

2017-10-30 Thread Boaz Ben-Zvi (JIRA)
Boaz Ben-Zvi created DRILL-5915:
---

 Summary: Streaming aggregate with limit query does not return
 Key: DRILL-5915
 URL: https://issues.apache.org/jira/browse/DRILL-5915
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Affects Versions: 1.11.0
Reporter: Boaz Ben-Zvi


Reading a 1M-row table, in embedded mode, using sort + streaming_aggr -- the 
work completes, but the query does not return (see the attached profile).

{code}
alter session set `planner.enable_hashagg` = false;

select b.g, b.s from (select gby_int32, gby_date g, gby_int32_rand, 
sum(int32_field) s from dfs.`/data/PARQUET-1M.parquet` group by gby_int32, 
gby_date, gby_int32_rand) b limit 30;
{code}







Re: Drill SASL Forward Compatibility

2017-10-30 Thread Laurent Goujon
Regarding DRILL-5582, I see that fix as a breakage of the work done to maintain
compatibility for a newer client connecting to an older version of the
server. Put differently: the current (master) client no longer connects
to a server that does not support SASL (<=1.9). Note that the client could
detect whether the server supports SASL, since it is advertised in the
supported_methods field of the BitToUserHandshake, and that would restore
compatibility; but it seems the fix was done in response to a potential
security flaw (although I have to admit I am not sure what issue it prevents,
since it is still possible for a MITM to intercept all traffic between a
client and a server).
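
A rough sketch of the client-side check described above (a minimal sketch; the
accessor below is an assumption, not the exact field name in the generated
BitToUserHandshake class):

import java.util.List;

final class SaslDetection {
  // A pre-SASL (<= 1.9) server advertises no authentication mechanisms in its
  // handshake, so an empty list would mean "fall back to the legacy, non-SASL
  // connection path" instead of failing the connection outright.
  static boolean serverSpeaksSasl(List<String> advertisedMechanisms) {
    return advertisedMechanisms != null && !advertisedMechanisms.isEmpty();
  }
}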

Laurent

On Mon, Oct 30, 2017 at 5:18 PM, Sorabh Hamirwasia 
wrote:

> Hi All,
>
> We recently added a check (as part of DRILL-5582
> <https://issues.apache.org/jira/browse/DRILL-5582>) on the DrillClient side to
> enforce that if the client showed intent for authentication and the Drillbit
> says it doesn't require authentication, then the connection will fail with a
> proper error message.
>
>
> With this change we found a hidden issue w.r.t. forward compatibility of
> Drill. New clients running on 1.11+ authenticating to an older Drillbit
> running 1.10 are treated as clients running without any SASL support
> (i.e. <= 1.9). The root cause for this issue is as follows:
>
>
> In 1.10 a new field SASL_SUPPORT was introduced in the handshake message
> between DrillClient and Drillbit. The supported values for that field are:
>
>
> Drill 1.10: [1]
>
>
> enum SASL_SUPPORT {
> UNKNOWN_SASL_SUPPORT = 0;
> SASL_AUTH = 1;
> }
>
>
> Drill 1.11/1.12: [2]
>
>
> enum SASL_SUPPORT {
> UNKNOWN_SASL_SUPPORT = 0;
> SASL_AUTH = 1;
> SASL_PRIVACY = 2;
> }
>
> A 1.11 client always has SASL_PRIVACY set in the handshake. A 1.10 Drillbit
> receiving the message interprets the value as UNKNOWN_SASL_SUPPORT [3]. The
> 1.10 Drillbit treats that value as an indication of an older client (< 1.10)
> [4], so it will try to authenticate using the 1.9 mechanism and send the
> SUCCESS/FAILURE state in the handshake response. The 1.12 DrillClient will
> fail the connection because it expects AUTH_REQUIRED from the Drillbit
> instead of SUCCESS/FAILURE.
>
>
> The fix for this issue can be to use only the absence of the field as an
> indication of a client < 1.10; if the field is present but evaluates to
> UNKNOWN_SASL_SUPPORT, then the Drillbit should consider the corresponding
> client to be of a future version, at least for authentication purposes, and
> speak the SASL protocol.
>
> We have to either back-port the above forward-compatibility fix into 1.10 and
> 1.11, or just fix it in the current release and support forward compatibility
> from 1.12 onward.
>
>
> Arina/Sudheesh - Please suggest whether the analysis and fix sound good and
> which releases we should consider the fix for. Considering the 1.12 release
> is planned soon, can we take the fix in the 1.12 release?
>
>
>
> [1]: https://github.com/apache/drill/blob/1.10.0/protocol/
> src/main/protobuf/User.proto#L70
>
> [2]: https://github.com/apache/drill/blob/1.11.0/protocol/
> src/main/protobuf/User.proto#L70
>
> [3]: http://androiddevblog.com/protocol-buffers-pitfall-
> adding-enum-values/
>
> [4]: https://github.com/apache/drill/blob/1.10.0/exec/java-
> exec/src/main/java/org/apache/drill/exec/rpc/user/UserServer.java#L297
>
>
> Thanks,
> Sorabh
>


Drill SASL Forward Compatibility

2017-10-30 Thread Sorabh Hamirwasia
Hi All,

We recently added a check (as part of DRILL-5582) on the DrillClient side to 
enforce that if the client showed intent for authentication and the Drillbit 
says it doesn't require authentication, then the connection will fail with a 
proper error message.


With this change we found a hidden issue w.r.t. forward compatibility of Drill. 
New clients running on 1.11+ authenticating to an older Drillbit running 1.10 
are treated as clients running without any SASL support (i.e. <= 1.9). The 
root cause for this issue is as follows:


In 1.10 a new field SASL_SUPPORT was introduced in the handshake message between 
DrillClient and Drillbit. The supported values for that field are:


Drill 1.10: [1]


enum SASL_SUPPORT {
UNKNOWN_SASL_SUPPORT = 0;
SASL_AUTH = 1;
}


Drill 1.11/1.12: [2]


enum SASL_SUPPORT {
UNKNOWN_SASL_SUPPORT = 0;
SASL_AUTH = 1;
SASL_PRIVACY = 2;
}

A 1.11 client always has SASL_PRIVACY set in the handshake. A 1.10 Drillbit receiving 
the message interprets the value as UNKNOWN_SASL_SUPPORT [3]. The 1.10 Drillbit 
treats that value as an indication of an older client (< 1.10) [4], so it will try 
to authenticate using the 1.9 mechanism and send the SUCCESS/FAILURE state in the 
handshake response. The 1.12 DrillClient will fail the connection because it 
expects AUTH_REQUIRED from the Drillbit instead of SUCCESS/FAILURE.


The fix for this issue can be to use only the absence of the field as an indication 
of a client < 1.10; if the field is present but evaluates to the UNKNOWN_SASL_SUPPORT 
value, then the Drillbit should consider the corresponding client to be of a future 
version, at least for authentication purposes, and speak the SASL protocol.
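
A minimal sketch of that decision (the handshake accessor names below are assumptions 
based on the protobuf snippets above, not the actual UserServer code):

import org.apache.drill.exec.proto.UserProtos.UserToBitHandshake;

final class SaslSupportCheck {
  static boolean shouldSpeakSasl(UserToBitHandshake inbound) {
    if (!inbound.hasSaslSupport()) {
      // Field absent: the client is pre-1.10, so fall back to the legacy (1.9) mechanism.
      return false;
    }
    // Field present: SASL_AUTH, SASL_PRIVACY, or a value from a future client that this
    // Drillbit maps to UNKNOWN_SASL_SUPPORT. In all of these cases, treat the client as
    // SASL-capable and speak the SASL protocol.
    return true;
  }
}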

We have to either back-port the above forward-compatibility fix into 1.10 and 1.11, 
or just fix it in the current release and support forward compatibility from 1.12 onward.


Arina/Sudheesh - Please suggest whether the analysis and fix sound good and which 
releases we should consider the fix for. Considering the 1.12 release is 
planned soon, can we take the fix in the 1.12 release?



[1]: 
https://github.com/apache/drill/blob/1.10.0/protocol/src/main/protobuf/User.proto#L70

[2]: 
https://github.com/apache/drill/blob/1.11.0/protocol/src/main/protobuf/User.proto#L70

[3]: http://androiddevblog.com/protocol-buffers-pitfall-adding-enum-values/

[4]: 
https://github.com/apache/drill/blob/1.10.0/exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserServer.java#L297


Thanks,
Sorabh


[jira] [Created] (DRILL-5914) CSV (text) reader fails to parse quoted newlines in trailing fields

2017-10-30 Thread Paul Rogers (JIRA)
Paul Rogers created DRILL-5914:
--

 Summary: CSV (text) reader fails to parse quoted newlines in 
trailing fields
 Key: DRILL-5914
 URL: https://issues.apache.org/jira/browse/DRILL-5914
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.11.0
Reporter: Paul Rogers
Assignee: Paul Rogers


Consider the existing `TestCsvHeader.testCountOnCsvWithHeader()` unit test. The 
input file is as follows:

```
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00
```

Note the newline inside the description field in the last record.

If we do a `SELECT *` query, the file is parsed fine; we get 4 records.

If we do a `SELECT Year, Model` query, the CSV reader uses a special trick: it 
short-circuits reads on the three columns that are not wanted:

```
TextReader.parseRecord() {
  ...
  if (earlyTerm) {
    if (ch != newLine) {
      input.skipLines(1); // <-- skip lines
    }
    break;
  }
  ...
}
```

This method skips forward in the file, discarding characters until it hits a 
newline:

```
do {
  nextChar();
} while (lineCount < expectedLineCount);
```

Note that this code handles individual characters; it is not aware of per-field 
semantics. That is, unlike the higher-level parser methods, the `nextChar()` 
method does not treat newlines inside quoted fields as special.

This problem shows up acutely in a `SELECT COUNT(*)`-style query that skips all 
fields; the result is that we count five records instead of four.
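
A fix would need a quote-aware skip that tracks quoting state instead of counting 
raw newlines. A rough standalone sketch of the idea (this is not the parser API 
Drill actually uses):

```
import java.io.IOException;
import java.io.Reader;

final class QuoteAwareSkip {
  // Skip to the end of the current record, honoring RFC 4180-style quoting so
  // that a newline inside a quoted field does not end the record early.
  static void skipRecord(Reader in, char quote) throws IOException {
    boolean inQuotes = false;
    int ch;
    while ((ch = in.read()) != -1) {
      if (ch == quote) {
        inQuotes = !inQuotes; // naive toggle; an escaped quote ("") toggles twice and nets out
      } else if (ch == '\n' && !inQuotes) {
        break; // only an unquoted newline terminates the record
      }
    }
  }
}
```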





[GitHub] drill issue #976: DRILL-5797: Choose parquet reader from read columns

2017-10-30 Thread paul-rogers
Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/976
  
@dprofeta, I tried to commit this PR, but ran into multiple functional test 
failures:

```
Execution Failures:

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex12.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex8.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex56.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex274.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex7.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex57.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex102.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex5.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex10.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex9.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex203.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex101.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex275.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex6.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex205.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex11.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex58.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex153.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex202.q

/root/drillAutomation/mapr/framework/resources/Functional/complex/parquet/complex151.q
```

The common failure stack trace seems to be:

```

org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleException():272

org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.setup():256
org.apache.drill.exec.physical.impl.ScanBatch.getNextReaderIfHas():241
org.apache.drill.exec.physical.impl.ScanBatch.next():167
...
``` 


---


[GitHub] drill pull request #978: DRILL-5842: Refactor and simplify the fragment, ope...

2017-10-30 Thread paul-rogers
Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/978#discussion_r147842797
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/MiniPlanUnitTestBase.java
 ---
@@ -360,7 +366,7 @@ public T columnsToRead(String ... columnsToRead) {
*/
   public class JsonScanBuilder extends ScanPopBuider {
 List jsonBatches = null;
-List inputPaths = Collections.EMPTY_LIST;
+List inputPaths = Collections.emptyList();
--- End diff --

This is a subtle point. Using the constant creates the expression:

```
public static final List EMPTY_LIST = new EmptyList<>(); // Definition (raw type)
List<String> inputPaths = EMPTY_LIST; // Original code
```

The above is not type-friendly: we are setting a typed list (`inputPaths`) 
to an untyped constant (`EMPTY_LIST`).

The revised code uses Java's parameterized methods to work around the type 
ambiguity:

```
public static final <T> List<T> emptyList() { ... } // Definition
List<String> inputPaths = Collections.emptyList(); // Type-safe assignment
```

Functionally, the two expressions are identical. But, the original was 
type-unsafe and generated a compiler warning. The new one is type-safe and 
resolves the warning.
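
A standalone illustration (not from the PR) of where the warning comes from:

```
import java.util.Collections;
import java.util.List;

class EmptyListDemo {
  static void demo() {
    // Raw-typed constant: compiles, but javac reports an unchecked-conversion warning.
    List<String> fromConstant = Collections.EMPTY_LIST;
    // Generic method: the type parameter is inferred from the target type, no warning.
    List<String> fromMethod = Collections.emptyList();
  }
}
```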


---


[GitHub] drill issue #978: DRILL-5842: Refactor and simplify the fragment, operator c...

2017-10-30 Thread paul-rogers
Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/978
  
@jinfengni, sorry my description was a bit ambiguous. This PR has two goals:

1. Clean up the contexts based on what has been learned recently.
2. Evolve the contexts to allow "sub-operator" unit testing without mocks.

Drill has many tests that use mocks and these are unaffected by the change. 
The PR simply allows new tests to do sub-operator testing without mocks, when 
convenient. 


---


Re: Implicit columns and maps

2017-10-30 Thread Paul Rogers
Hi Jinfeng,

Thanks much for your thoughts. To follow up on two topics.

First, how would you suggest we handle the following:

SELECT user, filename, `filename` FROM …

Here, the project list handed to the scan operator will have three columns, two 
with name “filename”. I suppose we’d need to modify SchemaPath to indicate if a 
column is intended to be a metadata (implicit) column or a regular (table) 
column. In that way, the name (TABLE, “filename”) would not collide with 
(METADATA, “filename”).
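
A rough sketch of that (kind, name) pairing, purely as an illustration (the real 
change would live in SchemaPath itself):

import java.util.Objects;

enum ColumnKind { TABLE, METADATA }

final class QualifiedColumn {
  final ColumnKind kind;
  final String name;

  QualifiedColumn(ColumnKind kind, String name) {
    this.kind = kind;
    this.name = name;
  }

  // (TABLE, "filename") and (METADATA, "filename") no longer collide, because
  // equality considers the kind as well as the name.
  @Override
  public boolean equals(Object o) {
    return o instanceof QualifiedColumn
        && kind == ((QualifiedColumn) o).kind
        && name.equals(((QualifiedColumn) o).name);
  }

  @Override
  public int hashCode() {
    return Objects.hash(kind, name);
  }
}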

Second, on the directory columns, we wouldn’t want to break existing queries. 
So, the thought is that the user can decide whether to use `dir` (all 
partitions in an array) or `dir0`, `dir1` and so on.

The issue with the specific `dirN` columns is the problem that occurs if scan A 
sees “/foo/first.csv” while scan B sees “/foo/bar/second.csv”. The first scan 
creates only dir0, the second creates dir0 and dir1. The result is a schema 
change in, say, the sort. (The point here is two distinct scans; we have a 
solution if these two files are two readers within a single scan.)

By using an array, the schema is identical between the two scans; only the 
data differs.

I suspect that the “dir0”, “dir1” columns originated in a non-distributed 
system where the above ambiguity does not occur.

Or, is there a better way to solve the problem? Special-case the “dir” columns 
in each operator that might otherwise have to deal with a schema change?

Thanks,

- Paul

> On Oct 27, 2017, at 10:56 PM, Jinfeng Ni  wrote:
> 
> I think it would make sense to treat the implicit columns as reserved
> words, and if user wants to use those names as regular column, or table
> name, or any objects in the system, they should use quoted identifier.
> 
> select suffix from t1; // implicit column takes precedence
> 
> select `suffix` from t1; // regular column takes precedence
> 
> That's how databases typically handle the conflict between
> keywords/reserved words and identifiers used for regular objects.
> 
> Regarding the change from dir0, dir1 to dir[0], dir[1], personally I do not
> think it's a good idea, considering the impact on the application side (it
> will break any application built on Drill that uses those columns).
> However, I agree that we should not add implicit columns in the record
> reader and then remove them later, as reported in DRILL-5542 [1].
> 
> 1. https://issues.apache.org/jira/browse/DRILL-5542
> 
> 
> On Mon, Oct 9, 2017 at 2:38 PM, Paul Rogers  wrote:
> 
>> As you point out, naming is a separate issue. I believe we inherited the names
>> from some other system. But it is an issue that we use “good” names for
>> implicit columns. If we add more names (“createDate”, “modificationDate”,
>> “owner”, or whatever), we end up breaking someone’s queries that have
>> columns with those names.
>> 
>> It would be good to have a prefix, as you suggest, to separate Drill names
>> from user names. As it turns out, Drill supports maps (AKA structs) and
>> arrays, so perhaps we could have:
>> 
>> $meta$
>> |- filename
>> |- fqn
>> |- …
>> |- dir[]
>> 
>> Where “dir” is an array, rather than the current separate scalar dir0,
>> dir1, etc.
>> 
>> The above map lets us add any number of metadata columns without
>> potentially breaking existing queries.
>> 
>> Note that we already have a problem where we can hit a hard schema change
>> because one reader sees “/a/b/c/foo.csv” while another sees “a/b/bar.csv”,
>> resulting in different numbers of “dirx” columns from the two readers.
>> 
>> Yet another issue is that, in wildcard queries (e.g. “SELECT *”), we add
>> all implicit columns, then remove them later. We should optimize this case.
>> 
>> But, even if we keep the original names, and defer the other issues, the
>> question about map semantics still stands…
>> 
>> - Paul
>> 
>>> On Oct 9, 2017, at 12:06 PM, Boaz Ben-Zvi  wrote:
>>> 
>>>   How about changing all those “implicit” columns to have some
>> “unconventional” prefix, like an underscore (or two _ _ ); e.g. _suffix,
>> _dir0, etc.
>>> 
>>> With such a change we may need to handle the transition of existing
>> users’ code; e.g., maybe change the priority (mentioned below) so that an
>> existing “suffix” column takes precedence over the implicit one.
>>> Or just go “cold turkey” and force the users to change.
>>> 
>>>Just an idea,
>>> 
>>>   Boaz
>>> 
>>> On 10/9/17, 10:45 AM, "Paul Rogers"  wrote:
>>> 
>>>   Hi All,
>>> 
>>>   Drill provides a set of “implicit” columns to describe files:
>> filename, suffix, fqn and filepath. Drill also provides an open-ended set
>> of partition columns: dir0, dir1, dir2, etc.
>>> 
>>>   Not all readers support the above: some do and some don’t.
>>> 
>>>   Drill semantics seem to treat these as semi-reserved words when a
>> reader supports implicit columns. If a table has a “suffix” column, then
>> Drill will treat “suffix” as an implicit 

[GitHub] drill pull request #1009: DRILL-5905: Exclude jdk-tools from project depende...

2017-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1009


---


[GitHub] drill pull request #1013: DRILL-5910: Logging exception when custom Authenti...

2017-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1013


---


[GitHub] drill pull request #1006: DRILL-5895: Add logging mongod exception when fail...

2017-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1006


---


[GitHub] drill pull request #1012: DRILL-5911: Upgrade esri-geometry-api version to 2...

2017-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1012


---


[GitHub] drill pull request #1010: DRILL-5906: java.lang.NullPointerException while q...

2017-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1010


---


Re: [HANGOUT] Topics for 10/31/2017

2017-10-30 Thread Timothy Farkas
I'll speak about unit testing:


 - Common mistakes that were made in unit tests

 - The soon to be merged temp directory test watcher classes

 - Making our Travis build run smoke tests with code coverage

 - Other misc improvements to be made

Thanks,
Tim


From: Gautam Parai 
Sent: Monday, October 30, 2017 9:26:34 AM
To: dev@drill.apache.org; u...@drill.apache.org
Subject: [HANGOUT] Topics for 10/31/2017

Hi,

We will have a Drill hangout tomorrow (Tuesday Oct 31) at 10 AM Pacific Time. 
Please suggest topics by replying to this thread or bring them up during the 
hangout.

Hangout link:  
https://plus.google.com/hangouts/_/event/ci4rdiju8bv04a64efj5fedd0lc

Thanks,
Gautam


[GitHub] drill issue #996: DRILL-5878: TableNotFound exception is being reported for ...

2017-10-30 Thread HanumathRao
Github user HanumathRao commented on the issue:

https://github.com/apache/drill/pull/996
  
@arina-ielchiieva Thank you for the review comments. I have modified the 
code accordingly. Please let me know if anything needs to be changed.


---


[GitHub] drill pull request #996: DRILL-5878: TableNotFound exception is being report...

2017-10-30 Thread HanumathRao
Github user HanumathRao commented on a diff in the pull request:

https://github.com/apache/drill/pull/996#discussion_r147818548
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/SchemaUtilites.java
 ---
@@ -77,6 +77,22 @@ public static SchemaPlus findSchema(final SchemaPlus 
defaultSchema, final String
 return findSchema(defaultSchema, schemaPathAsList);
   }
 
+  /**
+   * Utility function to get the commonPrefix schema between two supplied 
schemas.
--- End diff --

done.


---


[GitHub] drill pull request #936: DRILL-5772: Enable UTF-8 support in query string by...

2017-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/936


---


[GitHub] drill pull request #998: DRILL-5887: Display process user/groups info in Dri...

2017-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/998


---


[GitHub] drill pull request #993: DRILL-5874: NPE in AnonWebUserConnection.cleanupSes...

2017-10-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/993


---


[jira] [Resolved] (DRILL-1499) Different column order could appear in the result set for a schema-less select * query, even there are no changing schemas.

2017-10-30 Thread Vitalii Diravka (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitalii Diravka resolved DRILL-1499.

   Resolution: Resolved
 Assignee: Vitalii Diravka  (was: Steven Phillips)
Fix Version/s: 1.12.0  (was: Future)

There is no need to canonicalize the batch or container, since RecordBatchLoader 
for now swallows the schema change if two batches have different column ordering.
Resolved in the context of DRILL-5845.

> Different column order could appear in the result set for a schema-less 
> select * query, even there are no changing schemas.
> ---
>
> Key: DRILL-1499
> URL: https://issues.apache.org/jira/browse/DRILL-1499
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Jinfeng Ni
>Assignee: Vitalii Diravka
> Fix For: 1.12.0
>
>
> For a select * query referring to a schema-less table, Drill could return a 
> different column order, depending on the physical operators the query involves:
> Q1:
> {code}
> select * from cp.`employee.json` limit 3;
> +-++++-+++---++++---+-+++-+
> | employee_id | full_name  | first_name | last_name  | position_id | 
> position_title |  store_id  | department_id | birth_date | hire_date  |   
> salary   | supervisor_id | education_level | marital_status |   gender   | 
> management_role |
> +-++++-+++---++++---+-+++-+
> {code}
> Q2:
> {code}
> select * from cp.`employee.json` order by last_name limit 3;
> ++---+-+-++++++-++-++++---+
> | birth_date | department_id | education_level | employee_id | first_name | 
> full_name  |   gender   | hire_date  | last_name  | management_role | 
> marital_status | position_id | position_title |   salary   |  store_id  | 
> supervisor_id |
> ++---+-+-++++++-++-++++---+
> {code}
> The difference between Q1 and Q2 is the order by clause. With the order by 
> clause in Q2, Drill will sort the column names alphabetically, while for Q1, 
> the column names are in the same order as in the data source. 
> The underlying cause for such a difference is that the sort or sort-based 
> merge operator would require canonicalization, since the incoming batches 
> could contain different schemas. 
> However, it would be better if such canonicalization were used only when the 
> incoming batches have changing schemas. If all the incoming batches have 
> identical schemas, there is no need to sort the column order. With this fix, 
> Drill will present the same column order in the result set for a schema-less 
> select * query, if there are no changing schemas from the incoming data sources.





Re: [HANGOUT] Topics for 10/31/2017

2017-10-30 Thread Vlad Rozov
I'd suggest discussing migration to gitbox https://gitbox.apache.org/. 
For more details please see https://git.apache.org/.


Thank you,

Vlad

On 10/30/17 09:26, Gautam Parai wrote:

Hi,

We will have a Drill hangout tomorrow (Tuesday Oct 31) at 10 AM Pacific Time. 
Please suggest topics by replying to this thread or bring them up during the 
hangout.

Hangout link:  
https://plus.google.com/hangouts/_/event/ci4rdiju8bv04a64efj5fedd0lc

Thanks,
Gautam





Re: [GitHub] drill issue #971: Drill-5834 Add Networking Functions

2017-10-30 Thread Charles Givre
I *think* I just did this correctly, but let me know if I didn’t.
—C


> On Oct 30, 2017, at 12:54, paul-rogers  wrote:
> 
> Github user paul-rogers commented on the issue:
> 
>https://github.com/apache/drill/pull/971
> 
>Please squash commits.
> 
> 
> ---



[GitHub] drill issue #1005: DRILL-5896: Handle HBase columns vector creation in the H...

2017-10-30 Thread prasadns14
Github user prasadns14 commented on the issue:

https://github.com/apache/drill/pull/1005
  
@paul-rogers please review the changes


---


RE: [HANGOUT] Topics for 10/31/2017

2017-10-30 Thread Matthew Mucker
I'd like a talk on 'how to get started debugging Drill.'

Without a developer's guide or explicit instructions for getting a debug
environment up and running, it's hard for a newcomer to get involved. 

-Original Message-
From: Gautam Parai [mailto:gpa...@mapr.com] 
Sent: Monday, October 30, 2017 11:27 AM
To: dev@drill.apache.org; u...@drill.apache.org
Subject: [HANGOUT] Topics for 10/31/2017

Hi,

We will have a Drill hangout tomorrow (Tuesday Oct 31) at 10 AM Pacific
Time. Please suggest topics by replying to this thread or bring them up
during the hangout.

Hangout link:
https://plus.google.com/hangouts/_/event/ci4rdiju8bv04a64efj5fedd0lc

Thanks,
Gautam



[GitHub] drill issue #971: Drill-5834 Add Networking Functions

2017-10-30 Thread paul-rogers
Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/971
  
Please squash commits.


---


[HANGOUT] Topics for 10/31/2017

2017-10-30 Thread Gautam Parai
Hi,

We will have a Drill hangout tomorrow (Tuesday Oct 31) at 10 AM Pacific Time. 
Please suggest topics by replying to this thread or bring them up during the 
hangout.

Hangout link:  
https://plus.google.com/hangouts/_/event/ci4rdiju8bv04a64efj5fedd0lc

Thanks,
Gautam


[GitHub] drill issue #1013: DRILL-5910: Logging exception when custom AuthenticatorFa...

2017-10-30 Thread vrozov
Github user vrozov commented on the issue:

https://github.com/apache/drill/pull/1013
  
LGTM


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134712
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
+}
+this.tableIterator = tables.iterator();
+  }
+
+  @Override
+  public int next() {
+try {
+  return processOpenTSDBTablesData();
+} catch (SchemaChangeException e) {
+  log.info(e.toString());
+  return 0;
+}
+  }
+
+  @Override
+  protected boolean isSkipQuery() {
+return super.isSkipQuery();
+  }
+
+  @Override
+  public void close() throws Exception {
+  }
+
+  static {
+TYPES = ImmutableMap.builder()
+.put(OpenTSDBTypes.STRING, MinorType.VARCHAR)
+.put(OpenTSDBTypes.DOUBLE, 

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134172
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
--- End diff --

1. If we didn't find the table, it's fine to fail, but why do we return an error 
if the table is empty? Shouldn't we just return an empty result? What does 
`db.getTablesFromDB();` actually return? All tables from the db?
2. I had incorrectly configured the connection to OpenTSDB. When I ran the query 
`SELECT * FROM openTSDB.(metric=mymetric.stock)` I got a 
validation error, though the real root cause was that we could not connect to 
OpenTSDB. In `drillbit.out` the following error was logged: 
```
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583562
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/query/Query.java
 ---
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.query;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static 
org.apache.drill.exec.store.openTSDB.Constants.SUM_AGGREGATOR;
+
+/**
+ * Query is an abstraction of openTSDB subQuery
+ * and it is integral part of DBQuery
+ * 
+ * Each sub query can retrieve individual or groups of timeseries data,
+ * performing aggregation on each set.
+ */
+public class Query {
+
+  /**
+   * The name of an aggregation function to use.
+   */
+  private String aggregator;
+  /**
+   * The name of a metric stored in the system
+   */
+  private String metric;
+  /**
+   * Whether or not the data should be converted into deltas before 
returning.
+   * This is useful if the metric is a continuously incrementing counter
+   * and you want to view the rate of change between data points.
+   */
+  private String rate;
+  /**
+   * An optional downsampling function to reduce the amount of data 
returned.
+   */
+  private String downsample;
+  /**
+   * To drill down to specific timeseries or group results by tag,
+   * supply one or more map values in the same format as the query string.
+   */
+  private Map tags;
+
+  private Query(Builder builder) {
+this.aggregator = builder.aggregator;
+this.metric = builder.metric;
+this.rate = builder.rate;
+this.downsample = builder.downsample;
+this.tags = builder.tags;
+  }
+
+  public String getAggregator() {
+return aggregator;
+  }
+
+  public String getMetric() {
+return metric;
+  }
+
+  public String getRate() {
+return rate;
+  }
+
+  public String getDownsample() {
+return downsample;
+  }
+
+  public Map getTags() {
+return tags;
+  }
+
+  public static class Builder {
+
+private String aggregator = SUM_AGGREGATOR;
--- End diff --

Aggregation is a required parameter [1]; we should fail and ask the user to provide 
one, or even suggest the list of available aggregators, rather than using a made-up 
default. Taking into account OpenTSDB specifics, it's rather confusing to allow 
the following queries:
```
SELECT * FROM openTSDB.`warp.speed.test`
```

[1] http://opentsdb.net/docs/build/html/api_http/query/index.html
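
A minimal sketch of such validation in the `Query.Builder` (the message text and the 
`AVAILABLE_AGGREGATORS` constant are illustrative, not part of the PR; `UserException` 
is Drill's org.apache.drill.common.exceptions.UserException):

```
public Builder setAggregator(String aggregator) {
  // Sketch only: make the aggregator mandatory instead of silently defaulting to "sum".
  if (aggregator == null || aggregator.isEmpty()) {
    throw UserException.validationError()
        .message("Aggregator is required. Available aggregators: %s", AVAILABLE_AGGREGATORS)
        .build(log);
  }
  this.aggregator = aggregator;
  return this;
}
```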


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582647
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
--- End diff --

public?


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147585085
  
--- Diff: 
contrib/storage-opentsdb/src/test/java/org/apache/drill/store/openTSDB/TestDataHolder.java
 ---
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.store.openTSDB;
+
+class TestDataHolder {
--- End diff --

public


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584976
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
+
+  public OpenTSDBSchemaFactory(OpenTSDBStoragePlugin plugin, String 
schemaName) {
+this.plugin = plugin;
+this.schemaName = schemaName;
+  }
+
+  @Override
+  public void registerSchemas(SchemaConfig schemaConfig, SchemaPlus 
parent) throws IOException {
+OpenTSDBTables schema = new OpenTSDBTables(schemaName);
+parent.add(schemaName, schema);
+  }
+
+  class OpenTSDBTables extends AbstractSchema {
+private final Map schemaMap = 
Maps.newHashMap();
+
+OpenTSDBTables(String name) {
+  super(Collections.emptyList(), name);
+}
+
+@Override
+public AbstractSchema getSubSchema(String name) {
+  Set tables;
--- End diff --

I don't think OpenTSDB has subschemas. For example, in `dfs.tmp.table`, `tmp` is 
the subschema. In OpenTSDB we refer to a metric as a table, not as a 
subschema. This method should return an empty collection, the same as 
`getSubSchemaNames()`.


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584562
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
+
+  public OpenTSDBSchemaFactory(OpenTSDBStoragePlugin plugin, String 
schemaName) {
+this.plugin = plugin;
+this.schemaName = schemaName;
+  }
+
+  @Override
+  public void registerSchemas(SchemaConfig schemaConfig, SchemaPlus 
parent) throws IOException {
+OpenTSDBTables schema = new OpenTSDBTables(schemaName);
+parent.add(schemaName, schema);
+  }
+
+  class OpenTSDBTables extends AbstractSchema {
+private final Map schemaMap = 
Maps.newHashMap();
+
+OpenTSDBTables(String name) {
+  super(Collections.emptyList(), name);
+}
+
+@Override
+public AbstractSchema getSubSchema(String name) {
+  Set tables;
+  if (!schemaMap.containsKey(name)) {
+tables = plugin.getClient().getAllTableNames();
+schemaMap.put(name, new OpenTSDBDatabaseSchema(tables, this, 
name));
+  }
+  return schemaMap.get(name);
+}
+
+@Override
+public Set getSubSchemaNames() {
+  return Collections.emptySet();
+}
+
+@Override
+public Table getTable(String name) {
+  OpenTSDBScanSpec scanSpec = new OpenTSDBScanSpec(name);
+  name = getValidTableName(name);
+  try {
+return new DrillOpenTSDBTable(schemaName, plugin, new 
Schema(plugin.getClient(), name), scanSpec);
+  } catch (Exception e) {
+log.warn("Failure while retrieving openTSDB table {}", name, e);
+return null;
+  }
+}
+
+@Override
+public Set getTableNames() {
+  return plugin.getClient().getAllTableNames();
+}
+
+@Override
+public CreateTableEntry createNewTable(final String tableName, 
List partitionColumns) {
+  return null;
--- End diff --

The parent behavior is to fail, indicating that table creation is not allowed. I 
guess we can stick to that rather than returning null.
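
A minimal sketch of mirroring that behavior (the message text and use of 
`UserException.unsupportedError()` are assumptions, not the parent's exact code):

```
@Override
public CreateTableEntry createNewTable(String tableName, List<String> partitionColumns) {
  // Fail explicitly, as the parent schema does, instead of returning null.
  throw UserException.unsupportedError()
      .message("Creating new tables is not supported in the openTSDB storage plugin")
      .build(log);
}
```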


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583241
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/Util.java
 ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.base.Splitter;
+
+import java.util.Map;
+
+public class Util {
+
+  /**
+   * Parse FROM parameters to Map representation
+   *
+   * @param rowData with this syntax (metric=warp.speed.test)
+   * @return Map with params key: metric, value: warp.speed.test
+   */
+  public static Map parseFROMRowData(String rowData) {
--- End diff --

`parseFromRowData`


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135322
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBGroupScan.java
 ---
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ListMultimap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.EndpointAffinity;
+import org.apache.drill.exec.physical.base.AbstractGroupScan;
+import org.apache.drill.exec.physical.base.GroupScan;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.ScanStats;
+import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import 
org.apache.drill.exec.store.openTSDB.OpenTSDBSubScan.OpenTSDBSubScanSpec;
+import org.apache.drill.exec.store.schedule.AffinityCreator;
+import org.apache.drill.exec.store.schedule.AssignmentCreator;
+import org.apache.drill.exec.store.schedule.CompleteWork;
+import org.apache.drill.exec.store.schedule.EndpointByteMap;
+import org.apache.drill.exec.store.schedule.EndpointByteMapImpl;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+@JsonTypeName("openTSDB-scan")
+public class OpenTSDBGroupScan extends AbstractGroupScan {
+
+  private static final long DEFAULT_TABLET_SIZE = 1000;
+
+  private OpenTSDBStoragePluginConfig storagePluginConfig;
+  private OpenTSDBScanSpec openTSDBScanSpec;
+  private OpenTSDBStoragePlugin storagePlugin;
+
+  private ListMultimap assignments;
+  private List columns;
+  private List openTSDBWorkList = Lists.newArrayList();
+  private List affinities;
+
+  private boolean filterPushedDown = false;
+
+  @JsonCreator
+  public OpenTSDBGroupScan(@JsonProperty("openTSDBScanSpec") 
OpenTSDBScanSpec openTSDBScanSpec,
+   @JsonProperty("storage") 
OpenTSDBStoragePluginConfig openTSDBStoragePluginConfig,
+   @JsonProperty("columns") List 
columns,
+   @JacksonInject StoragePluginRegistry 
pluginRegistry) throws IOException, ExecutionSetupException {
+this((OpenTSDBStoragePlugin) 
pluginRegistry.getPlugin(openTSDBStoragePluginConfig), openTSDBScanSpec, 
columns);
+  }
+
+  public OpenTSDBGroupScan(OpenTSDBStoragePlugin storagePlugin,
+   OpenTSDBScanSpec scanSpec, List 
columns) {
+super((String) null);
+this.storagePlugin = storagePlugin;
+this.storagePluginConfig = storagePlugin.getConfig();
+this.openTSDBScanSpec = scanSpec;
+this.columns = columns == null || columns.size() == 0 ? ALL_COLUMNS : 
columns;
+init();
+  }
+
+  /**
+   * Private constructor, used for cloning.
+   *
+   * @param that The OpenTSDBGroupScan to clone
+   */
+  private OpenTSDBGroupScan(OpenTSDBGroupScan that) {
+super((String) null);
+this.columns = that.columns;
+this.openTSDBScanSpec = that.openTSDBScanSpec;
+this.storagePlugin = that.storagePlugin;
+this.storagePluginConfig = that.storagePluginConfig;
+this.filterPushedDown = that.filterPushedDown;
+

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582484
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/OpenTSDBTypes.java
 ---
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client;
+
+/**
+ * Types in openTSDB records,
+ * used for converting openTSDB data to Sql representation
+ */
+public enum OpenTSDBTypes {
+  STRING,
+  DOUBLE,
+  TIMESTAMP,
--- End diff --

The trailing comma can be removed.


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584411
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
--- End diff --

final?


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583993
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List getUnfixedColumnsToSchema() {
+Set tables = getAllMetricsByTags();
+List unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
+  return Collections.emptySet();
+}
+  }
+
+  private Set getTableNames() {
+try {
+  return client.getAllTablesName().execute().body();
+} catch (IOException e) {
+  e.printStackTrace();
+  return Collections.emptySet();
+}
+  }
+
+  private Set getAllTablesWithSpecialTag(DBQuery base) throws 
IOException {
+return client.getTables(base).execute().body();
+  }
+
+  private Set getAllMetricsFromDBByTags() throws IOException {
+Map tags = new HashMap<>();
+
+Query subQuery = new Query.Builder(queryParameters.get(METRIC))
+   

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584453
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
+
+  public OpenTSDBSchemaFactory(OpenTSDBStoragePlugin plugin, String 
schemaName) {
+this.plugin = plugin;
+this.schemaName = schemaName;
+  }
+
+  @Override
+  public void registerSchemas(SchemaConfig schemaConfig, SchemaPlus 
parent) throws IOException {
+OpenTSDBTables schema = new OpenTSDBTables(schemaName);
+parent.add(schemaName, schema);
+  }
+
+  class OpenTSDBTables extends AbstractSchema {
+private final Map schemaMap = 
Maps.newHashMap();
+
+OpenTSDBTables(String name) {
+  super(Collections.emptyList(), name);
+}
+
+@Override
+public AbstractSchema getSubSchema(String name) {
+  Set tables;
+  if (!schemaMap.containsKey(name)) {
+tables = plugin.getClient().getAllTableNames();
+schemaMap.put(name, new OpenTSDBDatabaseSchema(tables, this, 
name));
+  }
+  return schemaMap.get(name);
+}
+
+@Override
+public Set getSubSchemaNames() {
+  return Collections.emptySet();
+}
+
+@Override
+public Table getTable(String name) {
+  OpenTSDBScanSpec scanSpec = new OpenTSDBScanSpec(name);
+  name = getValidTableName(name);
+  try {
+return new DrillOpenTSDBTable(schemaName, plugin, new 
Schema(plugin.getClient(), name), scanSpec);
+  } catch (Exception e) {
+log.warn("Failure while retrieving openTSDB table {}", name, e);
+return null;
+  }
+}
+
+@Override
+public Set getTableNames() {
+  return plugin.getClient().getAllTableNames();
+}
+
+@Override
+public CreateTableEntry createNewTable(final String tableName, 
List partitionColumns) {
+  return null;
+}
+
+@Override
+public void dropTable(String tableName) {
--- End diff --

The parent behavior is to fail, indicating that table dropping is not allowed. 
Why override it with an empty method?
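
If we do keep an override, a minimal sketch of the alternative (the exact message text is made up here):

```
@Override
public void dropTable(String tableName) {
  // Sketch: fail loudly instead of silently ignoring the call; alternatively,
  // drop this override entirely and let the parent's failure surface.
  throw new UnsupportedOperationException(
      String.format("Dropping openTSDB table '%s' is not supported", tableName));
}
```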


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134773
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
+}
+this.tableIterator = tables.iterator();
+  }
+
+  @Override
+  public int next() {
+try {
+  return processOpenTSDBTablesData();
+} catch (SchemaChangeException e) {
+  log.info(e.toString());
--- End diff --

I guess we should fail on `SchemaChangeException`, indicating that we don't 
support it, rather than returning 0?
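
Roughly along these lines (a sketch only, assuming Drill's `UserException` builder is the right way to surface this):

```
@Override
public int next() {
  try {
    return processOpenTSDBTablesData();
  } catch (SchemaChangeException e) {
    // Sketch: propagate the failure instead of silently returning 0 rows.
    throw UserException.unsupportedError(e)
        .message("Schema changes are not supported by the openTSDB record reader")
        .build(log);
  }
}
```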


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583640
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List getUnfixedColumnsToSchema() {
+Set tables = getAllMetricsByTags();
+List unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
--- End diff --

Why do we log the error rather than fail? Is the failure non-critical?
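
If the failure is in fact critical, a sketch of how it could be surfaced (assuming `UserException` is acceptable at this layer; the type parameters are spelled out here):

```
private Set<MetricDTO> getAllMetricsByTags() {
  try {
    return getAllMetricsFromDBByTags();
  } catch (IOException e) {
    // Sketch: fail the query instead of silently returning an empty result.
    throw UserException.dataReadError(e)
        .message("Failure while fetching metrics from openTSDB")
        .build(log);
  }
}
```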


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583342
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/query/DBQuery.java
 ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.query;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.DEFAULT_TIME;
+
+/**
+ * DBQuery is an abstraction of an openTSDB query,
+ * that used for extracting data from the storage system by POST request 
to DB.
+ * 
+ * An OpenTSDB query requires at least one sub query,
+ * a means of selecting which time series should be included in the result 
set.
+ */
+public class DBQuery {
+
+  /**
+   * The start time for the query. This can be a relative or absolute 
timestamp.
+   */
+  private String start;
+  /**
+   * One or more sub subQueries used to select the time series to return.
+   */
+  private Set queries;
+
+  private DBQuery(Builder builder) {
+this.start = builder.start;
+this.queries = builder.queries;
+  }
+
+  public String getStart() {
+return start;
+  }
+
+  public Set getQueries() {
+return queries;
+  }
+
+  public static class Builder {
+
+private String start = DEFAULT_TIME;
--- End diff --

According to the documentation [1], the start time is a required parameter, so we 
should not use our own made-up default but rather fail and ask the user to provide 
a start time.

[1] http://opentsdb.net/docs/build/html/api_http/query/index.html
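
A rough sketch of how the `Builder` could enforce this (assuming it exposes a `build()` method; the message wording is illustrative):

```
public DBQuery build() {
  // Sketch: reject a missing start time instead of substituting a default.
  if (start == null || start.trim().isEmpty()) {
    throw new IllegalArgumentException(
        "'start' is a required openTSDB query parameter; please provide a start time");
  }
  return new DBQuery(this);
}
```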


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582817
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBStoragePlugin.java
 ---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.drill.common.JSONOptions;
+import org.apache.drill.exec.server.DrillbitContext;
+import org.apache.drill.exec.store.AbstractStoragePlugin;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.openTSDB.client.services.ServiceImpl;
+import org.apache.drill.exec.store.openTSDB.schema.OpenTSDBSchemaFactory;
+
+import java.io.IOException;
+
+public class OpenTSDBStoragePlugin extends AbstractStoragePlugin {
+
+  private final DrillbitContext context;
+
+  private final OpenTSDBStoragePluginConfig engineConfig;
+  private final OpenTSDBSchemaFactory schemaFactory;
+
+  private final ServiceImpl db;
+
+  public OpenTSDBStoragePlugin(OpenTSDBStoragePluginConfig configuration, 
DrillbitContext context, String name) throws IOException {
+this.context = context;
+this.schemaFactory = new OpenTSDBSchemaFactory(this, name);
+this.engineConfig = configuration;
+this.db = new ServiceImpl("http://; + configuration.getConnection());
+  }
+
+  @Override
+  public void start() throws IOException {
--- End diff --

There is no need to override the `start` and `close` methods since they have 
the same behavior as the parent...


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133035
  
--- Diff: contrib/storage-opentsdb/src/test/resources/logback.xml ---
@@ -0,0 +1,64 @@
+
--- End diff --

Now in Drill we have a common logback-test.xml for all modules; please use it 
instead of creating a separate one.


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134878
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/schema/OpenTSDBSchemaFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.schema;
+
+import com.google.common.collect.Maps;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.drill.exec.planner.logical.CreateTableEntry;
+import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.AbstractSchema;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.SchemaFactory;
+import org.apache.drill.exec.store.openTSDB.DrillOpenTSDBTable;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePlugin;
+import org.apache.drill.exec.store.openTSDB.OpenTSDBStoragePluginConfig;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Util.getValidTableName;
+
+public class OpenTSDBSchemaFactory implements SchemaFactory {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBSchemaFactory.class);
+
+  private final String schemaName;
+  private OpenTSDBStoragePlugin plugin;
+
+  public OpenTSDBSchemaFactory(OpenTSDBStoragePlugin plugin, String 
schemaName) {
+this.plugin = plugin;
+this.schemaName = schemaName;
+  }
+
+  @Override
+  public void registerSchemas(SchemaConfig schemaConfig, SchemaPlus 
parent) throws IOException {
+OpenTSDBTables schema = new OpenTSDBTables(schemaName);
+parent.add(schemaName, schema);
+  }
+
+  class OpenTSDBTables extends AbstractSchema {
+private final Map schemaMap = 
Maps.newHashMap();
+
+OpenTSDBTables(String name) {
+  super(Collections.emptyList(), name);
+}
+
+@Override
+public AbstractSchema getSubSchema(String name) {
+  Set tables;
+  if (!schemaMap.containsKey(name)) {
+tables = plugin.getClient().getAllTableNames();
+schemaMap.put(name, new OpenTSDBDatabaseSchema(tables, this, 
name));
+  }
+  return schemaMap.get(name);
+}
+
+@Override
+public Set getSubSchemaNames() {
+  return Collections.emptySet();
+}
+
+@Override
+public Table getTable(String name) {
+  OpenTSDBScanSpec scanSpec = new OpenTSDBScanSpec(name);
+  name = getValidTableName(name);
+  try {
+return new DrillOpenTSDBTable(schemaName, plugin, new 
Schema(plugin.getClient(), name), scanSpec);
+  } catch (Exception e) {
+log.warn("Failure while retrieving openTSDB table {}", name, e);
--- End diff --

What will happen if we return null? Will the user get a meaningful error? Maybe 
we should fail here, or at least log this as an error rather than a warning?
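
For instance, a sketch of the failing variant (assuming failing fast is acceptable here; `DrillRuntimeException` is just one option):

```
@Override
public Table getTable(String name) {
  OpenTSDBScanSpec scanSpec = new OpenTSDBScanSpec(name);
  name = getValidTableName(name);
  try {
    return new DrillOpenTSDBTable(schemaName, plugin, new Schema(plugin.getClient(), name), scanSpec);
  } catch (Exception e) {
    // Sketch: log at error level and fail instead of returning null.
    log.error("Failure while retrieving openTSDB table {}", name, e);
    throw new DrillRuntimeException(
        String.format("Failure while retrieving openTSDB table %s", name), e);
  }
}
```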


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583417
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/query/DBQuery.java
 ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.query;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.DEFAULT_TIME;
+
+/**
+ * DBQuery is an abstraction of an openTSDB query,
+ * that used for extracting data from the storage system by POST request 
to DB.
+ * 
+ * An OpenTSDB query requires at least one sub query,
+ * a means of selecting which time series should be included in the result 
set.
+ */
+public class DBQuery {
+
+  /**
+   * The start time for the query. This can be a relative or absolute 
timestamp.
+   */
+  private String start;
+  /**
+   * One or more sub subQueries used to select the time series to return.
+   */
+  private Set queries;
+
+  private DBQuery(Builder builder) {
+this.start = builder.start;
+this.queries = builder.queries;
+  }
+
+  public String getStart() {
+return start;
+  }
+
+  public Set getQueries() {
+return queries;
+  }
+
+  public static class Builder {
+
+private String start = DEFAULT_TIME;
+private Set queries = new HashSet<>();
--- End diff --

So what will happen if the `queries` object stays empty? If we will eventually 
fail anyway, maybe it's better to fail in the `Builder`?
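
A sketch of the kind of check meant here (again assuming the `Builder` ends with a `build()` call):

```
public DBQuery build() {
  // Sketch: an openTSDB query needs at least one sub query, so fail fast here.
  if (queries.isEmpty()) {
    throw new IllegalStateException("An openTSDB query requires at least one sub query");
  }
  return new DBQuery(this);
}
```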


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133289
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/OpenTSDB.java
 ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client;
+
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import retrofit2.Call;
+import retrofit2.http.Body;
+import retrofit2.http.GET;
+import retrofit2.http.POST;
+import retrofit2.http.Query;
+
+import java.util.Set;
+
+/**
+ * Client for API requests to openTSDB
+ */
+public interface OpenTSDB {
+
+  /**
+   * Used for getting all metrics names from openTSDB
+   *
+   * @return Set with all tables names
+   */
+  @GET("api/suggest?type=metrics=999")
--- End diff --

Why is the max value 999? What if we have more metrics? If a limit is required, 
can we make it configurable?
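
One possible direction, sketched with Retrofit's `@Query` annotation (the parameter plumbing and return type are assumptions here):

```
// Sketch: pass the limit from the plugin configuration instead of hard-coding 999.
@GET("api/suggest?type=metrics")
Call<Set<String>> getAllTablesName(@Query("max") int maxMetricsCount);
```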


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134351
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
--- End diff --

final?


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582692
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
+}
+this.tableIterator = tables.iterator();
+  }
+
+  @Override
+  public int next() {
+try {
+  return processOpenTSDBTablesData();
+} catch (SchemaChangeException e) {
+  log.info(e.toString());
+  return 0;
+}
+  }
+
+  @Override
+  protected boolean isSkipQuery() {
--- End diff --

Do you need to override this method as you just call super inside?


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135774
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBScanSpec.java
 ---
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class OpenTSDBScanSpec {
+
+  private final String tableName;
+
+  @JsonCreator
+  public OpenTSDBScanSpec(@JsonProperty("tableName") String tableName) {
+this.tableName = tableName;
+  }
+
+  public String getTableName() {
+return tableName;
+  }
--- End diff --

If you do an explain plan for the query:
```
0: jdbc:drill:drillbit=localhost> explain plan for SELECT * FROM 
openTSDB.`(metric=mymetric.stock)`;
+--+--+
| text | json |
+--+--+
| 00-00Screen
00-01  Project(metric=[$0], aggregate tags=[$1], timestamp=[$2], 
aggregated value=[$3], symbol=[$4])
00-02Scan(groupscan=[OpenTSDBGroupScan 
[OpenTSDBScanSpec=org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec@57979642]])
 | {
  "head" : {
"version" : 1,
"generator" : {
  "type" : "ExplainHandler",
  "info" : ""
},
"type" : "APACHE_DRILL_PHYSICAL",
"options" : [ ],
"queue" : 0,
"hasResourcePlan" : false,
"resultMode" : "EXEC"
  },
  "graph" : [ {
"pop" : "openTSDB-scan",
"@id" : 2,
"openTSDBScanSpec" : {
  "tableName" : "(metric=mymetric.stock)"
},
"cost" : 0.0
  }, {
"pop" : "project",
"@id" : 1,
"exprs" : [ {
  "ref" : "`metric`",
  "expr" : "`metric`"
}, {
  "ref" : "`aggregate tags`",
  "expr" : "`aggregate tags`"
}, {
  "ref" : "`timestamp`",
  "expr" : "`timestamp`"
}, {
  "ref" : "`aggregated value`",
  "expr" : "`aggregated value`"
}, {
  "ref" : "`symbol`",
  "expr" : "`symbol`"
} ],
"child" : 2,
"outputProj" : true,
"initialAllocation" : 100,
"maxAllocation" : 100,
"cost" : 1.0
  }, {
"pop" : "screen",
"@id" : 0,
"child" : 1,
"initialAllocation" : 100,
"maxAllocation" : 100,
"cost" : 1.0
  } ]
} |
+--+--+
```
you'll see 
`OpenTSDBScanSpec=org.apache.drill.exec.store.openTSDB.OpenTSDBScanSpec@57979642`.
I guess we might want to add a `toString()` method to this class.
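
Something along these lines would make the plan readable (a sketch; the exact format is up to you):

```
@Override
public String toString() {
  return "OpenTSDBScanSpec [tableName=" + tableName + "]";
}
```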


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135479
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/DrillOpenTSDBTable.java
 ---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.Lists;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.sql.type.SqlTypeName;
+import org.apache.drill.exec.planner.logical.DynamicDrillTable;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+
+import java.util.List;
+
+public class DrillOpenTSDBTable extends DynamicDrillTable {
+
+  private final Schema schema;
+
+  public DrillOpenTSDBTable(String storageEngineName, 
OpenTSDBStoragePlugin plugin, Schema schema, OpenTSDBScanSpec scanSpec) {
+super(plugin, storageEngineName, scanSpec);
+this.schema = schema;
+  }
+
+  @Override
+  public RelDataType getRowType(final RelDataTypeFactory typeFactory) {
+List names = Lists.newArrayList();
+List types = Lists.newArrayList();
+convertToRelDataType(typeFactory, names, types);
+return typeFactory.createStructType(types, names);
+  }
+
+  private void convertToRelDataType(RelDataTypeFactory typeFactory, 
List names, List types) {
+for (ColumnDTO column : schema.getColumns()) {
+  names.add(column.getColumnName());
+  RelDataType type = getSqlTypeFromOpenTSDBType(typeFactory, 
column.getColumnType());
+  type = typeFactory.createTypeWithNullability(type, 
column.isNullable());
+  types.add(type);
+}
+  }
+
+  private RelDataType getSqlTypeFromOpenTSDBType(RelDataTypeFactory 
typeFactory, OpenTSDBTypes type) {
+switch (type) {
+  case STRING:
+return typeFactory.createSqlType(SqlTypeName.VARCHAR, 
Integer.MAX_VALUE);
+  case DOUBLE:
+return typeFactory.createSqlType(SqlTypeName.DOUBLE);
+  case TIMESTAMP:
+return typeFactory.createSqlType(SqlTypeName.TIMESTAMP);
+  default:
+throw new UnsupportedOperationException("Unsupported type.");
--- End diff --

I guess it's better to add an error message indicating which type is 
unsupported and which types are currently supported.
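
For example (a sketch; the message wording is illustrative):

```
private RelDataType getSqlTypeFromOpenTSDBType(RelDataTypeFactory typeFactory, OpenTSDBTypes type) {
  switch (type) {
    case STRING:
      return typeFactory.createSqlType(SqlTypeName.VARCHAR, Integer.MAX_VALUE);
    case DOUBLE:
      return typeFactory.createSqlType(SqlTypeName.DOUBLE);
    case TIMESTAMP:
      return typeFactory.createSqlType(SqlTypeName.TIMESTAMP);
    default:
      // Sketch: name the offending type and list the supported ones.
      throw new UnsupportedOperationException(String.format(
          "Unsupported openTSDB type %s. Supported types are: STRING, DOUBLE, TIMESTAMP.", type));
  }
}
```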


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147584018
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List getUnfixedColumnsToSchema() {
+Set tables = getAllMetricsByTags();
+List unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
+  return Collections.emptySet();
+}
+  }
+
+  private Set getTableNames() {
+try {
+  return client.getAllTablesName().execute().body();
+} catch (IOException e) {
+  e.printStackTrace();
+  return Collections.emptySet();
+}
+  }
+
+  private Set getAllTablesWithSpecialTag(DBQuery base) throws 
IOException {
+return client.getTables(base).execute().body();
+  }
+
+  private Set getAllMetricsFromDBByTags() throws IOException {
+Map tags = new HashMap<>();
+
+Query subQuery = new Query.Builder(queryParameters.get(METRIC))
+   

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135112
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBGroupScan.java
 ---
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ListMultimap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.EndpointAffinity;
+import org.apache.drill.exec.physical.base.AbstractGroupScan;
+import org.apache.drill.exec.physical.base.GroupScan;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.ScanStats;
+import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import 
org.apache.drill.exec.store.openTSDB.OpenTSDBSubScan.OpenTSDBSubScanSpec;
+import org.apache.drill.exec.store.schedule.AffinityCreator;
+import org.apache.drill.exec.store.schedule.AssignmentCreator;
+import org.apache.drill.exec.store.schedule.CompleteWork;
+import org.apache.drill.exec.store.schedule.EndpointByteMap;
+import org.apache.drill.exec.store.schedule.EndpointByteMapImpl;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+@JsonTypeName("openTSDB-scan")
+public class OpenTSDBGroupScan extends AbstractGroupScan {
+
+  private static final long DEFAULT_TABLET_SIZE = 1000;
+
+  private OpenTSDBStoragePluginConfig storagePluginConfig;
+  private OpenTSDBScanSpec openTSDBScanSpec;
+  private OpenTSDBStoragePlugin storagePlugin;
+
+  private ListMultimap assignments;
+  private List columns;
+  private List openTSDBWorkList = Lists.newArrayList();
+  private List affinities;
+
+  private boolean filterPushedDown = false;
+
+  @JsonCreator
+  public OpenTSDBGroupScan(@JsonProperty("openTSDBScanSpec") 
OpenTSDBScanSpec openTSDBScanSpec,
+   @JsonProperty("storage") 
OpenTSDBStoragePluginConfig openTSDBStoragePluginConfig,
+   @JsonProperty("columns") List 
columns,
+   @JacksonInject StoragePluginRegistry 
pluginRegistry) throws IOException, ExecutionSetupException {
+this((OpenTSDBStoragePlugin) 
pluginRegistry.getPlugin(openTSDBStoragePluginConfig), openTSDBScanSpec, 
columns);
+  }
+
+  public OpenTSDBGroupScan(OpenTSDBStoragePlugin storagePlugin,
+   OpenTSDBScanSpec scanSpec, List 
columns) {
+super((String) null);
+this.storagePlugin = storagePlugin;
+this.storagePluginConfig = storagePlugin.getConfig();
+this.openTSDBScanSpec = scanSpec;
+this.columns = columns == null || columns.size() == 0 ? ALL_COLUMNS : 
columns;
+init();
+  }
+
+  /**
+   * Private constructor, used for cloning.
+   *
+   * @param that The OpenTSDBGroupScan to clone
+   */
+  private OpenTSDBGroupScan(OpenTSDBGroupScan that) {
+super((String) null);
+this.columns = that.columns;
+this.openTSDBScanSpec = that.openTSDBScanSpec;
+this.storagePlugin = that.storagePlugin;
+this.storagePluginConfig = that.storagePluginConfig;
+this.filterPushedDown = that.filterPushedDown;
+

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582988
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBSubScan.java
 ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.StoragePluginConfig;
+import org.apache.drill.exec.physical.base.AbstractBase;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.PhysicalVisitor;
+import org.apache.drill.exec.physical.base.SubScan;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+
+@JsonTypeName("openTSDB-tablet-scan")
+public class OpenTSDBSubScan extends AbstractBase implements SubScan {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(OpenTSDBSubScan.class);
+
+  @JsonProperty
+  public final OpenTSDBStoragePluginConfig storage;
+
+  private final List columns;
+  private final OpenTSDBStoragePlugin openTSDBStoragePlugin;
+  private final List tabletScanSpecList;
+
+  @JsonCreator
+  public OpenTSDBSubScan(@JacksonInject StoragePluginRegistry registry,
+ @JsonProperty("storage") StoragePluginConfig 
storage,
+ @JsonProperty("tabletScanSpecList") 
LinkedList tabletScanSpecList,
+ @JsonProperty("columns") List 
columns) throws ExecutionSetupException {
+super((String) null);
+openTSDBStoragePlugin = (OpenTSDBStoragePlugin) 
registry.getPlugin(storage);
+this.tabletScanSpecList = tabletScanSpecList;
+this.storage = (OpenTSDBStoragePluginConfig) storage;
--- End diff --

Consider using `@JsonProperty("storage") OpenTSDBStoragePluginConfig storage` in the constructor signature instead of casting.
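For illustration, a minimal sketch of what that signature could look like (generic type parameters are assumed here, since the archive strips angle brackets; behavior is otherwise unchanged):

```java
@JsonCreator
public OpenTSDBSubScan(@JacksonInject StoragePluginRegistry registry,
                       @JsonProperty("storage") OpenTSDBStoragePluginConfig storage,
                       @JsonProperty("tabletScanSpecList") LinkedList<OpenTSDBSubScanSpec> tabletScanSpecList,
                       @JsonProperty("columns") List<SchemaPath> columns) throws ExecutionSetupException {
  super((String) null);
  // getPlugin(StoragePluginConfig) still accepts the concrete config type.
  openTSDBStoragePlugin = (OpenTSDBStoragePlugin) registry.getPlugin(storage);
  this.tabletScanSpecList = tabletScanSpecList;
  this.storage = storage; // no cast needed once the parameter is typed concretely
  this.columns = columns;
}
```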


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133068
  
--- Diff: distribution/src/assemble/bin.xml ---
@@ -99,7 +99,9 @@
 org.apache.drill.contrib:drill-format-mapr
 org.apache.drill.contrib:drill-jdbc-storage
 org.apache.drill.contrib:drill-kudu-storage
+org.apache.drill.contrib:drill-opentsdb-storage
--- End diff --

Again we have `org.apache.drill.contrib:drill-opentsdb-storage` listed twice here...


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133090
  
--- Diff: exec/java-exec/src/test/resources/drill-module.conf ---
@@ -10,7 +10,7 @@ drill: {
 packages += "org.apache.drill.exec.testing",
 packages += "org.apache.drill.exec.rpc.user.security.testing"
   }
-  test.query.printing.silent : false, 
+  test.query.printing.silent : false,
--- End diff --

I guess the change in this file is not required...


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135440
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBGroupScan.java
 ---
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ListMultimap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.EndpointAffinity;
+import org.apache.drill.exec.physical.base.AbstractGroupScan;
+import org.apache.drill.exec.physical.base.GroupScan;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.ScanStats;
+import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import 
org.apache.drill.exec.store.openTSDB.OpenTSDBSubScan.OpenTSDBSubScanSpec;
+import org.apache.drill.exec.store.schedule.AffinityCreator;
+import org.apache.drill.exec.store.schedule.AssignmentCreator;
+import org.apache.drill.exec.store.schedule.CompleteWork;
+import org.apache.drill.exec.store.schedule.EndpointByteMap;
+import org.apache.drill.exec.store.schedule.EndpointByteMapImpl;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+@JsonTypeName("openTSDB-scan")
+public class OpenTSDBGroupScan extends AbstractGroupScan {
+
+  private static final long DEFAULT_TABLET_SIZE = 1000;
+
+  private OpenTSDBStoragePluginConfig storagePluginConfig;
+  private OpenTSDBScanSpec openTSDBScanSpec;
+  private OpenTSDBStoragePlugin storagePlugin;
+
+  private ListMultimap assignments;
+  private List columns;
+  private List openTSDBWorkList = Lists.newArrayList();
+  private List affinities;
+
+  private boolean filterPushedDown = false;
+
+  @JsonCreator
+  public OpenTSDBGroupScan(@JsonProperty("openTSDBScanSpec") 
OpenTSDBScanSpec openTSDBScanSpec,
+   @JsonProperty("storage") 
OpenTSDBStoragePluginConfig openTSDBStoragePluginConfig,
+   @JsonProperty("columns") List 
columns,
--- End diff --

How is this class deserialized if you don't have getters for `storage` and `columns`?
One of the options to test serde is to create a physical plan for your query and then submit that physical plan (a similar approach was used in https://github.com/apache/drill/pull/1014).
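For example (just a sketch; the JSON property names are assumed to mirror the `@JsonCreator` parameters above), serde-friendly getters on the group scan could look like:

```java
// Jackson uses these when the group scan is serialized into a physical plan.
@JsonProperty("openTSDBScanSpec")
public OpenTSDBScanSpec getOpenTSDBScanSpec() {
  return openTSDBScanSpec;
}

@JsonProperty("storage")
public OpenTSDBStoragePluginConfig getStorageConfig() {
  return storagePluginConfig;
}

@JsonProperty("columns")
public List<SchemaPath> getColumns() {
  return columns;
}
```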


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146135302
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBRecordReader.java
 ---
@@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.ValidationError;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MajorType;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.expr.TypeHelper;
+import org.apache.drill.exec.ops.OperatorContext;
+import org.apache.drill.exec.physical.impl.OutputMutator;
+import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.exec.store.AbstractRecordReader;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Schema;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.apache.drill.exec.vector.NullableFloat8Vector;
+import org.apache.drill.exec.vector.NullableTimeStampVector;
+import org.apache.drill.exec.vector.NullableVarCharVector;
+import org.apache.drill.exec.vector.ValueVector;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+public class OpenTSDBRecordReader extends AbstractRecordReader {
+
+  private static final Logger log = 
LoggerFactory.getLogger(OpenTSDBRecordReader.class);
+
+  private static final Map TYPES;
+
+  private Service db;
+
+  private Iterator tableIterator;
+  private OutputMutator output;
+  private ImmutableList projectedCols;
+  private OpenTSDBSubScan.OpenTSDBSubScanSpec subScanSpec;
+
+  OpenTSDBRecordReader(Service client, OpenTSDBSubScan.OpenTSDBSubScanSpec 
subScanSpec,
+   List projectedColumns) throws 
IOException {
+setColumns(projectedColumns);
+this.db = client;
+this.subScanSpec = subScanSpec;
+db.setupQueryParameters(subScanSpec.getTableName());
+log.debug("Scan spec: {}", subScanSpec);
+  }
+
+  @Override
+  public void setup(OperatorContext context, OutputMutator output) throws 
ExecutionSetupException {
+this.output = output;
+Set tables = db.getTablesFromDB();
+if (tables == null || tables.isEmpty()) {
+  throw new ValidationError(String.format("Table '%s' not found or 
it's empty", subScanSpec.getTableName()));
+}
+this.tableIterator = tables.iterator();
+  }
+
+  @Override
+  public int next() {
+try {
+  return processOpenTSDBTablesData();
+} catch (SchemaChangeException e) {
+  log.info(e.toString());
+  return 0;
+}
+  }
+
+  @Override
+  protected boolean isSkipQuery() {
+return super.isSkipQuery();
+  }
+
+  @Override
+  public void close() throws Exception {
+  }
+
+  static {
+TYPES = ImmutableMap.builder()
+.put(OpenTSDBTypes.STRING, MinorType.VARCHAR)
+.put(OpenTSDBTypes.DOUBLE, 

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134207
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List getUnfixedColumnsToSchema() {
+Set tables = getAllMetricsByTags();
+List unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
+  return Collections.emptySet();
+}
+  }
+
+  private Set getTableNames() {
+try {
+  return client.getAllTablesName().execute().body();
+} catch (IOException e) {
+  e.printStackTrace();
+  return Collections.emptySet();
+}
+  }
+
+  private Set getAllTablesWithSpecialTag(DBQuery base) throws 
IOException {
+return client.getTables(base).execute().body();
+  }
+
+  private Set getAllMetricsFromDBByTags() throws IOException {
+Map tags = new HashMap<>();
+
+Query subQuery = new Query.Builder(queryParameters.get(METRIC))
+   

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147582378
  
--- Diff: contrib/storage-opentsdb/README.md ---
@@ -0,0 +1 @@
+# drill-storage-openTSDB
--- End diff --

It would be nice if you added some information on how to query the data, etc., similar to what is provided in the pull request description.


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133808
  
--- Diff: 
contrib/storage-opentsdb/src/main/resources/bootstrap-storage-plugins.json ---
@@ -0,0 +1,9 @@
+{
+  "storage": {
+openTSDB: {
+  type: "openTSDB",
+  connection: "1.2.3.4:4242",
+  enabled: true
--- End diff --

1. Can the user have a secure connection?
2. I guess it would be better if this storage plugin were disabled by default (see the sketch below).
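For instance, a sketch of the bootstrap entry shipped disabled (same placeholder address and key style as in the diff above):

```
{
  "storage": {
    openTSDB: {
      type: "openTSDB",
      connection: "1.2.3.4:4242",
      enabled: false
    }
  }
}
```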


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146134226
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/client/services/ServiceImpl.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB.client.services;
+
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDB;
+import org.apache.drill.exec.store.openTSDB.client.OpenTSDBTypes;
+import org.apache.drill.exec.store.openTSDB.client.Service;
+import org.apache.drill.exec.store.openTSDB.client.query.DBQuery;
+import org.apache.drill.exec.store.openTSDB.client.query.Query;
+import org.apache.drill.exec.store.openTSDB.dto.ColumnDTO;
+import org.apache.drill.exec.store.openTSDB.dto.MetricDTO;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import retrofit2.Retrofit;
+import retrofit2.converter.jackson.JacksonConverterFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.drill.exec.store.openTSDB.Constants.AGGREGATOR;
+import static org.apache.drill.exec.store.openTSDB.Constants.DOWNSAMPLE;
+import static org.apache.drill.exec.store.openTSDB.Constants.METRIC;
+import static org.apache.drill.exec.store.openTSDB.Util.isTableNameValid;
+import static org.apache.drill.exec.store.openTSDB.Util.parseFROMRowData;
+
+public class ServiceImpl implements Service {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(ServiceImpl.class);
+
+  private OpenTSDB client;
+  private Map queryParameters;
+
+  public ServiceImpl(String connectionURL) {
+this.client = new Retrofit.Builder()
+.baseUrl(connectionURL)
+.addConverterFactory(JacksonConverterFactory.create())
+.build()
+.create(OpenTSDB.class);
+  }
+
+  @Override
+  public Set getTablesFromDB() {
+return getAllMetricsByTags();
+  }
+
+  @Override
+  public Set getAllTableNames() {
+return getTableNames();
+  }
+
+  @Override
+  public List getUnfixedColumnsToSchema() {
+Set tables = getAllMetricsByTags();
+List unfixedColumns = new ArrayList<>();
+
+for (MetricDTO table : tables) {
+  for (String tag : table.getTags().keySet()) {
+ColumnDTO tmp = new ColumnDTO(tag, OpenTSDBTypes.STRING);
+if (!unfixedColumns.contains(tmp)) {
+  unfixedColumns.add(tmp);
+}
+  }
+}
+return unfixedColumns;
+  }
+
+  @Override
+  public void setupQueryParameters(String rowData) {
+if (!isTableNameValid(rowData)) {
+  this.queryParameters = parseFROMRowData(rowData);
+} else {
+  Map params = new HashMap<>();
+  params.put(METRIC, rowData);
+  this.queryParameters = params;
+}
+  }
+
+  private Set getAllMetricsByTags() {
+try {
+  return getAllMetricsFromDBByTags();
+} catch (IOException e) {
+  logIOException(e);
+  return Collections.emptySet();
+}
+  }
+
+  private Set getTableNames() {
+try {
+  return client.getAllTablesName().execute().body();
+} catch (IOException e) {
+  e.printStackTrace();
+  return Collections.emptySet();
--- End diff --

1. Why do we return an empty collection on error?
2. Why `e.printStackTrace();` instead of using the class's slf4j logger? (See the sketch below.)
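One possible alternative (a sketch only, assuming the stripped generic is `Set<String>` and that `org.apache.drill.common.exceptions.UserException` is imported) is to surface the failure instead of swallowing it:

```java
private Set<String> getTableNames() {
  try {
    return client.getAllTablesName().execute().body();
  } catch (IOException e) {
    // Propagate the failure instead of hiding it behind an empty result set;
    // build(log) also logs the exception through the class's slf4j logger.
    throw UserException.connectionError(e)
        .message("Cannot list table names from openTSDB")
        .build(log);
  }
}
```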


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r147583175
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/OpenTSDBSubScan.java
 ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonTypeName;
+import com.google.common.base.Preconditions;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.StoragePluginConfig;
+import org.apache.drill.exec.physical.base.AbstractBase;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.base.PhysicalVisitor;
+import org.apache.drill.exec.physical.base.SubScan;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+
+@JsonTypeName("openTSDB-tablet-scan")
+public class OpenTSDBSubScan extends AbstractBase implements SubScan {
+
+  private static final Logger log =
+  LoggerFactory.getLogger(OpenTSDBSubScan.class);
+
+  @JsonProperty
+  public final OpenTSDBStoragePluginConfig storage;
+
+  private final List columns;
+  private final OpenTSDBStoragePlugin openTSDBStoragePlugin;
+  private final List tabletScanSpecList;
+
+  @JsonCreator
+  public OpenTSDBSubScan(@JacksonInject StoragePluginRegistry registry,
+ @JsonProperty("storage") StoragePluginConfig 
storage,
+ @JsonProperty("tabletScanSpecList") 
LinkedList tabletScanSpecList,
+ @JsonProperty("columns") List 
columns) throws ExecutionSetupException {
+super((String) null);
+openTSDBStoragePlugin = (OpenTSDBStoragePlugin) 
registry.getPlugin(storage);
+this.tabletScanSpecList = tabletScanSpecList;
+this.storage = (OpenTSDBStoragePluginConfig) storage;
+this.columns = columns;
+  }
+
+  public OpenTSDBSubScan(OpenTSDBStoragePlugin plugin, 
OpenTSDBStoragePluginConfig config,
+ List tabletInfoList, 
List columns) {
+super((String) null);
+openTSDBStoragePlugin = plugin;
+storage = config;
+this.tabletScanSpecList = tabletInfoList;
+this.columns = columns;
+  }
+
+  @Override
+  public int getOperatorType() {
+return 0;
+  }
+
+  @Override
+  public boolean isExecutable() {
+return false;
+  }
+
+  @Override
+  public PhysicalOperator getNewWithChildren(List 
children) throws ExecutionSetupException {
+Preconditions.checkArgument(children.isEmpty());
+return new OpenTSDBSubScan(openTSDBStoragePlugin, storage, 
tabletScanSpecList, columns);
+  }
+
+  @Override
+  public Iterator iterator() {
+return Collections.emptyIterator();
+  }
+
+  @Override
+  public  T accept(PhysicalVisitor 
physicalVisitor, X value) throws E {
+return physicalVisitor.visitSubScan(this, value);
+  }
+
+  public List getColumns() {
+return columns;
+  }
+
+  public List getTabletScanSpecList() {
+return tabletScanSpecList;
+  }
+
+  @JsonIgnore
+  public OpenTSDBStoragePlugin getStorageEngine() {
+return openTSDBStoragePlugin;
+  }
+
+  @JsonIgnore
+  public OpenTSDBStoragePluginConfig getStorageConfig() {
--- End diff --

Please remove 

[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133164
  
--- Diff: 
contrib/storage-opentsdb/src/main/java/org/apache/drill/exec/store/openTSDB/Constants.java
 ---
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.openTSDB;
+
+public class Constants {
--- End diff --

Should `Constants` be an interface instead of a class?
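For example, something along these lines (a sketch; the constant names come from the static imports seen elsewhere in this review, while the values are placeholders):

```java
public interface Constants {
  // Interface fields are implicitly public static final.
  // Placeholder values; keep whatever the current class defines.
  String METRIC = "metric";
  String AGGREGATOR = "aggregator";
  String DOWNSAMPLE = "downsample";
}
```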


---


[GitHub] drill pull request #774: DRILL-5337: OpenTSDB storage plugin

2017-10-30 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/774#discussion_r146133055
  
--- Diff: distribution/pom.xml ---
@@ -190,11 +190,21 @@
 
 
   org.apache.drill.contrib
+  drill-opentsdb-storage
--- End diff --

Why have we added `drill-opentsdb-storage` twice in this pom.xml?


---


[GitHub] drill issue #1013: DRILL-5910: Logging exception when custom AuthenticatorFa...

2017-10-30 Thread vladimirtkach
Github user vladimirtkach commented on the issue:

https://github.com/apache/drill/pull/1013
  
@vrozov made the changes, please review.


---


[GitHub] drill pull request #1013: DRILL-5910: Add factory name to ClassNotFoundExcep...

2017-10-30 Thread vladimirtkach
Github user vladimirtkach commented on a diff in the pull request:

https://github.com/apache/drill/pull/1013#discussion_r147655905
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/ClientAuthenticatorProvider.java
 ---
@@ -57,17 +57,17 @@ private ClientAuthenticatorProvider() {
 
 // then, custom factories
 if (customFactories != null) {
-  try {
-final String[] factories = customFactories.split(",");
-for (final String factory : factories) {
+  final String[] factories = customFactories.split(",");
+  for (final String factory : factories) {
+try {
   final Class clazz = Class.forName(factory);
   if (AuthenticatorFactory.class.isAssignableFrom(clazz)) {
 final AuthenticatorFactory instance = (AuthenticatorFactory) 
clazz.newInstance();
 authFactories.put(instance.getSimpleName(), instance);
   }
+} catch (final ClassNotFoundException | IllegalAccessException | 
InstantiationException e) {
--- End diff --

A moot point, but I think that catching three specific exceptions is better than one general exception.
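For reference, the "one general" variant under discussion would collapse the multi-catch into the shared supertype, roughly like this (a sketch based on the diff above; generics re-added, and the catch body elided because it is not visible in the quoted diff):

```java
for (final String factory : factories) {
  try {
    final Class<?> clazz = Class.forName(factory);
    if (AuthenticatorFactory.class.isAssignableFrom(clazz)) {
      final AuthenticatorFactory instance = (AuthenticatorFactory) clazz.newInstance();
      authFactories.put(instance.getSimpleName(), instance);
    }
  } catch (final ReflectiveOperationException e) {
    // ClassNotFoundException, InstantiationException and IllegalAccessException
    // all extend ReflectiveOperationException (Java 7+), so one catch covers all three.
    // ... handle exactly as in the multi-catch version above.
  }
}
```

Whichever form is used, moving the try inside the loop (as the diff does) is what allows the failing factory's name to be reported.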


---