Re: Does avatica support a date type?

2016-04-07 Thread F21
Awesome! Thanks for the link to TypedValue.java, it was extremely useful 
:) I've also assigned the JIRA to you :)


On 8/04/2016 12:26 AM, Josh Elser wrote:

Yep, unix timestamp in the `number_value` attribute should be correct.

If you have more questions about how to serialize TypedValues before 
I get some more docs in place, this method[1] should help. It was a 
recent consolidation and should be pretty easy to follow (even 
without knowing Java).


Thanks for creating the JIRA issue. I'll go ahead and give you the 
karma to have a bit more control in assigning issues in the CALCITE 
project while I'm at it.


[1] 
https://github.com/apache/calcite/blob/master/avatica/core/src/main/java/org/apache/calcite/avatica/remote/TypedValue.java#L480


F21 wrote:

Sure thing! In the meantime, can you answer my question regarding
whether setting the type to JAVA_SQL_TIMESTAMP and setting number_value
to the unix timestamp is correct?

On 7/04/2016 8:36 AM, Josh Elser wrote:

Ok. You got me thinking that it would be good to document how each
supported type in TypedValue is serialized into that message (as it
isn't necessarily obvious how the code expects it).

Want to file a JIRA issue and assign it to me?

F21 wrote:

Hey Josh,

That was a great explanation, thanks! And yes, I am using 
protobufs. :)

So in the case of a datetime, should I set the Rep to
JAVA_SQL_TIMESTAMP and set the number_value field to the unix timestamp
equivalent of the datetime?

I am not familiar with Java, but it would also be nice if the docs 
could include the format of the JAVA_SQL_* types with an example.

Thanks again!

On 7/04/2016 12:37 AM, Josh Elser wrote:

Also, if you have suggestions on how you'd like to see the
documentation expanded, please do provide them. I can try to expand,
but if I don't have a focus on what is actually lacking, it's hard to
be effective.

Josh Elser wrote:
If you're using Protobuf (as I think you are), you don't need to worry
about the conversion from the Protobuf TypedValue message back into the
Avatica class TypedValue. This is handled implicitly by Avatica itself.
Just make sure that the Rep you provide matches the serialization.

I'm not familiar with golang's SQL interface, so I'm not sure what they
define as a "datetime" here. If you have more specifics, I can try to
point you in the right direction.

AFAIK, there isn't any difference in implementation between
PRIMITIVE_FOO and FOO (there are a few variants of this).

The difference between Long and BigInteger would be the resulting Java
type created for the value (a Long or a BigInteger). Sorry if this is
cyclic logic :)

Yes, the JAVA_* types are used to support the array of
date/time/datetime data types.

F21 wrote:
I need to send some TypedValues to the avatica server (phoenix query
server) when executing a statement.

According to
https://calcite.apache.org/docs/avatica_protobuf_reference.html#typedvalue,
I need to set a Type for each value. I noticed that the list of Reps
here (https://calcite.apache.org/docs/avatica_protobuf_reference.html#rep)
supports things like JAVA_SQL_TIME, JAVA_SQL_TIMESTAMP etc, however it's
unclear which ones are valid values for a TypedValue.

In my case, the golang sql interface provides data for parameter
binding that might be a time.Time (which is essentially a datetime). In
this case, what should my TypedValue look like?

Also, I noticed the Rep enum has a few things that looked similar, but
might mean different things. It would be nice to have documentation to
clarify. For example:
- What's the difference between PRIMITIVE_BOOLEAN and BOOLEAN?
- Is there any difference between LONG and BIG_INTEGER?
- Are the JAVA_SQL_* and JAVA_UTIL_* types currently being used?

Thanks!











Re: Does avatica support a date type?

2016-04-06 Thread F21

Hey Josh,

That was a great explanation, thanks! And yes, I am using protobufs. :) 
So in the case of a date time, should I set the Rep to 
JAVA_SQL_TIMESTAMP and set the number_value field to the unix timestamp 
equivalent of the datetime?


I am not familiar with Java, but it would also be nice if the docs could 
include the format JAVA_SQL_* with an example.


Thanks again!

On 7/04/2016 12:37 AM, Josh Elser wrote:
Also, if you have suggestions on how you'd like to see the 
documentation expanded, please do provide them. I can try to expand, 
but if I don't have a focus on what is actually lacking, it's hard to 
be effective.








Re: Does avatica support a date type?

2016-04-06 Thread F21
Sure thing! In the mean time, can you answer my question regarding 
whether setting the type to JAVA_SQL_TIMESTAMP and setting number_value 
to the unix timestamp is correct?


On 7/04/2016 8:36 AM, Josh Elser wrote:
Ok. You got me thinking that it would be good to document how each 
support type in TypedValue is serialized into that message (as it 
isn't necessarily obvious how the code expects it).


Want to file a JIRA issue and assign it to me?










Re: Does avatica support a date type?

2016-04-06 Thread F21
@ Josh I just created CALCITE-1192 but was unable to assign someone to 
it, so you might have to assign yourself.


Thanks again!

On 7/04/2016 8:36 AM, Josh Elser wrote:
Ok. You got me thinking that it would be good to document how each 
support type in TypedValue is serialized into that message (as it 
isn't necessarily obvious how the code expects it).


Want to file a JIRA issue and assign it to me?










Is there a list of valid properties for avatica's OpenConnectionRequest info map?

2016-04-11 Thread F21
The protobuf documentation for avatica says we can send a map along with 
the connection id when using an OpenConnectionRequest: 
https://calcite.apache.org/docs/avatica_protobuf_reference.html#openconnectionrequest


Is there a list of keys that can go into the map? Perhaps this should be 
better documented.


I tried sending a map (shown in JSON, but I was sending the golang 
protobuf equivalent):

{
  "auto_commit":           "true",
  "has_auto_commit":       "true",
  "transaction_isolation": "8"
}

However, the connection was not opened with auto_commit set to true.



Re: Is there a list of valid properties for avatica's OpenConnectionRequest info map?

2016-04-12 Thread F21

Thanks, James! That was very helpful.

For those interested, the PhoenixRuntime.java file is here: 
https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java


In my case, I used (JSON equivalent):

{
  "AutoCommit":  "true",
  "Consistency": "8"
}
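In Go, the info map on an OpenConnectionRequest is just a map[string]string. A minimal sketch, using the Phoenix property names from this thread (which come from PhoenixRuntime.java, not from Avatica's own snake_case protobuf field names):

```go
package main

import "fmt"

func main() {
	// Keys are driver-specific connection properties, as passed to
	// DriverManager.getConnection() on the server side. For Phoenix,
	// "AutoCommit" is the property that controls auto-commit, not the
	// "auto_commit" field name from Avatica's protobuf messages.
	info := map[string]string{
		"AutoCommit":  "true",
		"Consistency": "8",
	}
	fmt.Println(info["AutoCommit"]) // true
}
```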



On 12/04/2016 4:29 PM, James Taylor wrote:

This map corresponds to the properties in the DriverManager.getConnection()
call [1]. The possible values are dependent on the particular driver with
which you're interacting. For Phoenix, these can be HBase connection
property/values (if a connection being opened needs to have different
values than the default configuration), or in support of other features
such as multi-tenancy (TenantId), the max time stamp for flashback queries
(CurrentSCN), and a few other not too well documented options (see
PhoenixRuntime.java).

Thanks,
James


[1]
https://docs.oracle.com/javase/7/docs/api/java/sql/DriverManager.html#getConnection(java.lang.String,%20java.util.Properties)







Does avatica support a date type?

2016-04-05 Thread F21
I need to send some TypedValues to the avatica server (phoenix query 
server) when executing a statement.


According to 
https://calcite.apache.org/docs/avatica_protobuf_reference.html#typedvalue, 
I need to set a Type for each value. I noticed that the list of Reps 
here (https://calcite.apache.org/docs/avatica_protobuf_reference.html#rep) 
supports things like JAVA_SQL_TIME, JAVA_SQL_TIMESTAMP etc, however it's 
unclear which ones are valid values for a TypedValue.


In my case, the golang sql interface provides data for parameter binding 
that might be a time.Time (which is essentially a datetime). In this 
case, what should my TypedValue look like?


Also, I noticed the Rep enum has a few things that looked similar, but 
might mean different things. It would be nice to have documentation to 
clarify. For example:

- What's the difference between PRIMITIVE_BOOLEAN and BOOLEAN?
- Is there any difference between LONG and BIG_INTEGER?
- Are the JAVA_SQL_* and JAVA_UTIL_* types currently being used?

Thanks!




Re: Starting a transaction with avatica and a few other things

2016-03-27 Thread F21

Hi James,

Thanks for the quick reply. The docs do talk about how to use 
transactions with phoenix, but don't seem to answer my questions 
regarding implementing transactions for a phoenix query server client.


Cheers!

On 27/03/2016 5:04 PM, James Taylor wrote:

Please read https://phoenix.apache.org/transactions.html and let us know if
it doesn't answer your questions.

Thanks,
James

On Sat, Mar 26, 2016 at 10:00 PM, F21 <f21.gro...@gmail.com> wrote:


Hi guys,

As I posted on the Phoenix list a few days ago, I am working on a golang
client for the phoenix query service (which uses avatica).

In regards to starting a transaction, I see that the protobuf reference
contains Commit and Rollback requests, but there isn't any Begin request.

Is sending a ConnectionSync request and setting autoCommit to false the
correct way to start a transaction?

Also, what is the state of autoCommit when I send an Open request to the
server?

Finally, the Open request allows me to send a map called info. What is
supposed to go into this map?

Cheers!
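The ConnectionSync approach F21 describes can be sketched as follows. The structs here are hypothetical, simplified stand-ins for the messages generated from Avatica's requests.proto; the field names are illustrative, not taken from the compiled bindings:

```go
package main

import "fmt"

// Simplified stand-ins for the generated protobuf messages.
type ConnectionProperties struct {
	HasAutoCommit bool // signals that the AutoCommit field carries a meaningful value
	AutoCommit    bool
}

type ConnectionSyncRequest struct {
	ConnectionId string
	ConnProps    ConnectionProperties
}

func main() {
	// Turn auto-commit off so subsequent statements run inside a
	// transaction, later finished by a CommitRequest or RollbackRequest.
	req := ConnectionSyncRequest{
		ConnectionId: "conn-1",
		ConnProps: ConnectionProperties{
			HasAutoCommit: true,
			AutoCommit:    false,
		},
	}
	fmt.Println(req.ConnProps.AutoCommit) // false
}
```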





Re: Starting a transaction with avatica and a few other things

2016-04-03 Thread F21

Hey Josh,

Thanks for your examples, those are pretty useful. I have one question 
about offsets. In your example, you started with an offset of 0 and 
added to the offset as you fetched more rows from the server. Does the 
first frame from a prepareAndExecute result always have an offset of 0? 
From my limited testing, it appears the offsets returned by the server 
never increase.


Thanks!

On 29/03/2016 1:43 AM, Josh Elser wrote:
If you're still trying to wrap your head around how to interact with 
the Avatica server (PQS in your case), 
https://issues.apache.org/jira/browse/CALCITE-1081 might be of some help.


Specifically, I tried to outline what kind of requests you might send 
for some basic operations 
http://people.apache.org/~elserj/calcite/docs/avatica_example_client.html


LMK if these are helpful (or not) and what kind of additional 
documentation/instructions would be useful.


James Taylor wrote:

That documentation is still the same. Going through the query server
doesn't change anything. The important quote there about starting a
transaction:

A transaction is started implicitly through the execution of a
statement on a transactional table and then finished through either a
commit or rollback.










Re: Starting a transaction with avatica and a few other things

2016-04-04 Thread F21
Ah, makes sense. Also, should the offset be incremented on a per-frame or 
a per-row basis? Regarding the offsets being 0, I opened CALCITE-1181, 
so hopefully we can get that sorted.


On 4/04/2016 11:57 PM, Josh Elser wrote:
Hrm, I'm not sure off the top of my head how the offset returned by 
the server operates. If it's not returning the offset for the current 
"batch" of results, that's an accidental omission (but a client can 
easily track the offset, which would explain how the omission happened 
in the first place).












Re: Starting a transaction with avatica and a few other things

2016-04-04 Thread F21

I think I am misunderstanding your comment from CALCITE-1181.

- If I set maxRowCount to 100 and the query returns 250 entries, I 
should get 1 frame containing 100 rows back, with the option to fetch 
more rows (the number of rows being subject to the maxRowCount of the 
fetch request), right?

- An ExecuteResponse (from a PrepareAndExecuteRequest) can contain 
multiple ResultSets, with each ResultSet containing 1 frame. Under what 
circumstances will you get multiple ResultSets in an ExecuteResponse? 
From my limited testing, I always get 1 ResultSet in my ExecuteResponses.


Thanks again! :)

On 5/04/2016 8:54 AM, Josh Elser wrote:

Oh, cool. Didn't realize that was you :). Much appreciated.

I'm not sure I understand your question. A Frame is a collection of 
rows (in the context of a SELECT, anyways). In the traditional JDBC 
sense, a ResultSet is backed by many Frames.


The common case is the following. Given some batch (frame) size of 100 
entries, you would see the following for a query returning 250 entries:


PrepareAndExecute(maxRowCount=100) => Frame[0-100, done=false],
Fetch(offset=100, maxRowCount=100) => Frame[100-200, done=false],
Fetch(offset=200, maxRowCount=100) => Frame[200-250, done=true].

Does that make sense?
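Josh's sequence can be sketched as a client-side fetch loop. This is a self-contained simulation, not Avatica's generated Go bindings: Frame is a stand-in for the protobuf message, and fetch() stands in for the server answering PrepareAndExecute/Fetch requests for a 250-row result:

```go
package main

import "fmt"

// Frame is a simplified stand-in for Avatica's Frame message: a batch of
// rows plus a flag saying whether this is the final batch.
type Frame struct {
	Rows []int
	Done bool
}

// fetch simulates a server holding 250 rows, returning batches of up to
// maxRowCount rows starting at offset.
func fetch(offset, maxRowCount int) Frame {
	const total = 250
	end := offset + maxRowCount
	if end > total {
		end = total
	}
	return Frame{Rows: make([]int, end-offset), Done: end == total}
}

func main() {
	// The client tracks the offset itself, exactly as in Josh's example:
	// PrepareAndExecute(maxRowCount=100) => Frame[0-100, done=false]
	// Fetch(offset=100, maxRowCount=100) => Frame[100-200, done=false]
	// Fetch(offset=200, maxRowCount=100) => Frame[200-250, done=true]
	offset, total := 0, 0
	for {
		frame := fetch(offset, 100)
		total += len(frame.Rows)
		offset += len(frame.Rows)
		if frame.Done {
			break
		}
	}
	fmt.Println(total) // 250
}
```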













How do I send binary data to avatica?

2016-04-13 Thread F21
As mentioned on the list, I am currently working on a golang 
client/driver for avatica, using protobufs for serialization.


I've got all datatypes working, except for BINARY and VARBINARY.

My test table looks like this:

CREATE TABLE test (int INTEGER PRIMARY KEY, bin BINARY(20), varbin 
VARBINARY) TRANSACTIONAL=false


In go, we have a datatype called a slice of bytes ([]byte) which is 
essentially an array of bytes (8-bits each).


When I generated the golang protobufs using the .proto files, this is 
the definition of TypedValue:

type TypedValue struct {
	Type        Rep     `protobuf:"varint,1,opt,name=type,enum=Rep" json:"type,omitempty"`
	BoolValue   bool    `protobuf:"varint,2,opt,name=bool_value,json=boolValue" json:"bool_value,omitempty"`
	StringValue string  `protobuf:"bytes,3,opt,name=string_value,json=stringValue" json:"string_value,omitempty"`
	NumberValue int64   `protobuf:"zigzag64,4,opt,name=number_value,json=numberValue" json:"number_value,omitempty"`
	BytesValues []byte  `protobuf:"bytes,5,opt,name=bytes_values,json=bytesValues,proto3" json:"bytes_values,omitempty"`
	DoubleValue float64 `protobuf:"fixed64,6,opt,name=double_value,json=doubleValue" json:"double_value,omitempty"`
	Null        bool    `protobuf:"varint,7,opt,name=null" json:"null,omitempty"`
}

I am currently creating a TypedValue that looks like this when sending 
binary data:

{BYTE_STRING false  0 [116 101 115 116] 0 false}

So, the Rep is set to BYTE_STRING and BytesValues is populated with the 
string "test" in bytes (it's shown here as decimal because I printed it).


The problem is that even though the insert executes properly, if I look 
at the row using SquirrelSQL, both the BINARY and VARBINARY columns are 
.


Is BYTE_STRING the correct rep type for sending binary data? Do I also 
have to encode my bytes in a special format?
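For reference, the TypedValue F21 describes can be reproduced with a small self-contained sketch. The types below mirror the generated struct quoted above but with the protobuf tags omitted, and the Rep enum value's number is illustrative (not taken from the .proto):

```go
package main

import "fmt"

// Rep stands in for the generated Rep enum; the numeric value here is
// illustrative only.
type Rep int32

const BYTE_STRING Rep = 24

// TypedValue mirrors the generated struct quoted above (tags omitted,
// only the fields relevant to binary data shown).
type TypedValue struct {
	Type        Rep
	BytesValues []byte
}

func main() {
	// Rep set to BYTE_STRING, BytesValues set to the bytes of "test".
	tv := TypedValue{
		Type:        BYTE_STRING,
		BytesValues: []byte("test"),
	}
	fmt.Println(tv.BytesValues) // [116 101 115 116]
}
```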


Thanks!


Re: Golang driver for Phoenix and Avatica available

2016-05-17 Thread F21

Thanks for opening the issue on JIRA, Julian.

Let me know if there's anything I can do to speed up the process. Will 
Avatica be spun out as its own project?


Francis

On 18/05/2016 1:06 PM, Julian Hyde wrote:

It sounds as if there is general approval for this. I have logged 
https://issues.apache.org/jira/browse/CALCITE-1240 
<https://issues.apache.org/jira/browse/CALCITE-1240> to track.

Julian


On May 17, 2016, at 8:00 PM, Josh Elser <josh.el...@gmail.com> wrote:

Big +1 from me.

I think if you're amenable to it, Francis, I'm more than willing to help make this a 
"formal" part of Avatica!

Congrats and great work on what you have done already!

F21 wrote:

That would be really great! I think that would help a lot of the
Phoenix drivers currently available support avatica generically. It
would also reduce the burden on driver maintainers of maintaining a
list of errors.

On 18/05/2016 3:48 AM, Julian Hyde wrote:

Would it help if we added a function to Avatica’s API, so a client
could ask for that map when connecting? It would allow the driver to
work against multiple servers, and in the phoenix-only case it would
mean that you wouldn’t have to upgrade the client driver if you
upgraded the phoenix server.

(We’re still talking hypotheticals. I would like to hear more of a
consensus from the community before we include this in Avatica.)

Julian



On May 16, 2016, at 10:57 PM, F21 <f21.gro...@gmail.com> wrote:

Hey Julian,

The code should be useful for avatica in general. The only phoenix
specific bit is the map of Phoenix error codes here:
https://github.com/Boostport/avatica/blob/master/errors.go#L77

I think other database backends can have their own maps. It might
also be nice to be able to interrogate the avatica server to see if
the backend is Phoenix or some other database, and then switch the
errors map accordingly.

Francis

On 17/05/2016 3:54 PM, Julian Hyde wrote:

Excellent! Thanks for doing this.

I haven't yet looked at the code to see how much is specific to
Phoenix and whether it would work against any Avatica provider. If
it is generic, and if you are amenable, there might be a place for
it in the Avatica project.

What do others think?

Julian


On May 16, 2016, at 10:43 PM, F21 <f21.gro...@gmail.com> wrote:

Hi all,

I have just open sourced a golang driver for Phoenix and Avatica.

The code is licensed using the Apache 2 License and is available
here: https://github.com/Boostport/avatica

Contributions are very welcome :)

Cheers,

Francis







Golang driver for Phoenix and Avatica available

2016-05-16 Thread F21

Hi all,

I have just open sourced a golang driver for Phoenix and Avatica.

The code is licensed using the Apache 2 License and is available here: 
https://github.com/Boostport/avatica


Contributions are very welcome :)

Cheers,

Francis



Re: Avatica error codes

2016-05-13 Thread F21

On 13/05/2016 5:30 PM, Julian Hyde wrote:

Avatica’s ErrorResponse currently has the same information as JDBC (String 
sqlState, int errorCode, String errorMessage). I don’t think it’s wise to add 
an errorName field, because it would be difficult to propagate it with a JDBC 
SQLException.

If the error being thrown has a sqlState, then the sqlState code is sufficient. 
You can look up the name, e.g.

   SqlState.BY_CODE.get("01004").name()

evaluates to “WARNING_STRING_DATA_RIGHT_TRUNCATION”.

If the error being thrown is not a standard sqlState, I don’t think there is a 
problem putting the error name in the sqlState field. SQLException doesn’t 
check whether the code is valid, or even that it is 5 chars long.

Julian


Ah that makes sense with the ErrorResponse being tied to SQLException. 
Unfortunately, as a client of the avatica server, I am unable to call 
`SqlState.BY_CODE.get("01004").name()` and will still need to maintain 
my own map of error codes to the exception name.


Maybe it's possible to get an AST representation of 
https://github.com/apache/phoenix/blob/a0504fba12363eaa27ea3fd224671e92cb11a468/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java 
to manipulate it into a language-agnostic format, but I think that's 
over-engineering things for now.


The best solution is probably to maintain my own map and get a diff of 
SQLExceptionCode.java on every release and add/remove error codes.
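For a non-JVM client, that map can be a plain lookup table. A minimal Go sketch of the idea (the 01004 name comes from Julian's example above; the Phoenix entry and its name are illustrative assumptions, not taken from SQLExceptionCode.java):

```go
package main

import "fmt"

// sqlStateNames is a client-side substitute for SqlState.BY_CODE:
// a map from SQLSTATE codes to symbolic names. The entries here are
// illustrative, not a complete list.
var sqlStateNames = map[string]string{
	"01004": "WARNING_STRING_DATA_RIGHT_TRUNCATION",
	"42900": "TRANSACTION_CONFLICT_EXCEPTION", // hypothetical Phoenix name
}

// nameForSQLState looks up the symbolic name for a SQLSTATE code,
// falling back to "UNKNOWN" for codes not in the map.
func nameForSQLState(code string) string {
	if name, ok := sqlStateNames[code]; ok {
		return name
	}
	return "UNKNOWN"
}

func main() {
	fmt.Println(nameForSQLState("01004")) // WARNING_STRING_DATA_RIGHT_TRUNCATION
}
```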




Avatica error codes

2016-05-05 Thread F21
I recently found more time to work on the golang driver for 
avatica/phoenix query server.


One of the things I want to do is to be able to trap transactional 
conflicts. Here's one of them:


exceptions:"java.lang.RuntimeException: java.sql.SQLException: ERROR 523 
(42900): Transaction aborted due to conflict with other mutations. 
Conflict detected for transaction 146243993640900.\n\tat..." 
error_code:4294967295 sql_state:"0" 
metadata:


I noticed that the sql_state is usually 0 as per 
https://github.com/apache/calcite/blob/5e1cc5464413904466c357766843cd491b23f646/avatica/core/src/main/java/org/apache/calcite/avatica/remote/Service.java#L2236


However, I am not sure what 4294967295 means as I cannot find it if I 
grep the calcite or phoenix code base. Interestingly, 4294967295 is the 
maximum value of an unsigned 32-bit integer. If I cause other errors such as 
PhoenixParserException, the error code is also set to 4294967295.
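One plausible explanation (an assumption on my part, not confirmed anywhere in the thread): the server serializes Java's -1 "unknown error" sentinel into the unsigned error_code field, and -1 reinterpreted as a 32-bit unsigned integer is exactly 4294967295:

```go
package main

import "fmt"

func main() {
	// error_code decodes as an unsigned 32-bit value on the wire;
	// Java's -1 sentinel reinterpreted as uint32 is 4294967295,
	// and converting it back to int32 recovers -1.
	var wireCode uint32 = 4294967295
	fmt.Println(int32(wireCode)) // -1
}
```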


Is there any reason why the sql_state and error_code from Phoenix 
(42900 and 523) are not being used in the error response?





Re: How do I send binary data to avatica?

2016-04-14 Thread F21

Hey Josh,

Here's what I am doing:

Create the table: CREATE TABLE test ( int INTEGER PRIMARY KEY, bin 
VARBINARY) TRANSACTIONAL=false
Download the binary file we want to insert from here: 
https://raw.githubusercontent.com/golang-samples/gopher-vector/master/gopher.png


Prepare the statement: UPSERT INTO test VALUES (?, ?)

Here's the code I am using to execute the statement:

// read the file into a byte array []byte
file, _ := ioutil.ReadFile(filePath)

// create array of typed values containing the parameters
parameters := []*message.TypedValue{
	{
		Type:        message.Rep_LONG,
		NumberValue: 1,
	},
	{
		Type:        message.Rep_BYTE_STRING,
		BytesValues: file,
	},
}

// create the message (stmt is the statement handle obtained earlier):
msg := &message.ExecuteRequest{
	StatementHandle:    stmt,
	ParameterValues:    parameters,
	MaxRowCount:        100,
	HasParameterValues: true,
}

// encode the message:
wrapped, _ := proto.Marshal(msg)

// wrap it in the wire message:
wire := &message.WireMessage{
	Name:           "org.apache.calcite.avatica.proto.Requests$ExecuteRequest",
	WrappedMessage: wrapped,
}

// encode wire message and send over http:
body, _ := proto.Marshal(wire)
response, _ := ctxhttp.Post(context.Background(), httpClient, 
avaticaServer, "application/x-google-protobuf", bytes.NewReader(body))


Let me know if there's more information I can provide or if something's 
unclear :)


On 15/04/2016 2:46 AM, Josh Elser wrote:

Yeah, that sounds right to me too.

I think we have a test for random bytes. Maybe there's something weird 
happening under the hood in the Avatica JDBC driver that isn't obvious 
to you in the Go driver.


Any chance you can share some example code you're running? I can try 
to convert it to a Java test case, maybe help track down your issue.


F21 wrote:

I also tried casting the data to a string and setting it to StringValue
and the Rep type to STRING.

This works when I store and retrieve strings from the binary column, but
doesn't work correctly if I try to store something like a small image.

On 14/04/2016 5:03 PM, Julian Hyde wrote:

BytesValue sounds right. I’m not sure why it isn’t working for you.


On Apr 14, 2016, at 6:34 AM, F21 <f21.gro...@gmail.com> wrote:

As mentioned on the list, I am currently working on a golang
client/driver for avatica using protobufs for serialization.

I've got all datatypes working, except for BINARY and VARBINARY.

My test table looks like this:
CREATE TABLE test (int INTEGER PRIMARY KEY, bin BINARY(20), varbin
VARBINARY) TRANSACTIONAL=false

In go, we have a datatype called a slice of bytes ([]byte) which is
essentially an array of bytes (8-bits each).

When I generated the golang protobufs using the .proto files, this is
the definition of TypedValue:
type TypedValue struct {
	Type        Rep     `protobuf:"varint,1,opt,name=type,enum=Rep" json:"type,omitempty"`
	BoolValue   bool    `protobuf:"varint,2,opt,name=bool_value,json=boolValue" json:"bool_value,omitempty"`
	StringValue string  `protobuf:"bytes,3,opt,name=string_value,json=stringValue" json:"string_value,omitempty"`
	NumberValue int64   `protobuf:"zigzag64,4,opt,name=number_value,json=numberValue" json:"number_value,omitempty"`
	BytesValues []byte  `protobuf:"bytes,5,opt,name=bytes_values,json=bytesValues,proto3" json:"bytes_values,omitempty"`
	DoubleValue float64 `protobuf:"fixed64,6,opt,name=double_value,json=doubleValue" json:"double_value,omitempty"`
	Null        bool    `protobuf:"varint,7,opt,name=null" json:"null,omitempty"`
}

I am currently creating a TypedValue that looks like this when
sending binary data:
{BYTE_STRING false 0 [116 101 115 116] 0 false}

So, the Rep is set to BYTE_STRING and BytesValues is populated with
the string "test" in bytes (it's shown here as decimal because I
printed it).

The problem is that even though the insert executes properly, if I
look at the row using SquirrelSQL, both the BINARY and VARBINARY
columns are .

Is BYTE_STRING the correct rep type for sending binary data? Do I
also have to encode my bytes in a special format?

Thanks!






Re: How do I send binary data to avatica?

2016-04-16 Thread F21

Hey Josh,

I am just spinning up my docker containers to test using JSON first. You 
have shown the binary data as  but do I need to base64 encode 
them for JSON?


On 17/04/2016 8:17 AM, Josh Elser wrote:
I wrote a simple test case for this with the gopher image. Here's the 
JSON data (but hopefully this is enough to help out). I'd have to 
write more code to dump out the actual protobufs. Let me know if this 
is insufficient to help you figure out what's wrong. My test worked fine.


CREATE TABLE binaryData(id int, data varbinary(262144));

[{"request":"prepare","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","sql":"INSERT 
INTO binaryData values(?,?)","maxRowCount":-1}, 
{"response":"prepare","statement":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":{"columns":[],"sql":"INSERT 
INTO binaryData 
values(?,?)","parameters":[{"signed":true,"precision":10,"scale":0,"parameterType":4,"typeName":"INTEGER","className":"java.lang.Integer","name":"?1"},{"signed":false,"precision":262144,"scale":0,"parameterType":-3,"typeName":"VARBINARY","className":"[B","name":"?2"}],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":"SELECT"}},"rpcMetadata":null}]


[{"request":"execute","statementHandle":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":null},"parameterValues":[{"type":"INTEGER","value":1},{"type":"BYTE_STRING","value":""}],"maxRowCount":100}, 
{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1,"ownStatement":true,"signature":null,"firstFrame":null,"updateCount":1,"rpcMetadata":null}]}] 

[{"request":"closeStatement","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1}, 
{"response":"closeStatement","rpcMetadata":null}]


[{"request":"prepareAndExecute","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"sql":"SELECT 
*

  FROM binaryData","maxRowCount":-1},
{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","oonnectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"ownStatement":true,"signature":{"columns":[{"ordinal":0,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":1,"signed":true,"displaySize":11,"label":"ID","columnName":"ID","schemaName":"SCOTT","precision":32,"scale":0,"tableName":"BINARYDATA","catalogName":"PUBLIC","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":false,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"},{"ordinal":1,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":1,"signed":false,"displaySize":262144,"label":"DATA","columnName":"DATA","schemaName":"SCOTT","precision":262144,"scale":0,"tableName":"BINARYDATA","catalogName":"PUBLIC","type":{"type":"scalar","id":-3,"name":"VARBINARY","rep":"BYTE 

_STRING"},"readOnly":false,"writable":false,"definitelyWritable":false,"columnClassName":"[B"}],"sql":null,"parameters":[],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames&q

Re: How do I send binary data to avatica?

2016-04-16 Thread F21

Just reporting back my experiments:

1. Using JSON: I set the serialization of the query server to JSON and 
replayed your requests using CURL. For the binary file, I first base64 
encode it into a string and then sent it. This worked properly and I can 
see data inserted into the table using SquirrelSQL.


2. Using Protobufs: I inserted the binary data using Rep_BYTE_STRING and 
set BytesValues to the byte array read in from the file. It inserts 
(upserts) correctly, but if I query the table using SquirrelSQL, the 
binary column's cell is shown as .


3. Using Protobufs with Base64 encoding: I first encode the binary data 
as base64. I then upsert the parameter as Rep_STRING and set StringValue 
to the base64 encoded string. This upserts correctly and I can see the 
data in SquirrelSQL. I then SELECT the data and base64 decode it and 
write it to a file and generate a hash. The file is written correctly 
and the hash also matches.


So, approach 3 works for me, but it doesn't seem to be the correct way 
to do it.


On 17/04/2016 8:17 AM, Josh Elser wrote:
I wrote a simple test case for this with the gopher image. Here's the 
JSON data (but hopefully this is enough to help out). I'd have to 
write more code to dump out the actual protobufs. Let me know if this 
is insufficient to help you figure out what's wrong. My test worked fine.


CREATE TABLE binaryData(id int, data varbinary(262144));

[{"request":"prepare","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","sql":"INSERT 
INTO binaryData values(?,?)","maxRowCount":-1}, 
{"response":"prepare","statement":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":{"columns":[],"sql":"INSERT 
INTO binaryData 
values(?,?)","parameters":[{"signed":true,"precision":10,"scale":0,"parameterType":4,"typeName":"INTEGER","className":"java.lang.Integer","name":"?1"},{"signed":false,"precision":262144,"scale":0,"parameterType":-3,"typeName":"VARBINARY","className":"[B","name":"?2"}],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":"SELECT"}},"rpcMetadata":null}]


[{"request":"execute","statementHandle":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":null},"parameterValues":[{"type":"INTEGER","value":1},{"type":"BYTE_STRING","value":""}],"maxRowCount":100}, 
{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1,"ownStatement":true,"signature":null,"firstFrame":null,"updateCount":1,"rpcMetadata":null}]}] 

[{"request":"closeStatement","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1}, 
{"response":"closeStatement","rpcMetadata":null}]


[{"request":"prepareAndExecute","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"sql":"SELECT 
*

  FROM binaryData","maxRowCount":-1},
{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","oonnectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"ownStatement":true,"signature":{"columns":[{"ordinal":0,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":1,"signed":true,"displaySize":11,"label":"ID","columnName":"ID","schemaName":"SCOTT","precision":32,"scale":0,"tableName":"BINARYDATA","catalogName":"PUBLIC","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":false,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"},{"ordinal":1,"autoIncrement":

Re: How do I send binary data to avatica?

2016-04-17 Thread F21
I just tried using sqlline-thin and it's also showing that the binary 
values are not inserted.


Interestingly, from my experiments, if I send a STRING to update a 
binary column, it works. If I send a BYTE_STRING, it doesn't work.


Would it help if I create a repo on GitHub containing the encoded 
protobufs for each request, so you can check whether the protobufs are 
constructed correctly?


On 18/04/2016 3:31 AM, Josh Elser wrote:
Sorry, yes. I just didn't want to have to post the JSON elsewhere -- 
it would've been gross to include in an email. For JSON, it is base64 
encoded. This is not necessary for protobuf (which can natively handle 
raw bytes).


For #2, did you verify that the data made it into the database 
correctly? HBase/Phoenix still, right? Also, maybe this is a 
SquirrelSQL issue? Can you verify the record is present via Phoenix's 
sqlline-thin.py?


For #3, very interesting. I'm not sure why base64 encoding it by hand 
makes any difference for you. Avatica isn't going to be making any 
differentiation in the types of bytes that you send. Bytes are bytes 
are bytes as far as we care.


F21 wrote:

Just reporting back my experiments:

1. Using JSON: I set the serialization of the query server to JSON and
replayed your requests using CURL. For the binary file, I first base64
encode it into a string and then sent it. This worked properly and I can
see data inserted into the table using SquirrelSQL.

2. Using Protobufs: I inserted the binary data using Rep_BYTE_STRING and
set BytesValues to the byte array read in from the file. It inserts
(upserts) correctly, but if I query the table using SquirrelSQL, the
binary column's cell is shown as .

3. Using Protobufs with Base64 encoding: I first encode the binary data
as base64. I then upsert the parameter as Rep_STRING and set StringValue
to the base64 encoded string. This upserts correctly and I can see the
data in SquirrelSQL. I then SELECT the data and base64 decode it and
write it to a file and generate a hash. The file is written correctly
and the hash also matches.

So, approach 3 works for me, but it doesn't seem to be the correct way
to do it.

On 17/04/2016 8:17 AM, Josh Elser wrote:

I wrote a simple test case for this with the gopher image. Here's the
JSON data (but hopefully this is enough to help out). I'd have to
write more code to dump out the actual protobufs. Let me know if this
is insufficient to help you figure out what's wrong. My test worked 
fine.


CREATE TABLE binaryData(id int, data varbinary(262144));

[{"request":"prepare","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","sql":"INSERT 


INTO binaryData values(?,?)","maxRowCount":-1},
{"response":"prepare","statement":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":{"columns":[],"sql":"INSERT 


INTO binaryData
values(?,?)","parameters":[{"signed":true,"precision":10,"scale":0,"parameterType":4,"typeName":"INTEGER","className":"java.lang.Integer","name":"?1"},{"signed":false,"precision":262144,"scale":0,"parameterType":-3,"typeName":"VARBINARY","className":"[B","name":"?2"}],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":"SELECT"}},"rpcMetadata":null}] 




[{"request":"execute","statementHandle":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":null},"parameterValues":[{"type":"INTEGER","value":1},{"type":"BYTE_STRING","value":""}],"maxRowCount":100}, 

{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1,"ownStatement":true,"signature":null,"firstFrame":null,"updateCount":1,"rpcMetadata":null}]}] 



[{"request":"closeStatement","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1}, 


{"response":"closeStatement","rpcMetadata":null}]

[{"request":"prepareAndExecute","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"sql":"SELECT 


*
FROM binaryData","maxRowCount":-1},
{"response":"execute

Re: How do I send binary data to avatica?

2016-04-17 Thread F21
I have now created a repo on github containing all the binary protobuf 
requests and responses to insert binary data into a VARBINARY column. 
The repo is available here: https://github.com/F21/avatica-binary-protobufs


The file 8-request is of most interest because that's the one where we 
are sending the ExecuteRequest to do the actual insertion of the image.


I am using the protoc tool to decode the messages, e.g.:  protoc 
--decode_raw < 8-request


Let me know if this helps :)

On 18/04/2016 3:31 AM, Josh Elser wrote:
Sorry, yes. I just didn't want to have to post the JSON elsewhere -- 
it would've been gross to include in an email. For JSON, it is base64 
encoded. This is not necessary for protobuf (which can natively handle 
raw bytes).


For #2, did you verify that the data made it into the database 
correctly? HBase/Phoenix still, right? Also, maybe this is a 
SquirrelSQL issue? Can you verify the record is present via Phoenix's 
sqlline-thin.py?


For #3, very interesting. I'm not sure why base64 encoding it by hand 
makes any difference for you. Avatica isn't going to be making any 
differentiation in the types of bytes that you send. Bytes are bytes 
are bytes as far as we care.


F21 wrote:

Just reporting back my experiments:

1. Using JSON: I set the serialization of the query server to JSON and
replayed your requests using CURL. For the binary file, I first base64
encode it into a string and then sent it. This worked properly and I can
see data inserted into the table using SquirrelSQL.

2. Using Protobufs: I inserted the binary data using Rep_BYTE_STRING and
set BytesValues to the byte array read in from the file. It inserts
(upserts) correctly, but if I query the table using SquirrelSQL, the
binary column's cell is shown as .

3. Using Protobufs with Base64 encoding: I first encode the binary data
as base64. I then upsert the parameter as Rep_STRING and set StringValue
to the base64 encoded string. This upserts correctly and I can see the
data in SquirrelSQL. I then SELECT the data and base64 decode it and
write it to a file and generate a hash. The file is written correctly
and the hash also matches.

So, approach 3 works for me, but it doesn't seem to be the correct way
to do it.

On 17/04/2016 8:17 AM, Josh Elser wrote:

I wrote a simple test case for this with the gopher image. Here's the
JSON data (but hopefully this is enough to help out). I'd have to
write more code to dump out the actual protobufs. Let me know if this
is insufficient to help you figure out what's wrong. My test worked 
fine.


CREATE TABLE binaryData(id int, data varbinary(262144));

[{"request":"prepare","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","sql":"INSERT 


INTO binaryData values(?,?)","maxRowCount":-1},
{"response":"prepare","statement":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":{"columns":[],"sql":"INSERT 


INTO binaryData
values(?,?)","parameters":[{"signed":true,"precision":10,"scale":0,"parameterType":4,"typeName":"INTEGER","className":"java.lang.Integer","name":"?1"},{"signed":false,"precision":262144,"scale":0,"parameterType":-3,"typeName":"VARBINARY","className":"[B","name":"?2"}],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":"SELECT"}},"rpcMetadata":null}] 




[{"request":"execute","statementHandle":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":null},"parameterValues":[{"type":"INTEGER","value":1},{"type":"BYTE_STRING","value":""}],"maxRowCount":100}, 

{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1,"ownStatement":true,"signature":null,"firstFrame":null,"updateCount":1,"rpcMetadata":null}]}] 



[{"request":"closeStatement","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1}, 


{"response":"closeStatement","rpcMetadata":null}]

[{"request":"prepareAndExecute","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"sql":"SELECT

Re: How do I send binary data to avatica?

2016-04-17 Thread F21

Forgot to add: Yes, still using Hbase 1.1.4 with phoenix 4.7.0.

On 18/04/2016 3:31 AM, Josh Elser wrote:
Sorry, yes. I just didn't want to have to post the JSON elsewhere -- 
it would've been gross to include in an email. For JSON, it is base64 
encoded. This is not necessary for protobuf (which can natively handle 
raw bytes).


For #2, did you verify that the data made it into the database 
correctly? HBase/Phoenix still, right? Also, maybe this is a 
SquirrelSQL issue? Can you verify the record is present via Phoenix's 
sqlline-thin.py?


For #3, very interesting. I'm not sure why base64 encoding it by hand 
makes any difference for you. Avatica isn't going to be making any 
differentiation in the types of bytes that you send. Bytes are bytes 
are bytes as far as we care.


F21 wrote:

Just reporting back my experiments:

1. Using JSON: I set the serialization of the query server to JSON and
replayed your requests using CURL. For the binary file, I first base64
encode it into a string and then sent it. This worked properly and I can
see data inserted into the table using SquirrelSQL.

2. Using Protobufs: I inserted the binary data using Rep_BYTE_STRING and
set BytesValues to the byte array read in from the file. It inserts
(upserts) correctly, but if I query the table using SquirrelSQL, the
binary column's cell is shown as .

3. Using Protobufs with Base64 encoding: I first encode the binary data
as base64. I then upsert the parameter as Rep_STRING and set StringValue
to the base64 encoded string. This upserts correctly and I can see the
data in SquirrelSQL. I then SELECT the data and base64 decode it and
write it to a file and generate a hash. The file is written correctly
and the hash also matches.

So, approach 3 works for me, but it doesn't seem to be the correct way
to do it.

On 17/04/2016 8:17 AM, Josh Elser wrote:

I wrote a simple test case for this with the gopher image. Here's the
JSON data (but hopefully this is enough to help out). I'd have to
write more code to dump out the actual protobufs. Let me know if this
is insufficient to help you figure out what's wrong. My test worked 
fine.


CREATE TABLE binaryData(id int, data varbinary(262144));

[{"request":"prepare","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","sql":"INSERT 


INTO binaryData values(?,?)","maxRowCount":-1},
{"response":"prepare","statement":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":{"columns":[],"sql":"INSERT 


INTO binaryData
values(?,?)","parameters":[{"signed":true,"precision":10,"scale":0,"parameterType":4,"typeName":"INTEGER","className":"java.lang.Integer","name":"?1"},{"signed":false,"precision":262144,"scale":0,"parameterType":-3,"typeName":"VARBINARY","className":"[B","name":"?2"}],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":"SELECT"}},"rpcMetadata":null}] 




[{"request":"execute","statementHandle":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":null},"parameterValues":[{"type":"INTEGER","value":1},{"type":"BYTE_STRING","value":""}],"maxRowCount":100}, 

{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1,"ownStatement":true,"signature":null,"firstFrame":null,"updateCount":1,"rpcMetadata":null}]}] 



[{"request":"closeStatement","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1}, 


{"response":"closeStatement","rpcMetadata":null}]

[{"request":"prepareAndExecute","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"sql":"SELECT 


*
FROM binaryData","maxRowCount":-1},
{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","oonnectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"ownStatement":true,"signature":{"columns":[{"ordinal&qu

Re: How do I send binary data to avatica?

2016-04-20 Thread F21

Hey Josh,

I was able to discover the source of the problem. I found your Github 
repo showing how to use the phoenix thin client here: 
https://github.com/joshelser/phoenix-queryserver-jdbc-client


I hacked up CreateTables.java to create a table with a VARBINARY column 
and to read a file into a byte array and upsert it into the column.


I was able to use wireshark to capture the traffic to the phoenix query 
server, export the http body and decode it using:  protoc --decode_raw < 
java-request > java-decoded.txt


At the same time, I made my golang driver dump the request into a file 
and used the protoc tool to decode it.


I then used a diff tool to compare both requests.

I noticed the following:
- The binary data must be base64 encoded.
- Even though the rep type is BYTE_STRING, the base64 encoded string 
must be set in string_value and not bytes_values.


However, when I query the binary column, it comes back correctly as raw 
bytes in the bytes_values.
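Under that finding, building the parameter value for a binary column comes down to one encoding step. A sketch (the helper name is mine; the base64 string would then be placed in string_value with the Rep left as BYTE_STRING, per the observations above):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// binaryParameterValue reflects the workaround found above: the raw
// bytes are base64 encoded and carried in string_value (with the Rep
// still set to BYTE_STRING), rather than being placed in bytes_values.
func binaryParameterValue(data []byte) string {
	return base64.StdEncoding.EncodeToString(data)
}

func main() {
	// "test" as raw bytes, as in the earlier example ([116 101 115 116]).
	fmt.Println(binaryParameterValue([]byte("test"))) // dGVzdA==
}
```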


Let me know if you want me to open an issue regarding this on JIRA :)

Here's a gist containing the java source code and decoded protocol 
buffers for the java and golang requests: 
https://gist.github.com/F21/381a62b11bfa2b4fe212dfa328bf7053



On 19/04/2016 9:12 AM, Josh Elser wrote:

F21 wrote:

Hey Josh and Julian,

Thanks for your input regarding this. I also spent hours and hours over
the last few days trying to get to the bottom of this, and it honestly
wasn't the best use of my time either. I am sure being in different
timezones also makes it much harder to get this problem resolved quickly.

Because protobufs is a binary protocol, it can be somewhat difficult to
debug. I tried installing a no-longer-maintained wireshark dissector to
but I wasn't able to get it to compile. I also tried a few Fiddler2
plugins and the Charles debugger, but they also ended up being dead 
ends.


There's low-hanging fruit here: having PQS dump out "stringified" 
messages in the console (similar to how I was generating the JSON for 
you). This is something we need to standardize in Avatica and provide 
as a "feature". It should be trivial to enable RPC logging.


This is also step one and would, likely, be enough for you to diff the 
output between sqlline-thin and your driver to figure out how they vary.



While writing a script to demonstrate the problem might be helpful for a
lot of cases, I don't think it would be helpful here. For example, we
could mandate that a reproduction be written in Java, but the problem
could actually be how the golang protobuf library
(https://github.com/golang/protobuf) is encoding the messages and the
test script would not be able to catch it. At the same time, having
people submitting scripts written in all sorts of different languages
isn't going to help either.

Go has a concept of using "golden files", which I think would be
extremely useful here. This is basically running some test input and
capturing the output as a file (if the test and other checks pass). In
future runs, the test runs again and its output is compared against the
golden files.
Also, if the tests are updated, the golden files can be automatically
updated by just running the tests. This video gives a pretty good
overview of how they work:


Yep. Exactly what I was thinking about (was actually reminded of Hive 
-- they do this to verify their query parser correctness. I think I've 
stumbled along the same idea in some of Julian's other projects).



In the case of driver implementers, I think it would be extremely useful
to have a bunch of "golden files" for us to test against. In our case,
this might just be dumps of encoded protobuf messages for a given input.
These golden files would be automatically generated from the avatica
remote driver test suite. After a successful build, it can save the
protobufs encoded by the remote driver somewhere. Driver implementers
can then just take the input used, generate the output and do a byte by
byte comparison against the golden files. If the driver does not encode
the message correctly, it should be pretty evident.

I would love to hear what ideas you guys have regarding presenting the
input in a language agnostic way. In a lot of cases, it's not simply
having a SQL statement. For example, parameters provided to a statement
might be an external file. Also, meta data like the connection id and
the statement id also needs to be dealt with too.


Again, totally right. SQL only is part of the problem. More thought is 
definitely required from my end :). I also would like to avoid getting 
in the game of maintaining our own full-blown test-harness project if 
at all avoidable.



Maybe something like toml-test (https://github.com/BurntSushi/toml-test)
would be useful, but it lacks a java encoder/decoder.

Anyway, those are just some of my thoughts and I would love to hear what
you guys think would be the best way to implement the test framework.
Hopefully, we can get the issue of in

Re: How do I send binary data to avatica?

2016-04-14 Thread F21
I also tried casting the data to a string and setting it to StringValue 
and the Rep type to STRING.


This works when I store and retrieve strings from the binary column, but 
doesn't work correctly if I try to store something like a small image.


On 14/04/2016 5:03 PM, Julian Hyde wrote:

BytesValue sounds right. I’m not sure why it isn’t working for you.


On Apr 14, 2016, at 6:34 AM, F21 <f21.gro...@gmail.com> wrote:

As mentioned on the list, I am currently working on  a golang client/driver for 
avatica using protobufs for serialization.

I've got all datatypes working, except for BINARY and VARBINARY.

My test table looks like this:
CREATE TABLE test (int INTEGER PRIMARY KEY, bin BINARY(20), varbin VARBINARY) 
TRANSACTIONAL=false

In go, we have a datatype called a slice of bytes ([]byte) which is essentially 
an array of bytes (8-bits each).

When I generated the golang protobufs using the .proto files, this is the 
definition of TypedValue:
type TypedValue struct {
	Type        Rep     `protobuf:"varint,1,opt,name=type,enum=Rep" json:"type,omitempty"`
	BoolValue   bool    `protobuf:"varint,2,opt,name=bool_value,json=boolValue" json:"bool_value,omitempty"`
	StringValue string  `protobuf:"bytes,3,opt,name=string_value,json=stringValue" json:"string_value,omitempty"`
	NumberValue int64   `protobuf:"zigzag64,4,opt,name=number_value,json=numberValue" json:"number_value,omitempty"`
	BytesValues []byte  `protobuf:"bytes,5,opt,name=bytes_values,json=bytesValues,proto3" json:"bytes_values,omitempty"`
	DoubleValue float64 `protobuf:"fixed64,6,opt,name=double_value,json=doubleValue" json:"double_value,omitempty"`
	Null        bool    `protobuf:"varint,7,opt,name=null" json:"null,omitempty"`
}

I am currently creating a TypedValue that looks like this when sending binary 
data:
{BYTE_STRING false  0 [116 101 115 116] 0 false}

So, the Rep is set to BYTE_STRING and BytesValues is populated with the string 
"test" in bytes (it's shown here as decimal because I printed it).

The problem is that even though the insert executes properly, if I look at the row 
using SquirrelSQL, both the BINARY and VARBINARY columns are .

Is BYTE_STRING the correct rep type for sending binary data? Do I also have to 
encode my bytes in a special format?

Thanks!




Re: How do I send binary data to avatica?

2016-04-18 Thread F21
This is the test case I just wrote and it passes successfully (my 
knowledge of Java and its syntax is extremely limited, so hopefully it's 
testing the right things):


@Test public void testInsertingFile() throws Exception {
  ConnectionSpec.getDatabaseLock().lock();
  try (Connection conn = getLocalConnection();
       Statement stmt = conn.createStatement()) {
    assertFalse(stmt.execute("DROP TABLE IF EXISTS test"));
    final String sql = "CREATE TABLE test(bin VARBINARY(15))";
    assertFalse(stmt.execute(sql));

    Path path = Paths.get("/home/user/Desktop/gopher.png");
    byte[] data = Files.readAllBytes(path);

    PreparedStatement pstmt = conn.prepareStatement("INSERT INTO test VALUES(?)");
    pstmt.setBytes(1, data);
    pstmt.execute();

    PreparedStatement pstmt2 = conn.prepareStatement("SELECT * FROM test");
    final ResultSet resultSet = pstmt2.executeQuery();
    assertTrue(resultSet.next());
    assertThat(resultSet.getBytes(1), equalTo(data));
  } finally {
    ConnectionSpec.getDatabaseLock().unlock();
  }
}

I added the snippet to RemoteDriverTest.java and ran the test using mvn 
-DfailIfNoTests=false -Dtest=RemoteDriverTest -Dcheckstyle.skip=true -e test


- Do the tests in RemoteDriverTest.java spin up a real avatica server to 
test against?

- If so, do they use protobufs?

I am not familiar with Phoenix and Avatica's internals, but could this 
be a problem with Phoenix? It does work correctly when the binary data 
is base64-encoded and JSON serialization is used rather than protobufs.


On 19/04/2016 9:12 AM, Josh Elser wrote:

F21 wrote:

Hey Josh and Julian,

Thanks for your input regarding this. I also spent hours and hours over
the last few days trying to get to the bottom of this, and it honestly
wasn't the best use of my time either. I am sure being in different
timezones also makes it much harder to get this problem resolved quickly.

Because protobufs is a binary protocol, it can be somewhat difficult to
debug. I tried installing a no-longer-maintained Wireshark dissector,
but I wasn't able to get it to compile. I also tried a few Fiddler2
plugins and the Charles debugger, but they also ended up being dead 
ends.


There's some low-hanging fruit here: having PQS dump out "stringified" 
messages in the console (similar to how I was generating the JSON for 
you). This is something that we need to standardize in Avatica and 
provide as a "feature". It should be trivial to enable RPC logging.


This is also step one and would, likely, be enough for you to diff the 
output between sqlline-thin and your driver to figure out how they vary.



While writing a script to demonstrate the problem might be helpful for a
lot of cases, I don't think it would be helpful here. For example, we
could mandate that a reproduction be written in Java, but the problem
could actually be how the golang protobuf library
(https://github.com/golang/protobuf) is encoding the messages and the
test script would not be able to catch it. At the same time, having
people submit scripts written in all sorts of different languages
isn't going to help either.

Go has a concept of using "golden files", which I think would be
extremely useful here. This is basically running some test input and
capturing the output as a file (if the test and other checks pass). In
future runs, the test runs and its output is compared against the golden
files. Also, if the tests are updated, the golden files can be
automatically updated by just running the tests. This video gives a
pretty good overview of how they work:


Yep. Exactly what I was thinking about (was actually reminded of Hive 
-- they do this to verify their query parser correctness. I think I've 
stumbled along the same idea in some of Julian's other projects).



In the case of driver implementers, I think it would be extremely useful
to have a bunch of "golden files" for us to test against. In our case,
this might just be dumps of encoded protobuf messages for a given input.
These golden files would be automatically generated from the avatica
remote driver test suite. After a successful build, it can save the
protobufs encoded by the remote driver somewhere. Driver implementers
can then just take the input used, generate the output and do a byte by
byte comparison against the golden files. If the driver does not encode
the message correctly, it should be pretty evident.

I would love to hear what ideas you guys have regarding presenting the
input in a language agnostic way. In a lot of cases, it's not simply
having a SQL statement. For example, parameters provided to a statement
might be an external file. Also, meta data like the connection id and
the statement id also needs to be dealt with too.


Again, totally right. SQL only is part of the problem. More thought is 
definitely required from my end :). I also would

Re: What is the default value of max_rows_total in the avatica server?

2016-08-21 Thread F21

Just opened CALCITE-1352 to track this :)

On 22/08/2016 1:34 PM, Josh Elser wrote:
Anything <=0 should be treated as "all results" (at least given what I 
recall from the code comments). Would be good to verify regardless. 
It's entirely possible that when I introduced the new attribute, I 
(unnecessarily) changed the previous semantics :)


F21 wrote:

I tried setting it to -1 and it appears to return unlimited rows. Maybe
this is not a bug, but just needs to be better documented?

On 22/08/2016 11:59 AM, Josh Elser wrote:

Hrm, that sounds like a bug. The default value of '0' and an explicit
value of '0' should have the exact same semantics (an unlimited number
of values for the statement).

Mind filing a bug?

F21 wrote:

Hi Josh,

Thanks! That clears things up. I noticed that if I explicitly set
max_rows_total to 0, it returns 0 rows, which is different from omitting
max_rows_total.

Is this the expected behavior? Is it possible to set max_rows_total and
have it behave like the default (return all rows without any limit)?

Cheers,
Francis

On 20/08/2016 12:35 AM, Josh Elser wrote:

Hi Francis,

I tried to do explicit testing here to make sure that the 1.7 to 1.8
upgrade should be painless for you. In places where the protocol has
changed, the old attributes and functionality remains (we explicitly
check for the old value and the absence of the new value).

max_rows_total defaults to 0 which is directly set on the JDBC
Statement object. The value of 0 is special in that it means there is
no limit on the number of results returned by that Statement (this
query).

F21 wrote:

Hey guys,

I am in the process of upgrading the go (golang) avatica driver to
support Calcite/Avatica 1.8.0.

I noticed that the protobuf definitions deprecated max_row_count in
favor of max_rows_total.

I also noticed that if I do not include max_rows_total in a 
Prepare or

PrepareAndExecute request, the server will still accept the request.

In the event that max_rows_total is not provided, what is the 
default

value/behavior assigned by the server?

Cheers,

Francis









Re: What is the default value of max_rows_total in the avatica server?

2016-08-21 Thread F21
I tried setting it to -1 and it appears to return unlimited rows. Maybe 
this is not a bug, but just needs to be better documented?


On 22/08/2016 11:59 AM, Josh Elser wrote:
Hrm, that sounds like a bug. The default value of '0' and an explicit 
value of '0' should have the exact same semantics (an unlimited number 
of values for the statement).


Mind filing a bug?

F21 wrote:

Hi Josh,

Thanks! That clears things up. I noticed that if I explicitly set
max_rows_total to 0, it returns 0 rows, which is different from omitting
max_rows_total.

Is this the expected behavior? Is it possible to set max_rows_total and
have it behave like the default (return all rows without any limit)?

Cheers,
Francis

On 20/08/2016 12:35 AM, Josh Elser wrote:

Hi Francis,

I tried to do explicit testing here to make sure that the 1.7 to 1.8
upgrade should be painless for you. In places where the protocol has
changed, the old attributes and functionality remains (we explicitly
check for the old value and the absence of the new value).

max_rows_total defaults to 0 which is directly set on the JDBC
Statement object. The value of 0 is special in that it means there is
no limit on the number of results returned by that Statement (this
query).

F21 wrote:

Hey guys,

I am in the process of upgrading the go (golang) avatica driver to
support Calcite/Avatica 1.8.0.

I noticed that the protobuf definitions deprecated max_row_count in
favor of max_rows_total.

I also noticed that if I do not include max_rows_total in a Prepare or
PrepareAndExecute request, the server will still accept the request.

In the event that max_rows_total is not provided, what is the default
value/behavior assigned by the server?

Cheers,

Francis







What is the default value of max_rows_total in the avatica server?

2016-08-19 Thread F21

Hey guys,

I am in the process of upgrading the go (golang) avatica driver to 
support Calcite/Avatica 1.8.0.


I noticed that the protobuf definitions deprecated max_row_count in 
favor of max_rows_total.


I also noticed that if I do not include max_rows_total in a Prepare or 
PrepareAndExecute request, the server will still accept the request.


In the event that max_rows_total is not provided, what is the default 
value/behavior assigned by the server?


Cheers,

Francis



Re: What is the default value of max_rows_total in the avatica server?

2016-08-19 Thread F21

Hi Josh,

Thanks! That clears things up. I noticed that if I explicitly set 
max_rows_total to 0, it returns 0 rows, which is different from omitting 
max_rows_total.


Is this the expected behavior? Is it possible to set max_rows_total and 
have it behave like the default (return all rows without any limit)?


Cheers,
Francis

On 20/08/2016 12:35 AM, Josh Elser wrote:

Hi Francis,

I tried to do explicit testing here to make sure that the 1.7 to 1.8 
upgrade should be painless for you. In places where the protocol has 
changed, the old attributes and functionality remains (we explicitly 
check for the old value and the absence of the new value).


max_rows_total defaults to 0 which is directly set on the JDBC 
Statement object. The value of 0 is special in that it means there is 
no limit on the number of results returned by that Statement (this 
query).


F21 wrote:

Hey guys,

I am in the process of upgrading the go (golang) avatica driver to
support Calcite/Avatica 1.8.0.

I noticed that the protobuf definitions deprecated max_row_count in
favor of max_rows_total.

I also noticed that if I do not include max_rows_total in a Prepare or
PrepareAndExecute request, the server will still accept the request.

In the event that max_rows_total is not provided, what is the default
value/behavior assigned by the server?

Cheers,

Francis





Re: [ACCUMULO] New committer: Francis Chuang

2016-09-05 Thread F21

Hey everyone,

Thanks for inviting me to become a committer. I am currently running my 
own startup Boostport. We build tools to allow people to run, grow and 
automate their businesses.


I am currently using Phoenix (and by extension Calcite + Avatica) to 
implement data storage for some of our newer services in the system.


Calcite + Avatica is super awesome, so I hope that we can bring more 
people from other languages on-board. I look forward to adding more 
improvements to the Go driver, writing more documentation and helping 
out on the mailing list!


Cheers,
Francis

On 5/09/2016 10:15 AM, Josh Elser wrote:

All,

I'm happy to welcome Francis as an Apache Calcite committer. This is 
in recognition of Francis' continued contributions to the project, 
most notably in support of the development and stability of Avatica. 
Francis has been a great source of help for Avatica, reporting 
numerous issues discovered through the development of a Go-lang client 
for Avatica.


Francis, feel free to introduce yourself to the community and include 
some words about yourself if you'd like. We all look forward to your 
continued (and hopefully increased) involvement in the project!


- Josh (On behalf of the Apache Calcite PMC)





Re: Using Calcite in .Net environment

2016-09-10 Thread F21
If you want to talk to Calcite from .NET, the best way would be to use 
the Avatica server. It is an HTTP server that allows you to talk to the 
calcite backend using protobufs or JSON.


Here's a .NET client that talks to the Avatica server, but is currently 
targeted towards Apache Phoenix (which uses Avatica under the hood for 
its query server): https://github.com/Azure/hdinsight-phoenix-sharp


Cheers,
Francis

On 9/09/2016 6:26 AM, Rawat, Rishi wrote:

Hi Calcite team

We are trying to create a data virtualization engine in our technology stack, 
which is a .Net/C# layer.
What is the best way to use Calcite from .NET?

Also, we are SQL Server-heavy in our operations (with data operations expanding 
to other data source types). Is there an example that we can try for SQL 
server?


Thanks in advance.
Rishi








Re: is Travis build for master broken

2016-10-05 Thread F21
Ultimately, I think using the ASF Jenkins instance would be the best 
way. I believe it currently builds pushes to the ASF git repo.


Kafka has set up their PRs to build using their Jenkins instance, 
for example: https://github.com/apache/kafka/pull/1969


If this plugin is installed, we can get a build status image as well: 
https://wiki.jenkins-ci.org/display/JENKINS/Embeddable+Build+Status+Plugin


How difficult would it be to set up Jenkins to build PRs? I noticed that 
CALCITE-1084 is currently still open. If it will be a while before this 
can happen, maybe wercker can be a good stop-gap solution in the meantime.


Francis

On 5/10/2016 10:58 AM, Josh Elser wrote:
We could/should also look at using the ASF-provided Jenkins instance. 
I think I probably still have a JIRA issue open somewhere to wire up 
Apache Yetus for Calcite/Avatica that I never finished (oops).


Julian Hyde wrote:
Good idea. I’ve logged 
https://issues.apache.org/jira/browse/CALCITE-1412.



On Oct 4, 2016, at 4:49 PM, F21<f21.gro...@gmail.com>  wrote:

Also, you can check out the builds/runs here: 
https://app.wercker.com/F21/calcite/runs


The failures earlier were due to me adjusting the wercker config.

On 5/10/2016 3:08 AM, Julian Hyde wrote:

Francis,

Try removing the depth argument when you do 'git clone'.

Julian


On Oct 3, 2016, at 21:30, F21<f21.gro...@gmail.com>  wrote:

I tried to force travis to use a standard trusty vm in 
https://github.com/apache/calcite/pull/293, so that the VM is 
assigned 7.5GB of memory. Unfortunately, travis is still quite 
flaky, with tests only passing some of the time.


Would it be useful to move to another CI? I tried setting up my 
fork to use wercker: 
https://github.com/F21/calcite/blob/use-wercker/wercker.yml which 
failed here: 
https://app.wercker.com/F21/calcite/runs/build-jdk-7/57f3172e3b89220100273603?step=57f317381473219140d0


Unfortunately, I haven't found a solution to this error:

[ERROR] Failed to execute goal 
pl.project13.maven:git-commit-id-plugin:2.1.9:revision (default) 
on project calcite: Execution default of goal 
pl.project13.maven:git-commit-id-plugin:2.1.9:revision failed: 
Unable to calculate distance between [commit 
be18b25415a305aab2d0de2bd074755630e08462 1475548960 sp] and 
[commit 08c56b158ffcfcf205a919cc9fff77a692e649f6 1474099831 
sp]: Missing commit 63c51d0c6459a4de5cab01188a7f3b7dd1a259fb 
->  [Help 1]


Francis


On 4/10/2016 10:55 AM, Julian Hyde wrote:
I’ve noticed it. A lot of Travis builds seem to hang. See the 
last two lines of 
https://travis-ci.org/apache/calcite/jobs/163844678:


  No output has been received in the last 10 minutes, this 
potentially

indicates a stalled build or something wrong with the build itself.

The build has been terminated
I don’t know why Travis does this. I suspect that the Travis VM 
doesn’t have enough memory to complete the task, or the job takes 
longer than Travis allows, but these are wild guesses.


I have been running the build and tests on my own Linux server in 
a variety of configurations (in particular JDK 1.7 and 1.8, and a 
couple of timezones, and occasionally also on macOS and Windows), 
and I can confirm that everything is stable.


Julian


On Oct 3, 2016, at 3:20 PM, Laurent Goujon<laur...@dremio.com>  
wrote:


Hi,

I just opened a pull request against Calcite master, and the 
travis build
failed. But then I checked the master branch, and according to 
Travis, it's

also failing.

Is it something known, or just a transient issue?

Thanks in advance,

Laurent









Re: is Travis build for master broken

2016-10-04 Thread F21

Hey Julian,

Thanks for the tip! Wercker clones with a depth of 50 by default. I have 
now added a few more commands to convert the shallow clone to a full clone.


I am now able to run the tests and get them passing. Wercker is also 
quite stable, I ran the tests a few times and they all pass each time.


Compared to Travis, Wercker is also much faster: tests take about 9 
minutes to run, compared to the 25-ish minutes with Travis.


Would you be interested in moving the CI over to wercker? You can check 
out my branch here: https://github.com/F21/calcite/tree/use-wercker 
(.travis.yml is replaced with wercker.yml).


Cheers,
Francis

On 5/10/2016 3:08 AM, Julian Hyde wrote:

Francis,

Try removing the depth argument when you do 'git clone'.

Julian


On Oct 3, 2016, at 21:30, F21 <f21.gro...@gmail.com> wrote:

I tried to force travis to use a standard trusty vm in 
https://github.com/apache/calcite/pull/293, so that the VM is assigned 7.5GB of 
memory. Unfortunately, travis is still quite flaky, with tests only passing 
some of the time.

Would it be useful to move to another CI? I tried setting up my fork to use 
wercker: https://github.com/F21/calcite/blob/use-wercker/wercker.yml which 
failed here: 
https://app.wercker.com/F21/calcite/runs/build-jdk-7/57f3172e3b89220100273603?step=57f317381473219140d0

Unfortunately, I haven't found a solution to this error:

[ERROR] Failed to execute goal 
pl.project13.maven:git-commit-id-plugin:2.1.9:revision (default) on project 
calcite: Execution default of goal 
pl.project13.maven:git-commit-id-plugin:2.1.9:revision failed: Unable to calculate 
distance between [commit be18b25415a305aab2d0de2bd074755630e08462 1475548960 
sp] and [commit 08c56b158ffcfcf205a919cc9fff77a692e649f6 1474099831 sp]: 
Missing commit 63c51d0c6459a4de5cab01188a7f3b7dd1a259fb -> [Help 1]

Francis


On 4/10/2016 10:55 AM, Julian Hyde wrote:
I’ve noticed it. A lot of Travis builds seem to hang. See the last two lines of 
https://travis-ci.org/apache/calcite/jobs/163844678:


  No output has been received in the last 10 minutes, this potentially
indicates a stalled build or something wrong with the build itself.

The build has been terminated

I don’t know why Travis does this. I suspect that the Travis VM doesn’t have 
enough memory to complete the task, or the job takes longer than Travis allows, 
but these are wild guesses.

I have been running the build and tests on my own Linux server in a variety of 
configurations (in particular JDK 1.7 and 1.8, and a couple of timezones, and 
occasionally also on macOS and Windows), and I can confirm that everything is 
stable.

Julian



On Oct 3, 2016, at 3:20 PM, Laurent Goujon <laur...@dremio.com> wrote:

Hi,

I just opened a pull request against Calcite master, and the travis build
failed. But then I checked the master branch, and according to Travis, it's
also failing.

Is it something known, or just a transient issue?

Thanks in advance,

Laurent





Re: is Travis build for master broken

2016-10-04 Thread F21
Also, you can check out the builds/runs here: 
https://app.wercker.com/F21/calcite/runs


The failures earlier were due to me adjusting the wercker config.

On 5/10/2016 3:08 AM, Julian Hyde wrote:

Francis,

Try removing the depth argument when you do 'git clone'.

Julian


On Oct 3, 2016, at 21:30, F21 <f21.gro...@gmail.com> wrote:

I tried to force travis to use a standard trusty vm in 
https://github.com/apache/calcite/pull/293, so that the VM is assigned 7.5GB of 
memory. Unfortunately, travis is still quite flaky, with tests only passing 
some of the time.

Would it be useful to move to another CI? I tried setting up my fork to use 
wercker: https://github.com/F21/calcite/blob/use-wercker/wercker.yml which 
failed here: 
https://app.wercker.com/F21/calcite/runs/build-jdk-7/57f3172e3b89220100273603?step=57f317381473219140d0

Unfortunately, I haven't found a solution to this error:

[ERROR] Failed to execute goal 
pl.project13.maven:git-commit-id-plugin:2.1.9:revision (default) on project 
calcite: Execution default of goal 
pl.project13.maven:git-commit-id-plugin:2.1.9:revision failed: Unable to calculate 
distance between [commit be18b25415a305aab2d0de2bd074755630e08462 1475548960 
sp] and [commit 08c56b158ffcfcf205a919cc9fff77a692e649f6 1474099831 sp]: 
Missing commit 63c51d0c6459a4de5cab01188a7f3b7dd1a259fb -> [Help 1]

Francis


On 4/10/2016 10:55 AM, Julian Hyde wrote:
I’ve noticed it. A lot of Travis builds seem to hang. See the last two lines of 
https://travis-ci.org/apache/calcite/jobs/163844678:


  No output has been received in the last 10 minutes, this potentially
indicates a stalled build or something wrong with the build itself.

The build has been terminated

I don’t know why Travis does this. I suspect that the Travis VM doesn’t have 
enough memory to complete the task, or the job takes longer than Travis allows, 
but these are wild guesses.

I have been running the build and tests on my own Linux server in a variety of 
configurations (in particular JDK 1.7 and 1.8, and a couple of timezones, and 
occasionally also on macOS and Windows), and I can confirm that everything is 
stable.

Julian



On Oct 3, 2016, at 3:20 PM, Laurent Goujon <laur...@dremio.com> wrote:

Hi,

I just opened a pull request against Calcite master, and the travis build
failed. But then I checked the master branch, and according to Travis, it's
also failing.

Is it something known, or just a transient issue?

Thanks in advance,

Laurent





Re: [DISCUSS] How to merge an Avatica client into the Calcite project (was Re: Golang driver for Phoenix and Avatica available)

2016-11-08 Thread F21

Hey all,

My thoughts in-line:

On 9/11/2016 4:34 AM, Josh Elser wrote:

Re-vitalizing some discussion here (updating the subject accordingly)

Thinking ahead with the assumption that the PMC and Francis all want 
to bring this code under governance of the Calcite PMC, what does 
"adoption of an Avatica client" really entail? My thoughts..


* Perform ASF IP Clearance [1]. Dbl-checking copyright is the most 
difficult thing (in most cases, it shouldn't be difficult).
This shouldn't be a problem. However the project imports some 
dependencies that are not licensed under the Apache 2 License. The 
dependencies are not stored in the git repository, but are imported by 
the user's package manager. Will this be an issue? See dependency list 
here: https://github.com/Boostport/avatica/blob/master/vendor/vendor.json

* Create a new repository for the new client
+1. In general, it is best to have the code in its own repository, so 
that it's easier to import.
* Define what (if any) released artifacts are (in addition to 
source-code)
Go library maintainers do not release any artifacts (in general), and we 
are not providing any artifacts, so this should not be a problem.

* Ensure code meets licensing requirements of ASF
The code is licensed under the Apache 2 License. Is there anything else 
that needs to be done?

* Establish build, test, and release steps for the new client
This should be quite straightforward. Since we do not need to build 
anything, we just need to make sure we have tests running on master and 
any pull requests. When required, pushing a tag will generate a release. 
As Go's package management story is somewhat fragmented, we do need to 
provide some instructions on installing the package and pulling in the 
dependencies. It currently uses govendor, but I want to switch to glide 
soon, as glide is more mature and works better (from my experience).

* Establish CI for commits/patches (manual, if not automated?)
Automated CI is a must. I currently use Wercker, as it allows me to 
easily build in docker containers using the official Go image. I can 
then easily update Go's version by switching the image in the 
configuration file. I believe Jenkins does support using docker for 
builds, so I hope this is something that is possible. For commits and 
patches, I would prefer to use Github pull requests, just like Calcite. 
We should also get the ASF bot to automatically build PRs and ping the 
relevant JIRAs (if any).


My thinking is that we can rely on the backwards compatibility 
guarantees of Avatica's wire protocol to deal with the differing 
release schedules. Specifically, an older version of the Golang client 
should still work with newer versions of the Avatica server -- they 
should not require simultaneous releases of all Avatica clients with 
the de-facto Java client and Avatica server.


+1. In fact, I believe that most clients have switched to using 
protobufs, so backwards compatibility should not be a big problem unless 
we accidentally break something.
If we can codify these practices, it would make adoption of future 
clients less intimidating, letting us focus on the positives of having 
a large collection of clients in other languages.


Thoughts?
+1. Having lots of clients would be a huge plus for Calcite and other 
projects such as Phoenix. Go 1.8, which is due in a few months, will 
bring a lot of new features to the database/sql package (see 
https://docs.google.com/document/d/1F778e7ZSNiSmbju3jsEWzShcb8lIO4kDyfKDNm4PNd8/edit?usp=sharing). 
I haven't had a chance to update the project to include the new 
features, but I am really excited :)


- Josh

[1] http://incubator.apache.org/ip-clearance/

Julian Hyde wrote:
Probably not, in the near term at least. The two projects have 
separate release schedules, artifacts and web sites, which seems to 
give them enough breathing space for now. Maybe we’ll split source 
code repositories at some point. Because the code is all managed by 
Apache’s IP governance, any option we choose would be fairly 
straightforward.


Julian



On May 17, 2016, at 8:11 PM, F21<f21.gro...@gmail.com>  wrote:

Thanks for opening the issue on JIRA, Julian.

Let me know if there's anything I can do to speed up the process. 
Will Avatica be spun out as its own project?


Francis

On 18/05/2016 1:06 PM, Julian Hyde wrote:
It sounds as if there is general approval for this. I have logged 
https://issues.apache.org/jira/browse/CALCITE-1240<https://issues.apache.org/jira/browse/CALCITE-1240> 
to track.


Julian


On May 17, 2016, at 8:00 PM, Josh Elser<josh.el...@gmail.com>  wrote:

Big +1 from me.

I think if you're amenable to it, Francis, I'm more than willing 
to help make this a "formal" part of Avatica!


Congrats and great work on what you have done already!

F21 wrote:

That would be really great! I think that would help make a lot of
Phoenix drivers currently available to support avatica 
generically. It

Dependency-free avatica client

2017-04-02 Thread F21

Hi guys,

I discovered a universal SQL CLI client written in golang: 
https://github.com/knq/usql


The author has added support using my Avatica golang driver 
(github.com/Boostport/avatica). The client has no dependencies and 
compiles down to a single binary. This makes it easy to start querying 
your avatica servers without having to set up a Java environment.


Francis



Go Avatica/Phoenix driver updated to support new Go 1.8 features

2017-03-08 Thread F21

Hi all,

Go 1.8 was released recently and the database/sql package saw a lot of 
new features. I just tagged the v1.3.0 release for the Go Avatica 
driver[0] which ships all of these new features.


The full list of changes in the database/sql package is available here: 
https://docs.google.com/document/d/1F778e7ZSNiSmbju3jsEWzShcb8lIO4kDyfKDNm4PNd8/edit


Backwards compatibility:

- The new interfaces/methods are all additive. The implementation is 
also backwards compatible, so v1.3.0 will work with Go 1.7.x and below.


Highlights:

- Methods now support using context to enable cancellation and timeouts 
so that queries can be cancelled on the client side. Note: Since there 
is no mechanism to cancel queries on the server, once a query is sent to 
the server, users should assume that it will be executed.


- The Ping method will now connect to the server and execute `SELECT 1` 
to ensure that the server is ok.


- More column type information: It is now possible to get the column 
name, type, length, precision + scale, nullability and the Go scan type 
for a column in a result set.


- Support for multiple result sets. Avatica has supported multiple 
result sets for a while, and this maps really well to the multiple 
result sets support introduced in Go 1.8.


Unimplemented features:

- Since Calcite/Avatica does not support named bind parameters in 
prepared statements, the driver will throw an error if you try to use them.


If you have any question or comments, please let me know!

Cheers,

Francis

[0] https://github.com/Boostport/avatica



Avatica and retrying in HA mode

2017-07-17 Thread F21

I am investigating HA support for the Go Avatica driver.

Let's assume a scenario where we have multiple Avatica servers behind a 
load balancer and do not want to use sticky sessions.


Currently, these are some of the questions I have:

1. A connection with an id "123" is opened. A server handles the request 
and then fails immediately after. We then call a 
PrepareAndExecuteRequest using the connection_id of "123". In this case, 
what happens? ExecuteResponse does not appear to have a field telling us 
the connection_id is invalid.


2. A connection with an id "456" is opened. A server handles the request 
and we call PrepareAndExecuteRequest. This returns a resultset. The 
server fails at this point. We then call a FetchRequest to fetch more 
rows, but the server has no record of this query. If missing_statement 
is true, we recreate the statement on a new connection. What is the 
expected corrective action if missing_results is true? I am assuming we 
use a SyncResultsRequest to sync the results, and if SyncResultsResponse 
has missing_statement set to true, because the server failed, we 
recreate the statement and retry. Are there any cases where 
missing_statement is false but missing_results is true (what causes this 
scenario)? Also, how about missing_statement = true and missing_results 
= false?


3. I noticed that with ExecuteRequest, FetchRequest and 
ExecuteBatchRequest, there is sufficient information in the 
statement_handle and other fields to allow Avatica to automatically 
recreate the statement or result set on the server if it is missing. Is 
there any reason why this is not being automatically handled by Avatica?


Cheers,
Francis



Changes to first_frame_max_size in Avatica's ExecuteRequest

2017-07-17 Thread F21
In CALCITE-1353, the first_frame_max_size field was changed from a 
uint64 to an int32 because the value can be negative.


The change to the protobuf definitions can be seen here: 
https://github.com/apache/calcite-avatica/blob/master/core/src/main/protobuf/requests.proto#L132


I've been making some updates to the Go driver and will be pulling in 
the new protobuf definitions from the calcite-avatica source tree to 
reflect this change.


From what I can see, does this mean backwards compatibility is broken? 
Older clients will send a uint64 for the first_frame_max_size field, 
but the server now expects an int32. Newer clients will send an int32, 
but an older server would expect a uint64.


What is the behavior if both deprecated_first_frame_max_size and 
first_frame_max_size are sent to the server?


In this case, I'll probably bump a new major release for the driver to 
communicate this backwards-incompatibility.


Francis



Is Avatica's ResultSetResponse's Signature field always present?

2017-07-11 Thread F21
I have a bug report for the Go Avatica driver where someone executed an 
`UPSERT` statement and caused the driver to crash. See 
https://github.com/Boostport/avatica/issues/34


The driver crashed because we tried to read 
`ResultSetResponse.Signature`, and it was null since the statement was 
an UPSERT.


According to the protobuf documentation [0], signature is non-optional 
and should always be present. Does this guarantee extend to data 
modification statements like UPSERT?


Cheers,

Francis


[0] 
https://calcite.apache.org/avatica/docs/protobuf_reference.html#resultsetresponse




Re: Kerberos Authentication and Avatica

2017-07-10 Thread F21

Hey Josh,

Thanks for clearing things up. In Go, it is not idiomatic for a database 
driver to reach out to environment variables. I think I will add an 
additional parameter called `krb5Conf` for users to point the driver to 
the location of `krb5.conf`. In the event that it is not provided, I 
plan to search common locations listed here: 
https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rsec_SPNEGO_config_krb5.html 
and 
https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html 



Regarding the use-case where the user performs authentication and passes 
the ticket to Avatica, what does the driver configuration look like? In 
particular, if I were using the Java driver, is it correct to assume 
that I'd set `authentication` to `SPNEGO` and leave `keytab` and 
`principal` as blank? In that case, I am assuming the Java Kerberos 
library would find the cached ticket and set up the appropriate HTTP 
requests.


Cheers,
Francis

On 11/07/2017 12:49 AM, Josh Elser wrote:

Hey Francis,

On 7/10/17 7:09 AM, F21 wrote:

Follow up questions:
- According to the client reference for the principal parameter [0], 
the Java client is able to perform a Kerberos login before contacting 
the Avatica server. There appears to be no way to set the KDC address 
into the client. How does the Java client perform Kerberos logins?


This is convention for Java. There are expected locations at which a 
file, krb5.conf, is located on platforms. For Linux, this is 
/etc/krb5.conf.


- There is also an option for the user to perform the login 
themselves. In this case, how does the Java client pass the Kerberos 
ticket to the Avatica server?


Again, convention. On Linux, the location of a user's ticket cache is 
defined to be /tmp/krb5cc_$(id -u $(whoami)). This location can be 
overridden by the environment variable KRB5CCNAME. All of this is 
handled by Java itself.


This is definitely the common case for interactive users.

[0] 
https://calcite.apache.org/avatica/docs/client_reference.html#principal


On 10/07/2017 3:57 PM, F21 wrote:
Recently, I came across a maintained pure-go kerberos client and 
server [0].


I am now in the process of adding SPNEGO authentication to the Go 
avatica client [1].


For the implementation, the plan is to make it as close to the 
official (Java) client's implementation as possible. For SPNEGO, the 
Java client uses these two parameters: principal and keytab.


The keytab parameter is easy to understand: a path to a keytab file.

I'd like to confirm what a valid string for the principal looks like.
- Is it a Service Principal Name?
- What are the valid formats for the principal? A valid SPN looks 
like User1/User2@realm.

- For the above example, I am assuming user2 can be optional.
- Can the realm be optional?


See 
http://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html. 
This page does a very good job at concisely expressing what a Kerberos 
principal is and what can be implied (based on krb5.conf).


Let me know if you still have questions.


Cheers,
Francis

[0] https://github.com/jcmturner/gokrb5
[1] https://github.com/Boostport/avatica







Kerberos Authentication and Avatica

2017-07-10 Thread F21

Recently, I came across a maintained pure-go kerberos client and server [0].

I am now in the process of adding SPNEGO authentication to the Go 
avatica client [1].


For the implementation, the plan is to make it as close to the official 
(Java) client's implementation as possible. For SPNEGO, the Java client 
uses these two parameters: principal and keytab.


The keytab parameter is easy to understand: a path to a keytab file.

I'd like to confirm what a valid string for the principal looks like.
- Is it a Service Principal Name?
- What are the valid formats for the principal? A valid SPN looks like 
User1/User2@realm.

- For the above example, I am assuming user2 can be optional.
- Can the realm be optional?

Cheers,
Francis

[0] https://github.com/jcmturner/gokrb5
[1] https://github.com/Boostport/avatica


Re: Kerberos Authentication and Avatica

2017-07-10 Thread F21

Follow up questions:
- According to the client reference for the principal parameter [0], the 
Java client is able to perform a Kerberos login before contacting the 
Avatica server. There appears to be no way to set the KDC address into 
the client. How does the Java client perform Kerberos logins?


- There is also an option for the user to perform the login themselves. 
In this case, how does the Java client pass the Kerberos ticket to the 
Avatica server?


[0] https://calcite.apache.org/avatica/docs/client_reference.html#principal

On 10/07/2017 3:57 PM, F21 wrote:
Recently, I came across a maintained pure-go kerberos client and 
server [0].


I am now in the process of adding SPNEGO authentication to the Go 
avatica client [1].


For the implementation, the plan is to make it as close to the 
official (Java) client's implementation as possible. For SPNEGO, the 
Java client uses these two parameters: principal and keytab.


The keytab parameter is easy to understand: a path to a keytab file.

I'd like to confirm what a valid string for the principal looks like.
- Is it a Service Principal Name?
- What are the valid formats for the principal? A valid SPN looks like 
User1/User2@realm.

- For the above example, I am assuming user2 can be optional.
- Can the realm be optional?

Cheers,
Francis

[0] https://github.com/jcmturner/gokrb5
[1] https://github.com/Boostport/avatica





Re: Kerberos Authentication and Avatica

2017-07-11 Thread F21

Thanks for the pointers, Josh :)

I'll post back to the list when a release has been tagged.

On 11/07/2017 11:38 AM, Josh Elser wrote:

On Jul 10, 2017 20:18, "F21" <f21.gro...@gmail.com> wrote:

Hey Josh,

Thanks for clearing things up. In Go, it is not idiomatic for a database
driver to reach out to environment variables. I think I will add an
additional parameter called `krb5Conf` for users to point the driver to the
location of `krb5.conf`. In the event that it is not provided, I plan to
search common locations listed here:
https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rsec_SPNEGO_config_krb5.html
and
https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html


Sounds reasonable to me!

Regarding the use-case where the user performs authentication and passes
the ticket to Avatica, what does the driver configuration look like? In
particular, if I were using the Java driver, is it correct to assume that
I'd set `authentication` to `SPNEGO` and leave `keytab` and `principal` as
blank? In that case, I am assuming the Java Kerberos library would find the
cached ticket and set up the appropriate HTTP requests.


Exactly right. The user does nothing special, and then the underlying Java
security code provides it when the HTTP client library asks for the ticket.

Cheers,
Francis

On 11/07/2017 12:49 AM, Josh Elser wrote:


Hey Francis,

On 7/10/17 7:09 AM, F21 wrote:


Follow up questions:
- According to the client reference for the principal parameter [0], the
Java client is able to perform a Kerberos login before contacting the
Avatica server. There appears to be no way to set the KDC address into the
client. How does the Java client perform Kerberos logins?


This is convention for Java. There are expected locations at which a file,
krb5.conf, is located on platforms. For Linux, this is /etc/krb5.conf.

- There is also an option for the user to perform the login themselves. In

this case, how does the Java client pass the Kerberos ticket to the Avatica
server?


Again, convention. On Linux, the location of a user's ticket cache is
defined to be /tmp/krb5cc_$(id -u $(whoami)). This location can be
overridden by the environment variable KRB5CCNAME. All of this is handled by
Java itself.

This is definitely the common case for interactive users.

[0] https://calcite.apache.org/avatica/docs/client_reference.html#principal

On 10/07/2017 3:57 PM, F21 wrote:


Recently, I came across a maintained pure-go kerberos client and server
[0].

I am now in the process of adding SPNEGO authentication to the Go
avatica client [1].

For the implementation, the plan is to make it as close to the official
(Java) client's implementation as possible. For SPNEGO, the Java client uses
these two parameters: principal and keytab.

The keytab parameter is easy to understand: a path to a keytab file.

I'd like to confirm what a valid string for the principal looks like.
- Is it a Service Principal Name?
- What are the valid formats for the principal? A valid SPN looks like
User1/User2@realm.
- For the above example, I am assuming user2 can be optional.
- Can the realm be optional?


See http://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html.
This page does a very good job at concisely expressing what a Kerberos
principal is and what can be implied (based on krb5.conf).

Let me know if you still have questions.

Cheers,

Francis

[0] https://github.com/jcmturner/gokrb5
[1] https://github.com/Boostport/avatica








Go Client v2.0.0 released

2017-07-18 Thread F21

Hi all,

I just tagged v2.0.0 for the Go database/sql driver. This will be the 
only supported version for Avatica 1.10.0 and Phoenix 4.11.0 onwards 
due to the backwards-incompatible change for CALCITE-1353. For older 
versions of Avatica and Phoenix, please use the v1.x.x series. For this 
release, I believe the Go client is at feature parity with the Java client.


Key highlights:

- Support for HTTP BASIC, HTTP DIGEST and Kerberos/SPNEGO authentication.

- Ability to retry recreating a connection if the server returns a 
NoSuchConnection exception.


Future work:

I'd like to get the driver to support HA mode in the future. Currently, 
it is possible to implement retrying and recreating a statement due to a 
missing_statement in an ExecuteResponse. However, due to the way the Go 
database/sql package is built, I am unable to do this with the 
missing_statement and missing_results in a FetchResponse. I've opened an 
issue regarding this on the Go issue tracker for those interested: 
https://github.com/golang/go/issues/21059


The current workaround is to handle these failures in the client code 
that uses the database/sql package and the avatica driver.


Cheers,

Francis



Re: Go Client v2.0.0 released

2017-07-19 Thread F21

Hey Julian and James,

There's currently a JIRA open for this (CALCITE-1240 [0]). Let's 
continue the discussion there.


Francis

[0]: https://issues.apache.org/jira/browse/CALCITE-1240

On 20/07/2017 3:16 AM, Julian Hyde wrote:

I second that. Thank you, Francis!

I think it’s been said before, but if you were to contribute your Go client to 
Avatica (a sub-project of Calcite) we would gladly accept it. It wouldn’t get 
in the way of the development process but it would bring the client to a larger 
audience. Phoenix already consumes quite a few Calcite and Avatica libraries.

Julian




On Jul 18, 2017, at 10:15 PM, James Taylor <jamestay...@apache.org> wrote:

Awesome work, Francis! Would be great to get this into Avatica & Phoenix if
you're interested.

---------- Forwarded message ----------
From: F21 <f21.gro...@gmail.com>
Date: Tue, Jul 18, 2017 at 9:37 PM
Subject: Go Client v2.0.0 released
To: dev@calcite.apache.org


Hi all,

I just tagged v2.0.0 for the Go database/sql driver. This will be the only
supported version for Avatica 1.10.0 and Phoenix 4.11.0 onwards due to the
backwards-incompatible change for CALCITE-1353. For older versions of
Avatica and Phoenix, please use the v1.x.x series. For this release, I
believe the Go client is at feature parity with the Java client.

Key highlights:

- Support for HTTP BASIC, HTTP DIGEST and Kerberos/SPNEGO authentication.

- Ability to retry recreating a connection if the server returns a
NoSuchConnection exception.

Future work:

I'd like to get the driver to support HA mode in the future. Currently, it
is possible to implement retrying and recreating a statement due to a
missing_statement in an ExecuteResponse. However, due to the way the Go
database/sql package is built, I am unable to do this with the
missing_statement and missing_results in a FetchResponse. I've opened an
issue regarding this on the Go issue tracker for those interested:
https://github.com/golang/go/issues/21059

The current workaround is to handle these failures in the client code that
uses the database/sql package and the avatica driver.

Cheers,

Francis





Re: Changes to first_frame_max_size in Avatica's ExecuteRequest

2017-07-19 Thread F21

Thanks, Josh!

I totally forgot protobuf fields are identified by their field numbers.

Francis

On 20/07/2017 4:32 AM, Josh Elser wrote:

Nope, compatibility is still there.

The names of attributes in protobuf messages are irrelevant. The number 
(3 or 5, in this case) is what guarantees correctness. Whether you 
provide the first_frame_max_size as id=3 or id=5, the server will 
parse the number accordingly and (should *wink*) do the right thing.


I am fairly positive I wrote some unit tests around this code.

On 7/17/17 5:57 AM, F21 wrote:
In CALCITE-1353, the first_frame_max_size field was changed from a 
uint64 to an int32 because the value can be negative.

The change to the protobuf definitions can be seen here: 
https://github.com/apache/calcite-avatica/blob/master/core/src/main/protobuf/requests.proto#L132 



I've been making some updates to the Go driver and will be pulling in 
the new protobuf definitions from the calcite-avatica source tree to 
reflect this change.


 From what I can see, does this mean backwards compatibility is 
broken? Older clients will send a uint64 for the 
first_frame_max_size field, but the server now expects an int32. 
Newer clients will send an int32, but an older server would 
expect a uint64.


What is the behavior if both deprecated_first_frame_max_size and 
first_frame_max_size are sent to the server?


In this case, I'll probably bump a new major release for the driver 
to communicate this backwards-incompatibility.


Francis






Re: Avatica and retrying in HA mode

2017-07-19 Thread F21

Hey Josh,

Thanks for the clarification!

Francis

On 20/07/2017 4:30 AM, Josh Elser wrote:



On 7/17/17 8:04 AM, F21 wrote:

I am investigating HA support for the Go Avatica driver.

Let's assume a scenario where we have multiple Avatica servers behind 
a load balancer and do not want to use sticky sessions.


Currently, these are some of the questions I have:

1. A connection with an id "123" is opened. A server handles the request 
and then fails immediately after. We then call a 
PrepareAndExecuteRequest using the connection_id of "123". In this 
case, what happens? ExecuteResponse does not appear to have a field 
telling us the connection_id is invalid.


I don't recall the exact context, but there is information passed back 
to definitively know that either a connection or statement do not 
exist server-side. I can dig into the Java source if you're unable to 
find it :)


2. A connection with an id "456" is opened. A server handles the 
request and we call PrepareAndExecuteRequest. This returns a 
resultset. The server fails at this point. We then call a 
FetchRequest to fetch more rows, but the server has no record of this 
query. If missing_statement is true, we recreate the statement on a 
new connection. What is the expected corrective action if 
missing_results is true? I am assuming we use a SyncResultsRequest to 
sync the results, and if SyncResultsResponse has missing_statement set 
to true, because the server failed, we recreate the statement and 
retry. Are there any cases where missing_statement is false but 
missing_results is true (what causes this scenario)? Also, how about 
missing_statement = true and missing_results = false?


I don't think that missing_statement=false/missing_results=true would 
ever happen, as a server failure would be the only possible cause (the 
Avatica server would have crashed and the statement would be lost 
anyway).


A missing statement without missing results may happen (same scenario 
as the first question). I believe it is a true statement that we would 
only be missing results if we were not missing a statement (as the 
results are created by the statement). Similarly, we would never have 
results if we were missing a statement.


If it helps to visualize in code, in JDBC:

  Connection conn = getConnection(conn_id); // missing_connection
  Statement stmt = getStatement(conn, stmt_id); // missing_statement
  ResultSet results = getResults(conn, stmt); // missing_results

We never get to the further point if we miss context earlier.

3. I noticed that with ExecuteRequest, FetchRequest and 
ExecuteBatchRequest, there is sufficient information in the 
statement_handle and other fields to allow Avatica to automatically 
recreate the statement or result set on the server if it is missing. 
Is there any reason why this is not being automatically handled by 
Avatica?


Presently, the server doesn't hold on to that information. For the HA 
case, the server you're currently talking to never saw the creation of 
a connection, statement, results. That's why it has to be client-driven.


We could try to make the Avatica server smart enough to re-create the 
objects when they were automatically expired (after 10mins: default), 
but the common case is that the previous server died and the new 
server just doesn't know.



Cheers,
Francis





Re: Is Avatica's ResultSetResponse's Signature field always present?

2017-07-11 Thread F21
Thanks for clarifying! I've opened 
https://github.com/apache/calcite-avatica/pull/10 to make this clearer.


Cheers,
Francis

On 12/07/2017 1:28 AM, Josh Elser wrote:
There's one point I want to bring up first about "optional" fields. 
Every attribute on Avatica's messages is (or should be) listed as 
optional. This is how we correctly handle "drift" in the protocol 
itself. If we had fields marked as required, we would never be able 
to change them, which could cause problems.


It would probably be good to work towards tying docs to a specific 
version so we can remove this ambiguity :)


To answer your question, no, there will be no Signature for 
INSERT/UPSERT operations (any operation which returns a number of rows 
affected instead of a ResultSet). For SQL which generates a ResultSet 
(some rows of data), the Signature would "always" be provided.


On 7/11/17 4:38 AM, F21 wrote:
I have a bug report for the Go Avatica driver where someone executed 
an `UPSERT` statement and caused the driver to crash. See 
https://github.com/Boostport/avatica/issues/34


The driver crashed because we tried to read 
`ResultSetResponse.Signature`, and it was null since the statement was 
an UPSERT.


According to the protobuf documentation [0], signature is 
non-optional and should always be present. Does this guarantee extend 
to data modification statements like UPSERT?


Cheers,

Francis


[0] 
https://calcite.apache.org/avatica/docs/protobuf_reference.html#resultsetresponse 







[GitHub] calcite-avatica-go pull request #1: WIP: Initial import of the avatica-go cl...

2017-08-08 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/1

WIP: Initial import of the avatica-go client

cc @joshelser @julianhyde

Todo:
- [x] Switch imports from github.com/Boostport/avatica to 
github.com/apache/calcite-avatica-go
- [ ] Update files to include Apache headers
- [ ] Create release history page (not sure what this one is @julianhyde)
- [ ] Switch CI to Apache Jenkins
- [ ] Use self-hosted coverage reporting tool (maybe jenkins has a plugin?)
- [ ] Update URL in the awesome-go list
- [ ] Update driver description on Avatica website

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go initial-import

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/1.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1


commit c66ae9dc455d9deb532fe7839903be052ddaba17
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-16T00:51:50Z

Initial commit

commit c9263b2a28bbd993de1db86297e93a525cd518a1
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-16T02:29:59Z

Make docker entrypoint executable.

commit 0912c74b3477fb6dcf04b5052b691e9ae39e580c
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-16T03:07:54Z

Fix AVATICA_HOST in wercker.yml.

commit deda32881fd1e2ff7eed7d22c610e4a626a6ff53
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-16T03:11:06Z

Fix AVATICA_HOST environment variable in wercker.yml.

commit b3af95f12416d62cc8a7b3b932e8d7ab06fa24f3
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-16T03:19:24Z

Use govendor to install dependencies in wercker.

commit ec158e73f9626395aaa39a58485b74cd70281f6e
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-16T04:45:37Z

Increase wercker timeouts.

commit ffb99c5e3857c7fe61289be0502dc7a2902be445
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T00:51:27Z

Move hbase-phoenix container to separate repository.

commit 8dc7cb27bd23c624e94af369604624973804c69d
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T00:53:58Z

Add information on updating the protocol buffer definitions to the README.

commit c97990796275b63d9d6a2e2b3b5968e83e629c58
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T00:57:53Z

Use hbase-phoenix all-in-one image with moby.

commit f280cb769453c3e2e218eab3b5d518b13536852b
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T00:59:13Z

Set version for hbase image in moby.yml.

commit 4b3e21f9b45abf4b136526fe338de29358bd0509
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T01:00:30Z

Use golang alpine container.

commit 57cd6958f5624e561dd815f6254c750ac8ae3402
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T01:09:20Z

Add badges and license info.

commit 8e9ae57f1cc52b0b97720d091b90bd99febe0250
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T01:20:59Z

Use sh for alpine builds.

commit 99b800bc11ee5e4d1a2c26eb595baaf152b4bf2c
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T01:23:48Z

Install git for alpine build.

commit 43b69c584c1c3daf377a6800e6f76961e7b8644b
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T01:49:15Z

Fixed AVATICA_HOST environment variable in wercker.yml.

commit e0e66000cd565f1664993fbcb93e2948fcbd3be3
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T05:42:18Z

Update README.

commit 5cc034cabd7981acfe0f8e3ea2468336148f9bb8
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-17T05:50:31Z

Update package description.

commit 8bdec2716c03a1e730eb9e0dc3142b945340aed2
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-18T23:08:23Z

Fix golint errors.

commit a07f22dbdcbc44f184819438083a74742ff6350c
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-29T03:41:51Z

Add coverage information using coveralls.io.

commit b39d0a6b4fde414233f55a538a224f45e31aa6fa
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-29T12:22:29Z

Point coveralls badge to master.

commit 118872b450937983be3bf54c7a4d822d49139db8
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-30T00:34:29Z

Use set as covermode when generate coverage report.

commit 22a67b79ea1f6d4bdf7530be7002d58c5afe5fb7
Author: Francis Chuang <francis.chu...@gmail.com>
Date:   2016-05-30T06:17:51Z

Add test to check that LastInsertId() returns an error.

commit ae973c57180ea9

Avatica and Authentication

2017-05-15 Thread F21

Hey guys,

I recently received a request to add authentication to the Go Avatica 
driver.


I am currently investigating ways to implement this. One of the 
limitations of the Go database/sql package is that all configuration 
options need to be passed in through the connection string (DSN). For 
the Go driver, I currently have the following:


http://address:port[/schema][?parameter1=value&...parameterN=value]

Parts between "[" and "]" are optional. Valid parameters are 
location, maxRowsTotal, frameMaxSize and transactionIsolation.


Does the Java Avatica thin client support HTTP Basic and Digest 
authentication? After reading the docs on Security[0], I get the 
impression that it is only useful for gating access to the Avatica 
server. If using HTTP Basic or Digest auth, is the role required in the 
client request? If so, what is the HTTP header for sending the role to 
the server? I am also trying to assess whether HTTP Basic and Digest 
authentication are worth implementing.


I think SPNEGO/Kerberos auth is probably the best form of authentication 
and is used by Phoenix/HBase and quite a lot of other database backends. 
However, the downside is that there is no usable pure-Go SPNEGO/Kerberos 
library, as the only one that exists is still lacking a lot of features. 
There is a fork of jmckaskill/gokerb[1] which is more up to date but 
doesn't look to be actively maintained.


The only option for implementing SPNEGO at this point is to use a 
library that wraps libgssapi, for example apcera/gssapi[2]. However, the 
downside of using something like this is that it relies on cgo, which 
makes it harder to debug and makes the Go driver unusable on Windows.


I also noticed that the JDBC driver has support for user and password 
parameters which are passed directly to the underlying server. If I want 
to build support for this, how should the user and password be sent to 
avatica? Should I include a `user` and `password` key in the info map 
within OpenConnectionRequest[3]?


Cheers,

Francis


[0] 
http://calcite.apache.org/avatica/docs/security.html#http-basic-authentication


[1] https://github.com/jgcallero/gokerb

[2] https://github.com/apcera/gssapi

[3] 
http://calcite.apache.org/avatica/docs/protobuf_reference.html#openconnectionrequest




Re: Avatica and Authentication

2017-05-25 Thread F21

Hey Josh,

Thanks, that was super useful. I've now implemented HTTP Basic and 
Digest Auth and passing the JDBC username/password to the backing DB. I 
want to spin up an Avatica HSQLDB server and have built the docker 
images. Where should the `jetty-users.properties` file be mounted so 
that the Avatica server can see it?


Francis

On 20/05/2017 12:53 AM, Josh Elser wrote:



F21 wrote:

Just realized the proposed design does not take into account the
situation where someone wants to do HTTP Basic or Digest auth against
avatica, and then pass a separate username/password pair to the 
backing db.


On 17/05/2017 9:24 AM, F21 wrote:

Hi Josh,

Thanks for the detailed response!

In terms of the HashLoginService in Jetty, does the client need to
pass a role to Avatica? If so, how is this done?
Also, if I define multiple roles for a given user in Avatica using a
properties file, do they have any actual effect (from what I can see,
Avatica does not seem to use the roles at all)?


No, the user doesn't have a concept of roles. The roles are a 
Jetty construct to map a set of users that are logically grouped 
together, e.g. an "admin" role may be authorized to see resources that 
a "developers" role may not.



Here's my summary of the possible authentication methods (let me know
if I miss something):
- HTTP Basic or HTTP Digest authentication against Avatica.
- SPNEGO against Avatica.
- SPNEGO against Avatica that is impersonated, so that the queries
against the database are run against the authenticated user.
- A user and password pair (`user` and `password` keys in the
OpenConnectionRequest map) that is passed straight down to the backing
database (Avatica does not do any authentication).

I am still somewhat confused about using `avatica_user` and
`avatica_password` for HTTP Basic and Digest auth. I am assuming that
instead of passing those as an HTTP header (Authorization: Basic
QWxhZGRpbjpPcGVuU2VzYW1l), I should
set them in the map for OpenConnectionRequest and set the
`authentication` key to either `BASIC` or `DIGEST`?


No: avatica_user and avatica_password are implementation details for 
the Avatica JDBC driver only. *Each driver* needs to implement hooks 
for how the HTTP authentication is implemented (if you choose to do 
so). Specifically, you would need to expose configuration in the Go 
driver to accept username and password to specifically use for the 
HTTP requests.


It's certainly a reasonable idea to re-use the same naming as it keeps 
things concise across client implementations :)



Is passing the user and password pair using `user` and `password`
straight down to the backing database an officially supported method?
That's what was requested with the Go driver. In this instance, they
are using the Phoenix Query Server, but I believe the PQS was modified
to check username/password pairs, as I don't believe Phoenix/HBase
supports username/password auth.


Ok, I think I see where your confusion is coming from.

"user" and "password" are officially supported as they are constructs 
from JDBC. There is the implicit assumption that these are present 
(unless your "real backend database" doesn't support them).


Avatica layers more authentication on "top" of that. The big 
difference is that HTTP Basic/Digest authentication and SPNEGO 
authentication are done at the "protocol" level. The protocol 
authentication we're getting isn't directly translated into backend 
Avatica RPCs.


Implementations of Avatica *can* make an exception to that rule. For 
example, with Apache Phoenix, we hook into the authenticated user from 
the SPNEGO request and Avatica "impersonates" the end-user (as we know 
that the request was strongly authenticated with their Kerberos 
credentials already). Phoenix/HBase uses that impersonated Kerberos 
user _in lieu_ of the normal JDBC "user" and "password".


Does this make sense? The core idea is that there are two separate 
layers of authentication, but you can wire them together in the backend.



For SPNEGO impersonation, is the request automatically impersonated as
long as the backing db has implemented impersonation?


No. In the server-side portion of a SPNEGO handshake, the server 
*never* sees the client's "keys". It's impossible for Avatica to 
perform some action as the end user.


How this can work is that Avatica is configured to use its server 
identity but *say that it is the client*, e.g. "I am 
<avatica-server-user>, here are my credentials, and I am acting on 
behalf of <end-user>". The reason this can work is that the backend DB 
is configured to allow <avatica-server-user> to be treated as 
<end-user>. In any other configuration, this would be a security flaw.
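For reference, impersonation of this kind is typically enabled on the backend with Hadoop-style proxy-user settings. A sketch of what this might look like in the backing cluster's configuration (property names follow the standard Hadoop proxy-user convention; the principal name "avatica" and the host are assumptions, not values from this thread):

```xml
<!-- Illustrative only: allows the "avatica" service user to impersonate
     other users when requests originate from the named host. -->
<property>
  <name>hadoop.proxyuser.avatica.hosts</name>
  <value>avatica-server.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.avatica.groups</name>
  <value>*</value>
</property>
```

Restricting the hosts and groups values as tightly as possible is what keeps this from being the security flaw described above.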



In terms of implementation, my current difficulties are:
- Lack of a good pure-go SPNEGO library: Maybe it's possible to
support SPNEGO on *nix-like systems only, while throwing an error if
SPNEGO is used on Windows.


That's a shame. Maybe one will come about if we wait :)

Avatica images on docker hub

2017-05-25 Thread F21
I have implemented HTTP authentication for the Go driver and want to 
test it against avatica to check that it works correctly.


I believe that Avatica has a TCK that contains a built-in database 
backend using HSQLDB. However, the repo on Docker Hub appears to be 
empty: https://hub.docker.com/r/apache/calcite-avatica/


This is what happens when I try to pull the image:

$ docker pull apache/calcite-avatica
Using default tag: latest
Error response from daemon: manifest for apache/calcite-avatica:latest 
not found


Is there any chance the image could be pushed? (Will probably save a lot 
of time for most people :) )


Cheers,

Francis



Re: Towards Avatica 1.11 and Avatica-Go version ?.?

2017-11-27 Thread F21

Here's my update for avatica-go:

I am currently still accepting PRs and pushing fixes to the original 
boostport/avatica repository as there is not a release for Avatica-Go 
yet. Changes to boostport/avatica are currently cherry-picked into 
Avatica-Go.


In regards to the work done on Avatica-Go:
- Most of the pieces are already in place.
- The documentation for the website is in the _site directory on a 
branch, but it may need some updating.
- Need to make sure the website documentation is in the right format and 
directory structure so that it can be pulled in to calcite-avatica and 
built together.

- The tests are currently passing against Apache Phoenix.
- Unfortunately, the tests against Avatica HSQLDB are still failing due 
to CALCITE-2013, CALCITE-1957, CALCITE-1951 and CALCITE-1950.
- There is some Phoenix-specific code that parses the error message to 
extract the error code and the error constant. This is obviously quite 
brittle and will break against other DBs, so it would be nice if Avatica 
could return these pieces to us directly (I have not checked whether 
there's a JIRA for this).


I don't think I'm ready to release a version of Avatica-Go until the 
tests pass against Avatica HSQLDB, but I should be able to make 
significant progress once a new Avatica with updated deps is released.


Francis

On 28/11/2017 8:14 AM, Julian Hyde wrote:

It’s been a while since we released Avatica. At the very least we could use a 
new version to upgrade its dependencies.

Also, what about a release of Avatica-Go? I don’t think we have done an Apache 
release yet.

Julian





[GitHub] calcite-avatica-go pull request #7: Fix execute response with empty result s...

2017-12-14 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/7

Fix execute response with empty result set



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
fix-execute-response-with-empty-result-set

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/7.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #7


commit ff983fb33b148c14d96d8e330e542cb63120c8d4
Author: Francis Chuang <francis.chu...@boostport.com>
Date:   2017-12-14T10:10:41Z

Check size of ExecuteResponse.Results before using it because under some 
conditions, the result set can be empty

commit e3c045c78f6a1706f0a3cb4ef551ebd557d0510e
Author: Francis Chuang <francis.chu...@boostport.com>
Date:   2017-12-14T10:18:07Z

Remove empty slice declaration via literals




---


[GitHub] calcite-avatica-go pull request #5: Merge upstream updates

2017-11-13 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/5

Merge upstream updates



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
merge-upstream-updates

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/5.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5


commit 06d33a3256ba35ae884a7ec637b00f7794dfbd0e
Author: Francis Chuang <francis.chu...@boostport.com>
Date:   2017-11-13T23:36:00Z

Update dependencies

commit f6400276e20cd8a084254aaf7193fcf2c88d3c7a
Author: Francis Chuang <francis.chu...@boostport.com>
Date:   2017-11-13T23:58:18Z

Remove kerberos session renewal because gokrb5 now performs renewals by 
default

commit 05bc78eec0a5778f7b2f78ebd2f0481eb640b946
Author: Francis Chuang <francis.chu...@boostport.com>
Date:   2017-11-13T23:58:40Z

Bump Phoenix to 4.13




---


[GitHub] calcite-avatica-go pull request #6: Bump gokrb5 client to v2

2017-11-18 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/6

Bump gokrb5 client to v2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
update-from-upstream

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/6.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6


commit 4d019103b9efb92e0039f9c1b79d7497b551e14b
Author: Francis Chuang <francis.chu...@boostport.com>
Date:   2017-11-18T23:01:28Z

Bump gokrb5 client to v2




---


Re: query about Clob Blob sql support of avatica and calcite

2017-12-10 Thread F21

Hi Victor,

Are you using Avatica with HSQLDB? CLOB is currently unsupported by 
Avatica. Please see CALCITE-1957[0] to track this issue. Contributions 
are very welcome if you are able to contribute this feature! :)


Cheers,
Francis

On 10/12/2017 7:13 PM, victor wrote:

Hi, calcite development team:
Recently I have been testing Avatica and found that it does not support 
CLOB/BLOB SQL queries. Is that true, or is something wrong with my 
testing process? If it is true, is there a plan for fixing it, and what 
is that plan? Please let me know as soon as possible,
thanks a lot,


yours  best regards


victor lv





[GitHub] calcite-avatica-go issue #24: Change UUID package to github.com/hashicorp/go...

2018-06-15 Thread F21
Github user F21 commented on the issue:

https://github.com/apache/calcite-avatica-go/pull/24
  
@kenshaw Thanks! I'll merge it when the test turns green. As the project is 
now part of the Apache Foundation, I will need to start a vote on the mailing 
list in order to tag a release. How urgent is your dependency on this fix?


---


[GitHub] calcite-avatica-go issue #24: Change UUID package to github.com/hashicorp/go...

2018-06-15 Thread F21
Github user F21 commented on the issue:

https://github.com/apache/calcite-avatica-go/pull/24
  
@kenshaw, thanks, that looks great! As you're not a committer of Apache 
Calcite, can you please include your name in parentheses at the end of the 
commit message?


---


[GitHub] calcite-avatica-go issue #24: Change UUID package to github.com/hashicorp/go...

2018-06-15 Thread F21
Github user F21 commented on the issue:

https://github.com/apache/calcite-avatica-go/pull/24
  
Thanks for the heads up, I'll make sure to include the `v` in the tag for 
the next release. As the fix is not super urgent, I'll wait for a few more 
commits before starting a vote for a release.


---


[GitHub] calcite-avatica-go issue #24: Change UUID package to github.com/hashicorp/go...

2018-06-15 Thread F21
Github user F21 commented on the issue:

https://github.com/apache/calcite-avatica-go/pull/24
  
@kenshaw Thanks, this is a good idea! As this project is now part of the 
Apache Calcite project and the change is not trivial, can you please do the 
following:
- Open an issue in JIRA and set the component to `avatica-go`: 
https://issues.apache.org/jira/projects/CALCITE/issues
- Update your commit message to reflect the JIRA issue number and include 
your name, ex: `[CALCITE-1234] Change UUID package to 
github.com/hashicorp/go-uuid (Ken Shaw)`

Thanks! 😄 


---


[GitHub] calcite-avatica-go issue #24: Change UUID package to github.com/hashicorp/go...

2018-06-15 Thread F21
Github user F21 commented on the issue:

https://github.com/apache/calcite-avatica-go/pull/24
  
@kenshaw Can you use `(` instead of `[`? Sorry for being picky, but these 
are the standards being set by Apache Calcite.


---


[GitHub] calcite-avatica-go pull request #8: Bump dependencies

2018-02-05 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/8

Bump dependencies



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go bump-deps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/8.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #8


commit 245c620a3d108aca44c90837daaa09e99080f3cd
Author: Francis Chuang <francis.chuang@...>
Date:   2018-02-05T22:31:46Z

Bump dependencies




---


[GitHub] calcite-avatica-go pull request #25: [CALCITE-2372] Add Phoenix 4.14.0 for t...

2018-06-21 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/25

[CALCITE-2372] Add Phoenix 4.14.0 for testing and turn travis.yml into a 
build matrix



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
add-phoenix-4.14.0-for-testing

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/25.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #25


commit d661be75564d5dd89e4687d54659ecdbd28b574b
Author: Francis Chuang 
Date:   2018-06-22T00:12:45Z

[CALCITE-2372] Add Phoenix 4.14.0 for testing and turn travis.yml into a 
build matrix




---


[GitHub] calcite-avatica-go pull request #27: [CALCITE-2335] Add support for Go modul...

2018-08-28 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/27

[CALCITE-2335] Add support for Go modules and test against Go 1.11



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go go-modules

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/27.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #27


commit e589422c7b0c6fc3eb133a062593d20b5d6802d6
Author: Francis Chuang 
Date:   2018-08-28T10:25:20Z

[CALCITE-2335] Add support for Go modules and test against Go 1.11




---


[GitHub] calcite-avatica-go pull request #28: [CALCITE-2335] Set module version to v3

2018-08-28 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/28

[CALCITE-2335] Set module version to v3



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
fix-go-module-version

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/28.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #28


commit eb4d11c77d462c59c4376381972b4b86aba5037a
Author: Francis Chuang 
Date:   2018-08-28T23:06:16Z

[CALCITE-2335] Set module version to v3




---


[GitHub] calcite-avatica-go pull request #29: [CALCITE-2500] Test against Avatica 1.1...

2018-08-28 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/29

[CALCITE-2500] Test against Avatica 1.12.0 and Phoenix 5.0.0



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
bump-avatica-and-phoenix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/29.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #29


commit 49fd3a4c5908ef5151406166de1faadc92ad820f
Author: Francis Chuang 
Date:   2018-08-29T04:01:05Z

[CALCITE-2500] Test against Avatica 1.12.0 and Phoenix 5.0.0




---


[GitHub] calcite-avatica-go pull request #26: [CALCITE-2493] Update dependencies

2018-08-28 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/26

[CALCITE-2493] Update dependencies



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
update-dependencies

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/26.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #26


commit 334bc15f92dd5b719b0f58ca8873f6103085984c
Author: Francis Chuang 
Date:   2018-08-28T06:10:53Z

[CALCITE-2493] Update dependencies




---


[GitHub] calcite-avatica-go pull request #32: [CALCITE-2547] Update dependencies

2018-09-10 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/32

[CALCITE-2547] Update dependencies



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
update-dependencies

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/32.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #32


commit ac8581f94091e09bc4f9b9e8c6d64a04bd0c9f80
Author: Francis Chuang 
Date:   2018-09-10T12:11:19Z

[CALCITE-2547] Update dependencies




---


[GitHub] calcite-avatica-go pull request #30: [CALCITE-2545] Fix import self-referent...

2018-09-10 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/30

[CALCITE-2545] Fix import self-referential import paths to point to v3 of 
calcite-avatica-go



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go fix-import-paths

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/30.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #30


commit 351a1d0f92753bbc7b58afd50dbd0a9e0553d9d2
Author: Francis Chuang 
Date:   2018-09-10T11:12:43Z

[CALCITE-2545] Fix import self-referential import paths to point to v3 of
calcite-avatica-go




---


[GitHub] calcite-avatica-go pull request #31: [CALCITE-2544] Replace golang.org/x/net...

2018-09-10 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/31

[CALCITE-2544] Replace golang.org/x/net/context with stdlib context



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
remove-x/net/context

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/31.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #31


commit c016e6440bed51057e83d5178c769f582d685e11
Author: Francis Chuang 
Date:   2018-09-10T11:45:17Z

[CALCITE-2544] Replace golang.org/x/net/context with stdlib context




---


Re: Towards Avatica-Go release ?.?

2018-04-09 Thread F21
I am wrapping up some things today and plan to test Avatica Go against 
the latest version of Avatica.


I think we'll be able to make a release by the end of the week.

I am happy to be the release manager for this one.

The latest version of Avatica is 2.3.1 under Boostport/avatica, I think 
we should make this release 2.4.0.


Francis

On 10/04/2018 3:49 AM, Julian Hyde wrote:

We have to make a release of Avatica Go soon (like, within the next month).

As I’ve said previously, a tar-ball of the source (plus checksums/signatures 
and release notes) is sufficient. But we are an Apache project, and projects 
must make releases.

Can someone please volunteer to be release manager? I am too busy.

What should the version number be?

Julian





[GitHub] calcite-avatica-go pull request #19: Fix readme

2018-04-16 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/19

Fix readme



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go fix-readme

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/19.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19


commit d31aa931ba548643337c7a537f08ef8e555c16c8
Author: Francis Chuang <francischuang@...>
Date:   2018-04-17T05:31:35Z

Fix readme




---


[GitHub] calcite-avatica-go pull request #22: Check files for Apache license header i...

2018-04-20 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/22

Check files for Apache license header in make-release-artifacts script

cc @julianhyde 

One quick question:

In cb2d4cb4596d5850bd0eb10c9c7697b679aabc2d, the script was updated to 
strip `-rc` identifiers from the tag name. Unfortunately, this means that if we 
only have a tag such as `3.0.0-rc1`, the tag won't be checked out. Should we 
revert it back to just: `latestTag=$(git describe --tags `git rev-list --tags 
--max-count=1`)` ?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
check-files-for-apache-license

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/22.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #22


commit f910c0eaee14177a8c1f1252df2346ef6a587301
Author: Francis Chuang <francischuang@...>
Date:   2018-04-21T02:59:54Z

Check files for Apache license header




---


[GitHub] calcite-avatica-go pull request #12: [CALCITE-2258] Add .travis.yml

2018-04-15 Thread F21
Github user F21 commented on a diff in the pull request:

https://github.com/apache/calcite-avatica-go/pull/12#discussion_r181600746
  
--- Diff: .travis.yml ---
@@ -0,0 +1,54 @@
+# Configuration file for Travis continuous integration.
+# See https://travis-ci.org/apache/calcite-avatica-go
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+language: go
+
+branches:
+  only:
+- master
+- /^branch-.*$/
--- End diff --

I think instead of these regexes, we can just use `/.*/`


---


[GitHub] calcite-avatica-go pull request #12: [CALCITE-2258] Add .travis.yml

2018-04-15 Thread F21
Github user F21 commented on a diff in the pull request:

https://github.com/apache/calcite-avatica-go/pull/12#discussion_r181600756
  
--- Diff: .travis.yml ---
@@ -0,0 +1,54 @@
+# Configuration file for Travis continuous integration.
+# See https://travis-ci.org/apache/calcite-avatica-go
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+language: go
+
+branches:
+  only:
+- master
+- /^branch-.*$/
+- /^[0-9]+-.*$/
+
+go:
+  - "1.10.x"
+
+sudo: required
+services:
+  - docker
+
+env:
+  global:
+  - AVATICA_IMAGE=boostport/hbase-phoenix-all-in-one:1.3-4.13
+  - AVATICA_HOST=http://localhost:8765
+
+before_install:
+  - go get -u github.com/golang/dep/cmd/dep
+  - dep ensure -v
+  - docker pull $AVATICA_IMAGE
+  - docker run -d -p 8765:8765 $AVATICA_IMAGE
+  - docker ps -a
+
+install:
+  - go build
+
+script:
+  - go test -cover -v $(go list ./... | grep -v /vendor/)
--- End diff --

We can now use `go test -cover -v ./...` because newer versions of Go will 
no longer test repos in `/vendor/`.


---


[GitHub] calcite-avatica-go pull request #13: Add HSQLDB support and move phoenix sup...

2018-04-15 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/13

Add HSQLDB support and move phoenix support into an adapter



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go 
avatica-hsqldb-support

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/13.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #13


commit 2e931c643d5a4b46c17dd52b28d5a0db3133d0a3
Author: Francis Chuang <francis.chuang@...>
Date:   2018-04-15T04:27:20Z

Add HSQLDB support and move phoenix support into an adapter




---


[GitHub] calcite-avatica-go pull request #13: Add HSQLDB support and move phoenix sup...

2018-04-15 Thread F21
Github user F21 commented on a diff in the pull request:

https://github.com/apache/calcite-avatica-go/pull/13#discussion_r181619896
  
--- Diff: phoenix/phoenix.go ---
@@ -1,82 +1,105 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package avatica
+package phoenix
--- End diff --

Good catch! thanks!


---


Re: Towards Avatica-Go release ?.?

2018-04-16 Thread F21

Hey Julian,

The code is in a releasable state. A few questions:
- Building a binary of a library in Go is not useful/meaningful. For the 
release, we just need to tar gz the git repo and sign it. Do you still 
need a script for this? Otherwise we can write the instructions 
somewhere in the site/ directory.


- Still need to write the release notes. Can you have a look at 
https://github.com/apache/calcite-avatica-go/pull/2 to see if I have 
structured the site/ directory correctly? The PR is a bit stale, but it 
shouldn't be too much work to get it up to date.


Francis

On 10/04/2018 8:51 AM, Julian Hyde wrote:

Thanks, Francis.

The most important step is to come up with a release vote email with the same items 
as [1]: release notes, git commit, artifacts to be voted on in 
dist.apache.org/repos/dist/dev <http://dist.apache.org/repos/dist/dev>, md5 and 
sha256 hashes. (The staged maven repository does not apply.)

Per apache policy, the release needs to be signed by a PMC member. I’ll be 
happy to do that. Or we could skip signing for the first couple of RCs.

Maybe write a shell script that creates the files, and a “howto” that the next 
RM can follow? I’ll be able to run the script when it’s time to create signed 
artifacts.

Julian

[1] 
https://lists.apache.org/thread.html/03b49fbed8617e860f71bc4f80abe411451d5f112beb5837cb9e5367@%3Cdev.calcite.apache.org%3E
 
<https://lists.apache.org/thread.html/03b49fbed8617e860f71bc4f80abe411451d5f112beb5837cb9e5367@%3Cdev.calcite.apache.org%3E>


On Apr 9, 2018, at 3:26 PM, F21 <f21.gro...@gmail.com> wrote:

I am wrapping up some things today and plan to test Avatica Go against the 
latest version of Avatica.

I think we'll be able to make a release by the end of the week.

I am happy to be the release manager for this one.

The latest version of Avatica is 2.3.1 under Boostport/avatica, I think we 
should make this release 2.4.0.

Francis

On 10/04/2018 3:49 AM, Julian Hyde wrote:

We have to make a release of Avatica Go soon (like, within the next month).

As I’ve said previously, a tar-ball of the source (plus checksums/signatures 
and release notes) is sufficient. But we are an Apache project, and projects 
must make releases.

Can someone please volunteer to be release manager? I am too busy.

What should the version number be?

Julian







[GitHub] calcite-avatica-go pull request #12: [CALCITE-2258] Add .travis.yml

2018-04-15 Thread F21
Github user F21 commented on a diff in the pull request:

https://github.com/apache/calcite-avatica-go/pull/12#discussion_r181611670
  
--- Diff: .travis.yml ---
@@ -0,0 +1,54 @@
+# Configuration file for Travis continuous integration.
+# See https://travis-ci.org/apache/calcite-avatica-go
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+language: go
+
+branches:
+  only:
+- master
+- /^branch-.*$/
--- End diff --

Ah, I see. Let's keep this the way it is then.


---


[GitHub] calcite-avatica-go pull request #14: Clean up readme and remove wercker

2018-04-15 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/14

Clean up readme and remove wercker



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go clean-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/14.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14


commit 6378c7f298bc47703eed070fb0eadb03a530d827
Author: Francis Chuang <francis.chuang@...>
Date:   2018-04-16T05:11:02Z

Clean up readme and remove wercker




---


[GitHub] calcite-avatica-go pull request #21: Replace gopher.png with Calcite logo an...

2018-04-19 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/21

Replace gopher.png with Calcite logo and uncomment HSQLDB transaction tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go remove-gopher-png

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/21.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #21


commit 4fcb13121fc614a60897f9a5743aad3a54b4c31b
Author: Francis Chuang <francischuang@...>
Date:   2018-04-20T05:30:43Z

Replace gopher.png with calcite logo

commit 7156d8219644c5740aad59984f150b4f47f189e0
Author: Francis Chuang <francischuang@...>
Date:   2018-04-20T05:31:15Z

Enable HSQLDB transaction tests




---


[GitHub] calcite-avatica-go pull request #23: Test against Go 1.9

2018-04-22 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/23

Test against Go 1.9



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go go-1.9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/23.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #23


commit 542caac791e967f6ddeb73c379f2f6098cfc64cd
Author: Francis Chuang <francischuang@...>
Date:   2018-04-23T01:39:43Z

Test against Go 1.9




---


[GitHub] calcite-avatica-go pull request #18: Fix link in release history

2018-04-16 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/18

Fix link in release history



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go fix-link

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/18.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #18


commit 2015238505df44c2cd7a96f12ae258a185ab6fa2
Author: Francis Chuang <francischuang@...>
Date:   2018-04-17T04:48:41Z

Fix link in release history




---


[GitHub] calcite-avatica-go pull request #15: Add script to generate release artifact...

2018-04-16 Thread F21
Github user F21 commented on a diff in the pull request:

https://github.com/apache/calcite-avatica-go/pull/15#discussion_r181917109
  
--- Diff: make-release-artifacts.sh ---
@@ -0,0 +1,26 @@
+# Clean dist directory
+rm -rf dist
+mkdir -p dist
+
+# Get new tags from remote
+git fetch --tags
+
+# Get latest tag name
+latestTag=$(git describe --tags `git rev-list --tags --max-count=1`)
+
+# Checkout latest tag
+git checkout $latestTag
+
+# Make tar
+tar -zcvf dist/calcite-avatica-go-src-$latestTag.tar.gz --transform "s/^\./calcite-avatica-go-src-$latestTag/g" --exclude "dist" .
+
+cd dist
+
+# Calculate MD5
+gpg --print-md MD5 calcite-avatica-go-src-$latestTag.tar.gz > calcite-avatica-go-src-$latestTag.tar.gz.md5
+
+# Calculate SHA256
+gpg --print-md SHA256 calcite-avatica-go-src-$latestTag.tar.gz > calcite-avatica-go-src-$latestTag.tar.gz.sha512
--- End diff --

Good catch.
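For reference, a minimal sketch of a consistent checksum step: the diff above computes a SHA256 digest but writes it to a `.sha512` file, so the algorithm and the extension should be made to agree. The artifact name below is a stand-in, and `sha256sum` is used instead of `gpg --print-md` so the example is self-contained.

```shell
# Hedged sketch: make the digest algorithm and the file extension agree.
# The artifact name is hypothetical; the real script derives it from the
# latest git tag.
artifact=calcite-avatica-go-src-demo.tar.gz
printf 'demo payload' > "$artifact"

# SHA256 digest goes into a .sha256 file (not .sha512)
sha256sum "$artifact" > "$artifact.sha256"

cat "$artifact.sha256"
```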


---


[GitHub] calcite-avatica-go pull request #16: Remove go-cleanhttp

2018-04-16 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/16

Remove go-cleanhttp



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go remove-cleanhttp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/16.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16


commit f2f4ba03e3f4dee333ec13d050b2336c391ca835
Author: Francis Chuang <francischuang@...>
Date:   2018-04-17T00:41:59Z

Remove go-cleanhttp




---


[GitHub] calcite-avatica-go pull request #15: Add script to generate release artifact...

2018-04-16 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/15

Add script to generate release artifacts



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go release-artifacts-script

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/15.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #15


commit 2e6d4fdf62bd8d96bbf0e9296a52f0354c32d973
Author: Francis Chuang <francischuang@...>
Date:   2018-04-16T23:37:57Z

Add script to generate release artifacts




---


[GitHub] calcite-avatica-go pull request #16: Remove go-cleanhttp

2018-04-16 Thread F21
Github user F21 closed the pull request at:

https://github.com/apache/calcite-avatica-go/pull/16


---


[GitHub] calcite-avatica-go pull request #17: Update dependencies

2018-04-16 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/17

Update dependencies



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go update-deps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/17.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #17


commit c7a5668afc3aa51f3af538417478d72b537e5b22
Author: Francis Chuang <francischuang@...>
Date:   2018-04-17T00:59:19Z

Update dependencies




---


[GitHub] calcite-avatica-go pull request #15: Add script to generate release artifact...

2018-04-16 Thread F21
Github user F21 commented on a diff in the pull request:

https://github.com/apache/calcite-avatica-go/pull/15#discussion_r181917310
  
--- Diff: make-release-artifacts.sh ---
@@ -0,0 +1,26 @@
+# Clean dist directory
+rm -rf dist
+mkdir -p dist
+
+# Get new tags from remote
+git fetch --tags
+
+# Get latest tag name
+latestTag=$(git describe --tags `git rev-list --tags --max-count=1`)
--- End diff --

To keep the script simple, it will only release the latest tag. In most 
cases, we only need to sign and release each tag once. If required in the 
future, we can include functionality to allow tag selection, but that's 
gold-plating at the moment in my opinion.


---


[GitHub] calcite-avatica-go pull request #17: Update dependencies

2018-04-16 Thread F21
Github user F21 closed the pull request at:

https://github.com/apache/calcite-avatica-go/pull/17


---


[GitHub] calcite-avatica-go pull request #20: Documentation fixes

2018-04-18 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/20

Documentation fixes

This PR contains fixes to the documentation and website of 
calcite-avatica-go as they are identified during the voting process.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Boostport/calcite-avatica-go fix-documentation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/20.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #20


commit 46820ab26e466e8271f2fe7782b40227f1f9c746
Author: Francis Chuang <francischuang@...>
Date:   2018-04-18T22:34:57Z

Fix development documentation




---


Re: [DISCUSS] Committer duties

2018-03-27 Thread F21

Hey everyone,

I am happy to take ownership of the Go avatica client. I am currently 
quite busy, but I hope to test it against the latest version of avatica 
released a couple of weeks ago and see if we can make a release for it.


Francis

On 28/03/2018 6:27 AM, Shuyi Chen wrote:

Hi Julian and Michael,

Thanks a lot for starting the discussion. I think the ownership model is a
good idea and has been used by other open source communities; we can further
break down core into e.g. the SQL parser, SQL validator, relational algebra,
planner, JSON model, runtime, etc. Also, we need to add the 'server' module
to the JIRA component list for DDLs. Adding the component to the PR title
will help owners filter and identify issues quickly, and we could use a
template to enforce a more detailed PR description, so the reviewer can
better understand the context and review the code.

I have some knowledge of the SQL parser, JSON model, relational algebra and
planner, and am currently working on the server module to add the
type/library/function DDLs. I can definitely help answer questions on the
mailing list, review code and contribute PRs for these components. I am also
interested in learning and helping more with committing code and doing
releases.

Cheers
Shuyi


On Tue, Mar 27, 2018 at 9:51 AM, Michael Mior wrote:


Thanks for starting the discussion Julian. I suggested at some point in the
past that we figure out people who are willing to take ownership over
certain components of Calcite. It seems like this would at least be a start
to staying on top of PRs and issues. However, we would probably have to
segment core practically for this to help.

Another thing that comes to mind is staying on top of updates to
dependencies. If people are owning certain components, hopefully they would
also be willing to do a quick check around release time to see if new
versions of dependencies for that component have been released and test and
update if possible.

Then there's also more administrative tasks such as making releases and
ensuring a good flow of new committers and PMC members. Anything else I'm
missing?

Cheers,
--
Michael Mior
mm...@apache.org

2018-03-27 12:40 GMT-04:00 Julian Hyde:


I’m not working full-time on Calcite anymore. But this project still needs
regular — daily — work to stay on top of contributions. If there’s only one
person doing the work, then one will likely become zero.

Let’s come up with a plan — with some commitments — for how this work will
get done.

Julian









[GitHub] calcite-avatica-go pull request #9: Update dependencies

2018-03-04 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/9

Update dependencies



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/calcite-avatica-go bump-deps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/9.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #9


commit 92feda3363052636b3390fa9b7f3a11dcd3b9215
Author: Francis Chuang <francis.chuang@...>
Date:   2018-03-04T21:35:02Z

Update dependencies




---


[GitHub] calcite-avatica-go pull request #10: Bump Go to 1.10 and format code

2018-03-04 Thread F21
GitHub user F21 opened a pull request:

https://github.com/apache/calcite-avatica-go/pull/10

Bump Go to 1.10 and format code



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/calcite-avatica-go bump-go

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/calcite-avatica-go/pull/10.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #10


commit c2484c64537255aaf6cc8d9cdd99c91471eac400
Author: Francis Chuang <francis.chuang@...>
Date:   2018-03-04T22:16:24Z

Bump Go to 1.10 and format code




---


Re: [DISCUSS] Towards Avatica 1.11.0

2018-02-27 Thread F21
I would also like to see
https://issues.apache.org/jira/projects/CALCITE/issues/CALCITE-1951
worked on.


Francis

On 28/02/2018 9:34 AM, Julian Hyde wrote:

OK, new thread just for the Avatica release. What’s left to do before 1.11? 
What can we accomplish before Monday?

There are 20 open JIRA cases tagged for 1.11:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20CALCITE%20AND%20fixVersion%20%3D%20avatica-1.11.0%20and%20status%20!%3D%20Resolved%20

Josh, 12 of those issues are assigned to you. Do you intend to fix any of
those? If so, make them dependencies of
https://issues.apache.org/jira/browse/CALCITE-2182.

Haohui, you have https://issues.apache.org/jira/browse/CALCITE-1884. Please
update the case when you make progress.

I’ll take a shot at https://issues.apache.org/jira/browse/CALCITE-1928,
review https://github.com/apache/calcite-avatica/pull/23 =
https://issues.apache.org/jira/browse/CALCITE-508, commit
https://github.com/apache/calcite-avatica/pull/16, and of course write the
release notes.

Anything else that needs to be done, or any fixes/pull requests that people 
would like to get in?

Julian







Re: [DISCUSS] Towards Avatica 1.11.0

2018-02-27 Thread F21
Do you guys think you can work on 
https://issues.apache.org/jira/projects/CALCITE/issues/CALCITE-2013 ?


I tried bumping HSQLDB in the pom.xml a few months ago, but there were a 
few failing tests. Bumping HSQLDB will solve a bunch of issues I saw 
when testing avatica-go against avatica with HSQLDB. This should put us 
closer to a release for avatica-go.


Francis

On 28/02/2018 9:34 AM, Julian Hyde wrote:

OK, new thread just for the Avatica release. What’s left to do before 1.11? 
What can we accomplish before Monday?

There are 20 open JIRA cases tagged for 1.11:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20CALCITE%20AND%20fixVersion%20%3D%20avatica-1.11.0%20and%20status%20!%3D%20Resolved%20

Josh, 12 of those issues are assigned to you. Do you intend to fix any of
those? If so, make them dependencies of
https://issues.apache.org/jira/browse/CALCITE-2182.

Haohui, you have https://issues.apache.org/jira/browse/CALCITE-1884. Please
update the case when you make progress.

I’ll take a shot at https://issues.apache.org/jira/browse/CALCITE-1928,
review https://github.com/apache/calcite-avatica/pull/23 =
https://issues.apache.org/jira/browse/CALCITE-508, commit
https://github.com/apache/calcite-avatica/pull/16, and of course write the
release notes.

Anything else that needs to be done, or any fixes/pull requests that people 
would like to get in?

Julian