Re: How do I send binary data to avatica?

2016-04-17 Thread F21

Forgot to add: Yes, still using HBase 1.1.4 with Phoenix 4.7.0.

On 18/04/2016 3:31 AM, Josh Elser wrote:
[...]



Re: How do I send binary data to avatica?

2016-04-17 Thread F21
I have now created a repo on GitHub containing all the binary protobuf 
requests and responses to insert binary data into a VARBINARY column. 
The repo is available here: https://github.com/F21/avatica-binary-protobufs


The file 8-request is of most interest because that's the one where we 
are sending the ExecuteRequest to do the actual insertion of the image.


I am using the protoc tool to decode the messages, e.g.:  protoc 
--decode_raw < 8-request


Let me know if this helps :)
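
For reference, a rough sketch of how dumps like these can be captured 
(illustrative only, not necessarily the exact code I used): wrap Go's 
HTTP transport and write each request body to a numbered file before 
sending it:

package avaticadump

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

// dumpingTransport writes every outgoing request body to a numbered
// file (1-request, 2-request, ...) so the raw protobuf bytes can be
// inspected later with `protoc --decode_raw`.
type dumpingTransport struct {
	next http.RoundTripper
	n    int
}

func (t *dumpingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	if req.Body != nil {
		body, err := ioutil.ReadAll(req.Body)
		if err != nil {
			return nil, err
		}
		t.n++
		name := fmt.Sprintf("%d-request", t.n)
		if err := ioutil.WriteFile(name, body, 0644); err != nil {
			return nil, err
		}
		// Restore the body so the request can still be sent.
		req.Body = ioutil.NopCloser(bytes.NewReader(body))
	}
	return t.next.RoundTrip(req)
}

// Usage: client := &http.Client{Transport: &dumpingTransport{next: http.DefaultTransport}}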

On 18/04/2016 3:31 AM, Josh Elser wrote:
[...]

Re: How do I send binary data to avatica?

2016-04-17 Thread F21
I just tried using sqlline-thin and it's also showing that the binary 
values are not inserted.


Interestingly, from my experiments, if I send a STRING to update a 
binary column, it works. If I send a BYTE_STRING, it doesn't work.


Would it help if I create a repo on GitHub containing the encoded 
protobufs for each request, so you can see if the protobufs are 
constructed correctly?


On 18/04/2016 3:31 AM, Josh Elser wrote:
[...]

Calcite-Master-JDK-1.7 - Build # 54 - Fixed

2016-04-17 Thread Apache Jenkins Server
The Apache Jenkins build system has built Calcite-Master-JDK-1.7 (build #54)

Status: Fixed

Check console output at 
https://builds.apache.org/job/Calcite-Master-JDK-1.7/54/ to view the results.

Re: jekyll/github-pages gem version update

2016-04-17 Thread Julian Hyde
Ok, I hope upgrading Jekyll on Ubuntu is easy. I’ve tried a couple of times and 
quickly got mired in incompatible versions of Ruby.

> On Apr 17, 2016, at 2:30 PM, Josh Elser  wrote:
> [...]



jekyll/github-pages gem version update

2016-04-17 Thread Josh Elser

In case you missed CALCITE-1202:

tl;dr `cd site; rm Gemfile.lock; bundle install` the next time you want 
to update the website.


I was a little cavalier in making this update because of the potential 
variance for developers who want to edit the website. This will bring us 
all on the same page and, hopefully, make updating the website less scary :)


I did make a pass over the website to make sure the update from jekyll 2 
to jekyll 3 didn't break anything, but did not find any issues. If you 
see something wrong, feel free to send a note and I'll try to look into it!


-------- Original Message --------
Subject: [jira] [Resolved] (CALCITE-1202) Lock version of Jekyll used by 
website

Date: Sun, 17 Apr 2016 21:13:25 +0000 (UTC)
From: Josh Elser (JIRA) 
Reply-To: dev@calcite.apache.org
To: iss...@calcite.apache.org


 [ 
https://issues.apache.org/jira/browse/CALCITE-1202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel 
]


Josh Elser resolved CALCITE-1202.
-
Resolution: Fixed

Fixed in 
https://git1-us-west.apache.org/repos/asf?p=calcite.git;a=commit;h=b66d414661926afdb82aea71432fa18022a666e2



Lock version of Jekyll used by website
--

Key: CALCITE-1202
URL: https://issues.apache.org/jira/browse/CALCITE-1202
Project: Calcite
Issue Type: Improvement
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Trivial
Fix For: 1.8.0, avatica-1.8.0


I had a newer version of Jekyll installed for other projects and realized that 
this was causing a significant number of changes in the generated HTML.
We should lock jekyll-2.4.0 as the version we want to use in the Gemfile. This 
will prevent unintended changes to HTML due to the installed version of Jekyll 
being potentially different across different developer machines. Presently, 
this is being set by the github-pages gem implicitly.






[jira] [Created] (CALCITE-1203) Update to newest stable github-pages gem

2016-04-17 Thread Josh Elser (JIRA)
Josh Elser created CALCITE-1203:
---

 Summary: Update to newest stable github-pages gem
 Key: CALCITE-1203
 URL: https://issues.apache.org/jira/browse/CALCITE-1203
 Project: Calcite
  Issue Type: Improvement
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Trivial


According to https://pages.github.com/versions/, the current version of the 
github-pages Gem is 69 (I presently have 43 in my Gemfile.lock).

To make sure we're all building consistent versions of the website, we should 
specify a version of the github-pages gem (which will also ensure that we have 
consistent versions of Jekyll in use).

We'll probably have to take a quick look over the website to make sure everything 
is still rendered correctly after the version update.

If that goes well, we should remove the explicit jekyll gem dependency in 
Gemfile and just add a specific github-pages version instead.
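
A sketch of what the Gemfile might end up looking like (the version 
number is illustrative; check https://pages.github.com/versions/ for 
the current one):

source 'https://rubygems.org'

# Pin github-pages so everyone builds the site with the same toolchain;
# the pinned gem pulls in a matching Jekyll version, so no separate
# jekyll entry is needed.
gem 'github-pages', '69'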





[jira] [Created] (CALCITE-1202) Lock version of Jekyll used by website

2016-04-17 Thread Josh Elser (JIRA)
Josh Elser created CALCITE-1202:
---

 Summary: Lock version of Jekyll used by website
 Key: CALCITE-1202
 URL: https://issues.apache.org/jira/browse/CALCITE-1202
 Project: Calcite
  Issue Type: Improvement
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Trivial
 Fix For: 1.8.0, avatica-1.8.0


I had a newer version of Jekyll installed for other projects and realized that 
this was causing a significant number of changes in the generated HTML.

We should lock jekyll-2.4.0 as the version we want to use in the Gemfile. This 
will prevent unintended changes to HTML due to the installed version of Jekyll 
being potentially different across different developer machines. Presently, 
this is being set by the github-pages gem implicitly.





Calcite-Master-JDK-1.8 - Build # 54 - Still Failing

2016-04-17 Thread Apache Jenkins Server
The Apache Jenkins build system has built Calcite-Master-JDK-1.8 (build #54)

Status: Still Failing

Check console output at 
https://builds.apache.org/job/Calcite-Master-JDK-1.8/54/ to view the results.

[jira] [Created] (CALCITE-1201) Bad character in JSON docs

2016-04-17 Thread Josh Elser (JIRA)
Josh Elser created CALCITE-1201:
---

 Summary: Bad character in JSON docs
 Key: CALCITE-1201
 URL: https://issues.apache.org/jira/browse/CALCITE-1201
 Project: Calcite
  Issue Type: Task
  Components: avatica
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Trivial
 Fix For: avatica-1.8.0


A user pointed out on the u...@phoenix.apache.org mailing list that the 
PrepareAndExecuteBatch request has an incorrect character in the JSON docs ("," 
is present instead of ":" separating an attribute's name and value).





Re: How do I send binary data to avatica?

2016-04-17 Thread Josh Elser
Sorry, yes. I just didn't want to have to post the JSON elsewhere -- it 
would've been gross to include in an email. For JSON, it is base64 
encoded. This is not necessary for protobuf (which can natively handle 
raw bytes).
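
To illustrate, here's a minimal sketch in Go of building the 
parameterValues for an execute request over JSON (the input file name 
is made up; the field names come from the JSON transcript below):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
)

// typedValue mirrors a "parameterValues" entry from the JSON requests
// below; over JSON, a BYTE_STRING value must be base64 text.
type typedValue struct {
	Type  string      `json:"type"`
	Value interface{} `json:"value"`
}

func main() {
	raw, err := ioutil.ReadFile("gopher.png") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	params := []typedValue{
		{Type: "INTEGER", Value: 1},
		// JSON cannot carry raw bytes, so base64-encode them first.
		{Type: "BYTE_STRING", Value: base64.StdEncoding.EncodeToString(raw)},
	}
	out, err := json.Marshal(params)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}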


For #2, did you verify that the data made it into the database 
correctly? HBase/Phoenix still, right? Also, maybe this is a SquirrelSQL 
issue? Can you verify the record is present via Phoenix's sqlline-thin.py?


For #3, very interesting. I'm not sure why base64 encoding it by hand 
makes any difference for you. Avatica isn't going to be making any 
differentiation in the types of bytes that you send. Bytes are bytes are 
bytes as far as we care.


F21 wrote:

Just reporting back my experiments:

1. Using JSON: I set the serialization of the query server to JSON and
replayed your requests using curl. For the binary file, I first base64
encoded it into a string and then sent it. This worked properly, and I can
see the data inserted into the table using SquirrelSQL.

2. Using Protobufs: I inserted the binary data using Rep_BYTE_STRING and
set BytesValues to the byte array read in from the file. It inserts
(upserts) correctly, but if I query the table using SquirrelSQL, the
binary column's cell is shown as .

3. Using Protobufs with Base64 encoding: I first encode the binary data
as base64. I then upsert the parameter as Rep_STRING and set StringValue
to the base64 encoded string. This upserts correctly and I can see the
data in SquirrelSQL. I then SELECT the data, base64 decode it, write it
to a file, and generate a hash. The file is written correctly and the
hash matches.

So, approach 3 works for me, but it doesn't seem to be the correct way
to do it.
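
In code, the difference between approaches 2 and 3 looks roughly like
this (a sketch against Go structs generated from Avatica's
common.proto; the import path is made up, and the field names are the
ones described above):

package avaticaparams

import (
	"encoding/base64"

	message "example.com/avatica/message" // hypothetical generated package
)

// Approach 2: send the raw bytes with the BYTE_STRING rep.
func byteStringParam(raw []byte) *message.TypedValue {
	return &message.TypedValue{
		Type:        message.Rep_BYTE_STRING,
		BytesValues: raw,
	}
}

// Approach 3 (the workaround that happens to work): base64 the bytes
// and send them as an ordinary STRING.
func base64StringParam(raw []byte) *message.TypedValue {
	return &message.TypedValue{
		Type:        message.Rep_STRING,
		StringValue: base64.StdEncoding.EncodeToString(raw),
	}
}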

On 17/04/2016 8:17 AM, Josh Elser wrote:

I wrote a simple test case for this with the gopher image. Here's the
JSON data (but hopefully this is enough to help out). I'd have to
write more code to dump out the actual protobufs. Let me know if this
is insufficient to help you figure out what's wrong. My test worked fine.

CREATE TABLE binaryData(id int, data varbinary(262144));

[{"request":"prepare","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","sql":"INSERT
INTO binaryData values(?,?)","maxRowCount":-1},
{"response":"prepare","statement":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":{"columns":[],"sql":"INSERT
INTO binaryData
values(?,?)","parameters":[{"signed":true,"precision":10,"scale":0,"parameterType":4,"typeName":"INTEGER","className":"java.lang.Integer","name":"?1"},{"signed":false,"precision":262144,"scale":0,"parameterType":-3,"typeName":"VARBINARY","className":"[B","name":"?2"}],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":"SELECT"}},"rpcMetadata":null}]


[{"request":"execute","statementHandle":{"connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","id":1,"signature":null},"parameterValues":[{"type":"INTEGER","value":1},{"type":"BYTE_STRING","value":""}],"maxRowCount":100},
{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1,"ownStatement":true,"signature":null,"firstFrame":null,"updateCount":1,"rpcMetadata":null}]}]

[{"request":"closeStatement","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":1},
{"response":"closeStatement","rpcMetadata":null}]

[{"request":"prepareAndExecute","connectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"sql":"SELECT
*
FROM binaryData","maxRowCount":-1},
{"response":"executeResults","missingStatement":false,"rpcMetadata":null,"results":[{"response":"resultSet","oonnectionId":"b387f9f2-6e64-40c3-a26f-f9cd955bc0a7","statementId":0,"ownStatement":true,"signature":{"columns":[{"ordinal":0,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":1,"signed":true,"displaySize":11,"label":"ID","columnName":"ID","schemaName":"SCOTT","precision":32,"scale":0,"tableName":"BINARYDATA","catalogName":"PUBLIC","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":false,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"},{"ordinal":1,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":1,"signed":false,"displaySize":262144,"label":"DATA","columnName":"DATA","schemaName":"SCOTT","precision":262144,"scale":0,"tableName":"BINARYDATA","catalogName":"PUBLIC","type":{"type":"scalar","id":-3,"name":"VARBINARY","rep":"B

YTE


_STRING"},"readOnly":false,"writable":false,"definitelyWritable":false,"columnClassName":"[B"}],"sql":null,"parameters":[],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":"SELECT"},"firstFrame":{"offset":0,"done":true,"rows":[[1,""]]},"updateCount":-1,"rpcMetadata":null}]}]


Josh Elser wrote:

Super helpful! Thanks.

I'll try to take a look at this tonight. If not then, over the weekend,
likely. Will