Re: Replacing Solr with Phoenix for XML data

2016-04-14 Thread Randy Gelhausen
Hi Vikram,

This question comes up somewhat often, so I'd be very interested in
comments from other users of Solr and Phoenix as well.

You'd need to either pre-parse fields from XML and save them as column
values to use in queries, or use a UDF to dynamically parse XML stored
within a single column.

Solr and Phoenix are similar in that both support indexing for fast
filtering, but Solr seems to do a better job of combining arbitrary sets of
indexed fields in a single search. Phoenix can't combine arbitrary
indexes without diligent use of hints or creative pre-filtering within
subqueries.

So, if you know which fields you plan to query on (and can make use of
well-thought-out indexes) and are able to pre-parse your XML into
additional columns, Phoenix is a good fit. In particular, if you always
know at least some of the highest-order primary key bits and can therefore
ensure queries are range scans, it might also be a good fit.
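
As a rough illustration of the pre-parsing approach (table, columns, and
values below are hypothetical, not from this thread): parse the fields you
expect to filter on out of the XML at write time, keep the raw document in
its own column, and put a covered index on the parsed fields.

CREATE TABLE IF NOT EXISTS orders (
    order_id    VARCHAR NOT NULL PRIMARY KEY,
    customer_id VARCHAR,
    order_date  DATE,
    raw_xml     VARCHAR   -- original document, kept only for retrieval
);

-- Covered index: the query below can be served entirely from the index.
CREATE INDEX IF NOT EXISTS orders_by_customer
    ON orders (customer_id, order_date) INCLUDE (raw_xml);

-- Filters only on pre-parsed, indexed fields; the XML is just returned.
SELECT raw_xml
FROM orders
WHERE customer_id = 'C42'
  AND order_date > TO_DATE('2016-01-01', 'yyyy-MM-dd');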

What do others think?

Thanks,
-Randy

On Thu, Apr 14, 2016 at 11:31 AM,  wrote:

> Hi,
>
>
>
> Are there any use cases, resources, or experience substituting Solr
> with Phoenix for semi-structured data like XML?
>
>
>
>
>
> Thanks
>
> vikram
>


Re: Secondary indexes on dynamic columns

2016-04-14 Thread James Taylor
No, it's currently not possible to have a secondary index on dynamic
columns. You can, however, create a view with new, ad hoc columns and add a
secondary index on the view.
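
A minimal sketch of that approach (table, view, and column names here are
hypothetical): declare the ad hoc column on a view over the base table, then
index the view.

CREATE TABLE IF NOT EXISTS events (
    event_id VARCHAR NOT NULL PRIMARY KEY,
    payload  VARCHAR
);

-- Promote the ad hoc column to a declared column on a view...
CREATE VIEW events_view (device_type VARCHAR) AS SELECT * FROM events;

-- ...and index the view; the dynamic column itself cannot be indexed.
CREATE INDEX IF NOT EXISTS events_by_device ON events_view (device_type);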

On Thu, Apr 14, 2016 at 2:38 PM,  wrote:

> Hi,
>
>
>
> Is there a way to make Phoenix build secondary indexes on dynamic columns?
>
>
>
> vikram
>


Secondary indexes on dynamic columns

2016-04-14 Thread vikram.kondadasula
Hi,

Is there a way to make Phoenix build secondary indexes on dynamic columns?

vikram



Re: Column Cardinality and Stats table as an "interface"

2016-04-14 Thread James Taylor
Thanks for the clarifications, Nick. That's a cool idea for cube building -
I'm not aware of any JIRAs for that.

FYI, for approximate count, we have PHOENIX-418 which Ravi is working on. I
think he was looking at using a HyperLogLog library, but perhaps BlinkDB is
an alternative.

On Thu, Apr 14, 2016 at 2:01 PM, Nick Dimiduk  wrote:

> The stats table would purely be used to drive optimizer decisions in
>> Phoenix. The data in the table is only collected during major compaction
>> (or when an update stats is run manually), so it's not really meant for
>> satisfying queries.
>>
>> For Kylin integration, we'd rely on Kylin to maintain the cubes and
>> Calcite would be the glue that allows both Phoenix and Kylin to cooperate
>> at planning time. I'm sure there'd be other runtime pieces required to make
>> it work.
>>
>
> Understood. I'm not talking about query time. As I understand Kylin's
> current state, it builds cubes from data in Hive tables conforming to a
> star schema. My thinking is for an end-to-end Phoenix-driven data store,
> where Kylin uses data stored in Phoenix as the source for building the
> cubes. We don't store data in this schema structure in Phoenix, so
> cube-building could be optimized by Phoenix's own stats table, instead of
> cardinality queries running against Hive. In this deployment scenario, I
> see no place for Hive at all.
>
> I have no idea on the feasibility of BlinkDB integration, but conceptually
>> BlinkDB could probably be used as a statistics provider for Phoenix.
>>
>
> I'm not talking about integration. I'm suggesting phoenix could support an
> 'approximate count' operator that generated a result based on queries to
> the stats table. "Roughly how many rows are in this table?" Given the cost
> of an actual row count, this would be a useful functionality to provide.
>
> On Thu, Apr 14, 2016 at 1:05 PM, Nick Dimiduk  wrote:
>>
>>> Ah, okay. Thanks for the pointer to PHOENIX-1178. Do you think the
>>> stats table is the right place for this kind of info? Seems like the only
>>> choice. Is there a plan to make the stats table a stable internal API? For
>>> instance, integration with Kylin for building Cubes off of denormalized
>>> event tables in Phoenix, or supporting BlinkDB approximation queries could
>>> both be facilitated by the stats table.
>>>
>>> -n
>>>
>>> On Thu, Apr 14, 2016 at 12:24 PM, James Taylor 
>>> wrote:
>>>
 FYI, Lars H. is looking at PHOENIX-258 for improving performance of
 DISTINCT. We don't yet keep any cardinality info in our stats
 (see PHOENIX-1178).

 Thanks,
 James

 On Thu, Apr 14, 2016 at 11:22 AM, Nick Dimiduk 
 wrote:

> Hello,
>
> I'm curious if there are any tricks for estimating the cardinality of
> the values in a phoenix column. Even for leading rowkey column, a select
> distinct query on a large table requires a full scan (PHOENIX-258). Maybe
> one could reach into the stats table and derive some knowledge? How much of
> a "bad thing" would this be?
>
> Thanks,
> Nick
>


>>>
>>
>


Re: Column Cardinality and Stats table as an "interface"

2016-04-14 Thread Nick Dimiduk
>
> The stats table would purely be used to drive optimizer decisions in
> Phoenix. The data in the table is only collected during major compaction
> (or when an update stats is run manually), so it's not really meant for
> satisfying queries.
>
> For Kylin integration, we'd rely on Kylin to maintain the cubes and
> Calcite would be the glue that allows both Phoenix and Kylin to cooperate
> at planning time. I'm sure there'd be other runtime pieces required to make
> it work.
>

Understood. I'm not talking about query time. As I understand Kylin's
current state, it builds cubes from data in Hive tables conforming to a
star schema. My thinking is for an end-to-end Phoenix-driven data store,
where Kylin uses data stored in Phoenix as the source for building the
cubes. We don't store data in this schema structure in Phoenix, so
cube-building could be optimized by Phoenix's own stats table, instead of
cardinality queries running against Hive. In this deployment scenario, I
see no place for Hive at all.

> I have no idea on the feasibility of BlinkDB integration, but conceptually
> BlinkDB could probably be used as a statistics provider for Phoenix.
>

I'm not talking about integration. I'm suggesting Phoenix could support an
'approximate count' operator that generates a result based on queries to
the stats table. "Roughly how many rows are in this table?" Given the cost
of an actual row count, this would be useful functionality to provide.
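
A sketch of what such an estimate might look like, with the caveat that
SYSTEM.STATS is an internal table and the column names below are purely
illustrative, not a stable API:

-- Hypothetical: sum per-guidepost row counts to approximate table size.
SELECT SUM(GUIDE_POSTS_ROW_COUNT) AS approx_row_count
FROM SYSTEM.STATS
WHERE PHYSICAL_NAME = 'MY_TABLE';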

On Thu, Apr 14, 2016 at 1:05 PM, Nick Dimiduk  wrote:
>
>> Ah, okay. Thanks for the pointer to PHOENIX-1178. Do you think the stats
>> table is the right place for this kind of info? Seems like the only choice.
>> Is there a plan to make the stats table a stable internal API? For
>> instance, integration with Kylin for building Cubes off of denormalized
>> event tables in Phoenix, or supporting BlinkDB approximation queries could
>> both be facilitated by the stats table.
>>
>> -n
>>
>> On Thu, Apr 14, 2016 at 12:24 PM, James Taylor 
>> wrote:
>>
>>> FYI, Lars H. is looking at PHOENIX-258 for improving performance of
>>> DISTINCT. We don't yet keep any cardinality info in our stats
>>> (see PHOENIX-1178).
>>>
>>> Thanks,
>>> James
>>>
>>> On Thu, Apr 14, 2016 at 11:22 AM, Nick Dimiduk 
>>> wrote:
>>>
 Hello,

 I'm curious if there are any tricks for estimating the cardinality of
 the values in a phoenix column. Even for leading rowkey column, a select
 distinct query on a large table requires a full scan (PHOENIX-258). Maybe
 one could reach into the stats table and derive some knowledge? How much of
 a "bad thing" would this be?

 Thanks,
 Nick

>>>
>>>
>>
>


Map the hbase column qualifier which is in byte type to phoenix table view

2016-04-14 Thread Viswanathan J
Hi,

How do you map an HBase column qualifier that is in byte form (highlighted
below) to a view in Phoenix?

eg.,

\x00\x00\x00\x0Bcolumn=fact:\x05, timestamp=1460666736042,
value=\x02\x9E.\x8A

Please help.

-- 
Regards,
Viswa.J
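
For reference, and as an assumption rather than a confirmed answer for the
byte-qualifier case: the usual pattern for exposing an existing HBase table
is a view whose quoted column names match the column family and qualifier
exactly (names and types below are illustrative).

CREATE VIEW "my_hbase_table" (
    "pk"          VARBINARY PRIMARY KEY,
    "fact"."col1" UNSIGNED_INT
);

A qualifier that is a raw byte such as \x05 has no printable identifier to
quote, so it may need to be rewritten to a printable qualifier on the HBase
side, or read through the raw HBase API instead.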


Re: Column Cardinality and Stats table as an "interface"

2016-04-14 Thread James Taylor
The stats table would purely be used to drive optimizer decisions in
Phoenix. The data in the table is only collected during major compaction
(or when an update stats is run manually), so it's not really meant for
satisfying queries.

For Kylin integration, we'd rely on Kylin to maintain the cubes and Calcite
would be the glue that allows both Phoenix and Kylin to cooperate at
planning time. I'm sure there'd be other runtime pieces required to make it
work.

I have no idea on the feasibility of BlinkDB integration, but conceptually
BlinkDB could probably be used as a statistics provider for Phoenix.

On Thu, Apr 14, 2016 at 1:05 PM, Nick Dimiduk  wrote:

> Ah, okay. Thanks for the pointer to PHOENIX-1178. Do you think the stats
> table is the right place for this kind of info? Seems like the only choice.
> Is there a plan to make the stats table a stable internal API? For
> instance, integration with Kylin for building Cubes off of denormalized
> event tables in Phoenix, or supporting BlinkDB approximation queries could
> both be facilitated by the stats table.
>
> -n
>
> On Thu, Apr 14, 2016 at 12:24 PM, James Taylor 
> wrote:
>
>> FYI, Lars H. is looking at PHOENIX-258 for improving performance of
>> DISTINCT. We don't yet keep any cardinality info in our stats
>> (see PHOENIX-1178).
>>
>> Thanks,
>> James
>>
>> On Thu, Apr 14, 2016 at 11:22 AM, Nick Dimiduk 
>> wrote:
>>
>>> Hello,
>>>
>>> I'm curious if there are any tricks for estimating the cardinality of
>>> the values in a phoenix column. Even for leading rowkey column, a select
>>> distinct query on a large table requires a full scan (PHOENIX-258). Maybe
>>> one could reach into the stats table and derive some knowledge? How much of
>>> a "bad thing" would this be?
>>>
>>> Thanks,
>>> Nick
>>>
>>
>>
>


Re: Column Cardinality and Stats table as an "interface"

2016-04-14 Thread James Taylor
FYI, Lars H. is looking at PHOENIX-258 for improving performance of
DISTINCT. We don't yet keep any cardinality info in our stats
(see PHOENIX-1178).

Thanks,
James

On Thu, Apr 14, 2016 at 11:22 AM, Nick Dimiduk  wrote:

> Hello,
>
> I'm curious if there are any tricks for estimating the cardinality of the
> values in a phoenix column. Even for leading rowkey column, a select
> distinct query on a large table requires a full scan (PHOENIX-258). Maybe
> one could reach into the stats table and derive some knowledge? How much of
> a "bad thing" would this be?
>
> Thanks,
> Nick
>


Column Cardinality and Stats table as an "interface"

2016-04-14 Thread Nick Dimiduk
Hello,

I'm curious if there are any tricks for estimating the cardinality of the
values in a phoenix column. Even for leading rowkey column, a select
distinct query on a large table requires a full scan (PHOENIX-258). Maybe
one could reach into the stats table and derive some knowledge? How much of
a "bad thing" would this be?

Thanks,
Nick


Replacing Solr with Phoenix for XML data

2016-04-14 Thread vikram.kondadasula
Hi,

Are there any use cases, resources, or experience substituting Solr with
Phoenix for semi-structured data like XML?


Thanks
vikram



Re: prepareAndExecute with UPSERT not working

2016-04-14 Thread Steve Terrell
I found it much easier and more reliable to make my own Phoenix HTTP server
with my own JSON API. It was too confusing for me to send multiple requests
for what would normally be just one SQL statement. And I had problems getting
upserts working, to boot (even with the thin server). Now I can make the API
as simple as I like. I am using Phoenix 4.6.0.

Good luck,
Steve

On Thu, Apr 14, 2016 at 8:12 AM, Plamen Paskov <
plamen.pas...@next-stream.com> wrote:

> Hey folks,
> I'm trying to UPSERT some data via the json api but no luck for now. My
> requests look like:
>
> {
>   "request": "openConnection",
>   "connectionId": "6"
> }
>
> {
>   "request": "createStatement",
>   "connectionId": "6"
> }
>
> {
>   "request": "prepareAndExecute",
>   "connectionId": "6",
>   "statementId": 9,
>   "sql": "UPSERT INTO us_population VALUES('IL','Chicago',1000)",
>   "maxRowCount": -1
> }
>
> The response for prepareAndExecute looks like the operation is successful
> but I don't see the new data added to the table.
>
> {
>   "response": "executeResults",
>   "missingStatement": false,
>   "rpcMetadata": {
> "response": "rpcMetadata",
> "serverAddress": "ip-172-31-27-198:8765"
>   },
>   "results": [
> {
>   "response": "resultSet",
>   "connectionId": "6",
>   "statementId": 9,
>   "ownStatement": false,
>   "signature": null,
>   "firstFrame": null,
>   "updateCount": 1,
>   "rpcMetadata": {
> "response": "rpcMetadata",
> "serverAddress": "ip-172-31-27-198:8765"
>   }
> }
>   ]
> }
>
> Any ideas?
> Thanks in advance!
>


prepareAndExecute with UPSERT not working

2016-04-14 Thread Plamen Paskov

Hey folks,
I'm trying to UPSERT some data via the json api but no luck for now. My 
requests look like:


{
  "request": "openConnection",
  "connectionId": "6"
}

{
  "request": "createStatement",
  "connectionId": "6"
}

{
  "request": "prepareAndExecute",
  "connectionId": "6",
  "statementId": 9,
  "sql": "UPSERT INTO us_population VALUES('IL','Chicago',1000)",
  "maxRowCount": -1
}

The response for prepareAndExecute looks like the operation succeeded,
but I don't see the new data added to the table.


{
  "response": "executeResults",
  "missingStatement": false,
  "rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
  },
  "results": [
{
  "response": "resultSet",
  "connectionId": "6",
  "statementId": 9,
  "ownStatement": false,
  "signature": null,
  "firstFrame": null,
  "updateCount": 1,
  "rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
  }
}
  ]
}

Any ideas?
Thanks in advance!
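
One likely explanation, offered as an assumption rather than a confirmed
diagnosis: Phoenix connections default to auto-commit off, so an UPSERT that
reports "updateCount": 1 may simply be uncommitted. If the Avatica version
bundled with your Phoenix supports it, a commit against the same connection
would look roughly like this (request shape per the Avatica JSON reference;
verify against your server version):

{
  "request": "commit",
  "connectionId": "6"
}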


create table like syntax

2016-04-14 Thread luohui20001
Hi guys,

As far as I know, CREATE TABLE a LIKE b syntax should have been supported for
a long time (refer to https://issues.apache.org/jira/browse/PHOENIX-734).
However, when I use Phoenix 4.5 I get the exception below:

0: jdbc:phoenix:cnzk0,cnzk1,cnzk2> create table debug_visit like visit;
Error: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting "LPAREN", got "like" at line 1, column 26. (state=42P00,code=604)
org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting "LPAREN", got "like" at line 1, column 26.
        at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
        at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
        at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1224)
        at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1305)
        at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1368)
        at sqlline.Commands.execute(Commands.java:822)
        at sqlline.Commands.sql(Commands.java:732)
        at sqlline.SqlLine.dispatch(SqlLine.java:808)
        at sqlline.SqlLine.begin(SqlLine.java:681)
        at sqlline.SqlLine.start(SqlLine.java:398)
        at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MismatchedTokenException(86!=90)
        at org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:349)
        at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
        at org.apache.phoenix.parse.PhoenixSQLParser.create_table_node(PhoenixSQLParser.java:1032)
        at org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:782)
        at org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:497)
        at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
        ... 9 more

So I checked the grammar page and found no "like" semantics there. Is there
a quick way to do the same thing? Thank you.

Thanks & Best regards!
San.Luo
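
If the grammar indeed has no LIKE form (as the grammar page suggests), a
hedged workaround sketch is simply to repeat the source table's DDL under the
new name; the column list below is hypothetical:

CREATE TABLE IF NOT EXISTS debug_visit (
    -- same column definitions and table options as the visit table
    visit_id VARCHAR NOT NULL PRIMARY KEY,
    url      VARCHAR,
    ts       TIMESTAMP
);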


Re: apache phoenix json api

2016-04-14 Thread Plamen Paskov

Now another error appears, for the prepareAndExecuteBatch request:



content="text/html;charset=ISO-8859-1"/>

Error 500 


HTTP ERROR: 500
Problem accessing /. Reason:

 
com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: 
Unrecognized field "statementId" (class 
org.apache.calcite.avatica.remote.Service$SchemasRequest), not marked as 
ignorable (3 known properties: , "connectionId", "catalog", 
"schemaPattern"])
 at [Source: java.io.StringReader@3b5c02a5; line: 6, column: 2] 
(through reference chain: 
org.apache.calcite.avatica.remote.SchemasRequest["statementId"])




Powered by Jetty://




My request looks like:
{
  "request": "prepareAndExecuteBatch",
  "connectionId": "3",
  "statementId": "2",
  "sqlCommands": [ "UPSERT INTO us_population VALUES('C1','City 
1',10)", "UPSERT INTO us_population VALUES('C2','City 2',100)" ]

}

Any help will be appreciated!
Thanks

On 14.04.2016 14:58, Plamen Paskov wrote:
Ah i found the error. It should be "sqlCommands": instead of 
"sqlCommands",

The documentation syntax is wrong for this request type:
http://calcite.apache.org/avatica/docs/json_reference.html#prepareandexecutebatchrequest 



On 14.04.2016 11:09, Plamen Paskov wrote:

@Josh: thanks for your answer.

Folks,
I'm trying to prepare and execute batch request with no luck.
These are the requests i send:

{
  "request": "openConnection",
  "connectionId": "2"
}

{
  "request": "createStatement",
  "connectionId": "2"
}

{
  "request": "prepareAndExecuteBatch",
  "connectionId": "2",
  "statementId": 1,
  "sqlCommands", [ "UPSERT INTO us_population(STATE,CITY,POPULATION) 
VALUES('C1','City 1',10)", "UPSERT INTO 
us_population(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ]

}

And this is the response i receive:



content="text/html;charset=ISO-8859-1"/>

Error 500 


HTTP ERROR: 500
Problem accessing /. Reason:

 com.fasterxml.jackson.core.JsonParseException: 
Unexpected character (',' (code 44)): was expecting a colon to 
separate field name and value

 at [Source: java.io.StringReader@41709697; line: 5, column: 17]



Powered by Jetty://







On 13.04.2016 19:27, Josh Elser wrote:
For reference materials: definitely check out 
https://calcite.apache.org/avatica/


While JSON is easy to get started with, there are zero guarantees on 
compatibility between versions. If you use protobuf, we should be 
able to hide all schema drift from you as a client (e.g. 
applications you write against Phoenix 4.7 should continue to work 
against 4.8, 4.9, etc).


Good luck with the PHP client -- feel free to reach out if you have 
more issues. Let us know when you have something to share. I'm sure
others would also find it very useful.


F21 wrote:

I am currently building a golang client as well, so I've been looking
the api over the last few weeks.

I am not sure about the decision to have to create a statement first,
but in terms of go, it fits the sql package very well, where statements
are opened and closed.

I don't think there are any books (as of yet), but the references and
digging through the code should be quite useful. I also recommend
checking out the avatica project (which is a sub project of calcite)
which is used to power the query server.

Also, the query server uses protobufs by default now, so it would
probably be better to use that rather than the JSON api.

On 13/04/2016 10:21 PM, Plamen Paskov wrote:

thanks for your quick and accurate answer ! it's working now!
can you give me a brief explanation of why it is necessary to maintain state
via the json api, so i can better understand how to create a php
wrapper library. if there are some books or references where i can
read more about apache phoenix, that will be very helpful.
thanks

On 13.04.2016 13:29, F21 wrote:

Your PrepareAndExecute request is missing a statementId:
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest 




Before calling PrepareAndExecute, you need to send a CreateStatement
request to the server so that it can give you a statementId. Then,
use that statementId in your PrepareAndExecute request and all should
be fine :)


On 13/04/2016 8:24 PM, Plamen Paskov wrote:

Hi guys,
I just setup apache phoenix 4.7 and set the serialization to JSON.
Now i'm trying to run a select statement but what i receive is 
this:


{
"response": "executeResults",
"missingStatement": true,
"rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
},
"results": null
}

My request looks like this:

curl -XPOST -H 'request: {"request":"prepareAndExecute",
"connectionId":"1", "sql":"select * from us_population",
"maxRowCount":-1}' http://52.31.63.96:8765/

Running the select above from the command line is fine and it
returns 2 rows :

sqlline version 1.1.8
0: 

Re: apache phoenix json api

2016-04-14 Thread Plamen Paskov

Ah, I found the error. It should be "sqlCommands": instead of "sqlCommands",
The documentation syntax is wrong for this request type:
http://calcite.apache.org/avatica/docs/json_reference.html#prepareandexecutebatchrequest

On 14.04.2016 11:09, Plamen Paskov wrote:

@Josh: thanks for your answer.

Folks,
I'm trying to prepare and execute batch request with no luck.
These are the requests i send:

{
  "request": "openConnection",
  "connectionId": "2"
}

{
  "request": "createStatement",
  "connectionId": "2"
}

{
  "request": "prepareAndExecuteBatch",
  "connectionId": "2",
  "statementId": 1,
  "sqlCommands", [ "UPSERT INTO us_population(STATE,CITY,POPULATION) 
VALUES('C1','City 1',10)", "UPSERT INTO 
us_population(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ]

}

And this is the response i receive:



content="text/html;charset=ISO-8859-1"/>

Error 500 


HTTP ERROR: 500
Problem accessing /. Reason:

 com.fasterxml.jackson.core.JsonParseException: 
Unexpected character (',' (code 44)): was expecting a colon to 
separate field name and value

 at [Source: java.io.StringReader@41709697; line: 5, column: 17]



Powered by Jetty://







On 13.04.2016 19:27, Josh Elser wrote:
For reference materials: definitely check out 
https://calcite.apache.org/avatica/


While JSON is easy to get started with, there are zero guarantees on 
compatibility between versions. If you use protobuf, we should be 
able to hide all schema drift from you as a client (e.g. applications 
you write against Phoenix 4.7 should continue to work against 4.8, 
4.9, etc).


Good luck with the PHP client -- feel free to reach out if you have 
more issues. Let us know when you have something to share. I'm sure
others would also find it very useful.


F21 wrote:

I am currently building a golang client as well, so I've been looking
the api over the last few weeks.

I am not sure about the decision to have to create a statement first,
but in terms of go, it fits the sql package very well, where statements
are opened and closed.

I don't think there are any books (as of yet), but the references and
digging through the code should be quite useful. I also recommend
checking out the avatica project (which is a sub project of calcite)
which is used to power the query server.

Also, the query server uses protobufs by default now, so it would
probably be better to use that rather than the JSON api.

On 13/04/2016 10:21 PM, Plamen Paskov wrote:

thanks for your quick and accurate answer ! it's working now!
can you give me a brief explanation of why it is necessary to maintain state
via the json api, so i can better understand how to create a php
wrapper library. if there are some books or references where i can
read more about apache phoenix, that will be very helpful.
thanks

On 13.04.2016 13:29, F21 wrote:

Your PrepareAndExecute request is missing a statementId:
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest 




Before calling PrepareAndExecute, you need to send a CreateStatement
request to the server so that it can give you a statementId. Then,
use that statementId in your PrepareAndExecute request and all should
be fine :)
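
For completeness, a minimal sketch of the corrected request sequence, pieced
together from the requests already shown in this thread (connection and
statement ids are arbitrary; the createStatement response supplies the
statementId that is echoed back in prepareAndExecute):

{
  "request": "openConnection",
  "connectionId": "1"
}

{
  "request": "createStatement",
  "connectionId": "1"
}

{
  "request": "prepareAndExecute",
  "connectionId": "1",
  "statementId": 1,
  "sql": "select * from us_population",
  "maxRowCount": -1
}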


On 13/04/2016 8:24 PM, Plamen Paskov wrote:

Hi guys,
I just setup apache phoenix 4.7 and set the serialization to JSON.
Now i'm trying to run a select statement but what i receive is this:

{
"response": "executeResults",
"missingStatement": true,
"rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
},
"results": null
}

My request looks like this:

curl -XPOST -H 'request: {"request":"prepareAndExecute",
"connectionId":"1", "sql":"select * from us_population",
"maxRowCount":-1}' http://52.31.63.96:8765/

Running the select above from the command line is fine and it
returns 2 rows :

sqlline version 1.1.8
0: jdbc:phoenix:localhost> select * from us_population;
+--------+--------------+-------------+
| STATE  | CITY         | POPULATION  |
+--------+--------------+-------------+
| CA     | Los Angeles  | 3844829     |
| NY     | New York     | 8143197     |
+--------+--------------+-------------+
2 rows selected (0.087 seconds)

Can you give me some direction on what i'm doing wrong, as i'm not a java
dev and it's not possible for me to read and understand the source
code.

Thanks in advance !














Re: apache phoenix json api

2016-04-14 Thread Plamen Paskov

@Josh: thanks for your answer.

Folks,
I'm trying to prepare and execute batch request with no luck.
These are the requests i send:

{
  "request": "openConnection",
  "connectionId": "2"
}

{
  "request": "createStatement",
  "connectionId": "2"
}

{
  "request": "prepareAndExecuteBatch",
  "connectionId": "2",
  "statementId": 1,
  "sqlCommands", [ "UPSERT INTO us_population(STATE,CITY,POPULATION) 
VALUES('C1','City 1',10)", "UPSERT INTO 
us_population(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ]

}

And this is the response i receive:



content="text/html;charset=ISO-8859-1"/>

Error 500 


HTTP ERROR: 500
Problem accessing /. Reason:

 com.fasterxml.jackson.core.JsonParseException: 
Unexpected character (',' (code 44)): was expecting a colon to separate 
field name and value

 at [Source: java.io.StringReader@41709697; line: 5, column: 17]



Powered by Jetty://







On 13.04.2016 19:27, Josh Elser wrote:
For reference materials: definitely check out 
https://calcite.apache.org/avatica/


While JSON is easy to get started with, there are zero guarantees on 
compatibility between versions. If you use protobuf, we should be able 
to hide all schema drift from you as a client (e.g. applications you 
write against Phoenix 4.7 should continue to work against 4.8, 4.9, etc).


Good luck with the PHP client -- feel free to reach out if you have 
more issues. Let us know when you have something to share. I'm sure others
would also find it very useful.


F21 wrote:

I am currently building a golang client as well, so I've been looking
the api over the last few weeks.

I am not sure about the decision to have to create a statement first,
but in terms of go, it fits the sql package very well, where statements
are opened and closed.

I don't think there are any books (as of yet), but the references and
digging through the code should be quite useful. I also recommend
checking out the avatica project (which is a sub project of calcite)
which is used to power the query server.

Also, the query server uses protobufs by default now, so it would
probably be better to use that rather than the JSON api.

On 13/04/2016 10:21 PM, Plamen Paskov wrote:

thanks for your quick and accurate answer ! it's working now!
can you give me a brief explanation of why it is necessary to maintain state
via the json api, so i can better understand how to create a php
wrapper library. if there are some books or references where i can
read more about apache phoenix, that will be very helpful.
thanks

On 13.04.2016 13:29, F21 wrote:

Your PrepareAndExecute request is missing a statementId:
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest 




Before calling PrepareAndExecute, you need to send a CreateStatement
request to the server so that it can give you a statementId. Then,
use that statementId in your PrepareAndExecute request and all should
be fine :)


On 13/04/2016 8:24 PM, Plamen Paskov wrote:

Hi guys,
I just setup apache phoenix 4.7 and set the serialization to JSON.
Now i'm trying to run a select statement but what i receive is this:

{
"response": "executeResults",
"missingStatement": true,
"rpcMetadata": {
"response": "rpcMetadata",
"serverAddress": "ip-172-31-27-198:8765"
},
"results": null
}

My request looks like this:

curl -XPOST -H 'request: {"request":"prepareAndExecute",
"connectionId":"1", "sql":"select * from us_population",
"maxRowCount":-1}' http://52.31.63.96:8765/

Running the select above from the command line is fine and it
returns 2 rows :

sqlline version 1.1.8
0: jdbc:phoenix:localhost> select * from us_population;
+--------+--------------+-------------+
| STATE  | CITY         | POPULATION  |
+--------+--------------+-------------+
| CA     | Los Angeles  | 3844829     |
| NY     | New York     | 8143197     |
+--------+--------------+-------------+
2 rows selected (0.087 seconds)

Can you give me some direction on what i'm doing wrong, as i'm not a java
dev and it's not possible for me to read and understand the source
code.

Thanks in advance !