[jira] [Created] (PHOENIX-2299) Support CURRENT_DATE() in Pherf data upserts

2015-09-30 Thread James Taylor (JIRA)
James Taylor created PHOENIX-2299:
-

 Summary: Support CURRENT_DATE() in Pherf data upserts
 Key: PHOENIX-2299
 URL: https://issues.apache.org/jira/browse/PHOENIX-2299
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


Just replace the actual date with "NOW" in the XML, then check the string for 
that value in the generator. 
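
A rough sketch of the check (hypothetical helper; the "NOW" token comes from the description above, but the method name and date format are assumptions, not Pherf's actual API):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class NowTokenSketch {
    // Assumed token per the description: "NOW" in the XML stands in for a real date.
    static final String NOW_TOKEN = "NOW";

    // Resolve a configured value string to the Date that should be upserted.
    static Date resolveDate(String configuredValue) throws ParseException {
        if (NOW_TOKEN.equals(configuredValue)) {
            return new Date(); // substitute the current date at generation time
        }
        // Otherwise parse the literal date from the XML (format is an assumption).
        return new SimpleDateFormat("yyyy-MM-dd").parse(configuredValue);
    }
}
```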




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2298) Problem storing with pig on a salted table

2015-09-30 Thread Guillaume salou (JIRA)
Guillaume salou created PHOENIX-2298:


 Summary: Problem storing with pig on a salted table
 Key: PHOENIX-2298
 URL: https://issues.apache.org/jira/browse/PHOENIX-2298
 Project: Phoenix
  Issue Type: Bug
Reporter: Guillaume salou


When I try to upsert via PigStorage on a salted table, I get the error below.

Store ... using org.apache.phoenix.pig.PhoenixHBaseStorage();

First field of the table:
CurrentTime() as INTERNALTS:datetime,

This date is not used in the primary key of the table.

The same script works perfectly on a non-salted table.

Caused by: java.lang.RuntimeException: Unable to process column _SALT:BINARY, innerMessage=org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. BINARY cannot be coerced to DATE
    at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.write(PhoenixPigDBWritable.java:66)
    at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:78)
    at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:39)
    at org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:182)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:558)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:106)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:268)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.schema.ConstraintViolationException: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. BINARY cannot be coerced to DATE
    at org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
    at org.apache.phoenix.schema.types.PDate.toObject(PDate.java:77)
    at org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:208)
    at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.convertTypeSpecificValue(PhoenixPigDBWritable.java:79)
    at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.write(PhoenixPigDBWritable.java:59)
    ... 21 more
Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. BINARY cannot be coerced to DATE
    at org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:68)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
    ... 26 more






[jira] [Updated] (PHOENIX-2297) Support jdbcClient instantiation with timeout param & statement.setQueryTimeout method

2015-09-30 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen updated PHOENIX-2297:
-
Description: 
When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL processor, 
the default timeout settings cause Phoenix statements to fail.

With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
to -1 to get Phoenix statements working. Storm creates a JDBC client with the 
standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.

NiFi's ExecuteSQL processor sets a timeout on every statement: 
"statement.setQueryTimeout(queryTimeout)".

Both of these seem to be standard JDBC usage, but fail when using PhoenixDriver.

  was:
When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL processor, 
the default timeout settings cause Phoenix statements to fail.

With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
to -1 to get Phoenix statements working at all. Storm creates a JDBC client 
with the standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.

NiFi's ExecuteSQL processor sets a timeout on every statement: 
"statement.setQueryTimeout(queryTimeout)".

Both of these seem to be standard JDBC usage, but fail when using Phoenix's 
JDBC client.


> Support jdbcClient instantiation with timeout param & 
> statement.setQueryTimeout method
> --
>
> Key: PHOENIX-2297
> URL: https://issues.apache.org/jira/browse/PHOENIX-2297
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL 
> processor, the default timeout settings cause Phoenix statements to fail.
> With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
> to -1 to get Phoenix statements working. Storm creates a JDBC client with the 
> standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.
> NiFi's ExecuteSQL processor sets a timeout on every statement: 
> "statement.setQueryTimeout(queryTimeout)".
> Both of these seem to be standard JDBC usage, but fail when using 
> PhoenixDriver.





[jira] [Updated] (PHOENIX-2297) Support jdbcClient instantiation with timeout param & statement.setQueryTimeout method

2015-09-30 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen updated PHOENIX-2297:
-
Summary: Support jdbcClient instantiation with timeout param & 
statement.setQueryTimeout method  (was: Support standard jdbc setTimeout call)

> Support jdbcClient instantiation with timeout param & 
> statement.setQueryTimeout method
> --
>
> Key: PHOENIX-2297
> URL: https://issues.apache.org/jira/browse/PHOENIX-2297
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL 
> processor, the default timeout settings cause Phoenix statements to fail.
> With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
> to -1 to get Phoenix statements working at all. Storm creates a JDBC client 
> with the standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.
> NiFi's ExecuteSQL processor sets a timeout on every statement: 
> "statement.setQueryTimeout(queryTimeout)".
> Both of these seem to be standard JDBC usage, but fail when using Phoenix's 
> JDBC client.





[jira] [Created] (PHOENIX-2297) Support standard jdbc setTimeout call

2015-09-30 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created PHOENIX-2297:


 Summary: Support standard jdbc setTimeout call
 Key: PHOENIX-2297
 URL: https://issues.apache.org/jira/browse/PHOENIX-2297
 Project: Phoenix
  Issue Type: Improvement
Reporter: Randy Gelhausen


When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL processor, 
the default timeout settings cause Phoenix statements to fail.

With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
to -1 to get Phoenix statements working at all. Storm creates a JDBC client 
with the standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.

NiFi's ExecuteSQL processor sets a timeout on every statement: 
"statement.setQueryTimeout(queryTimeout)".

Both of these seem to be standard JDBC usage, but fail when using Phoenix's 
JDBC client.
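
For reference, a minimal sketch of the workaround currently needed (the guard helper is hypothetical and only illustrates the -1 convention described above; the two failing call patterns are the ones quoted from Storm and NiFi):

```java
public class TimeoutWorkaroundSketch {
    // Per the report, the calls that fail with PhoenixDriver are:
    //   new JdbcClient(connectionProvider, queryTimeoutSecs);  // Storm
    //   statement.setQueryTimeout(queryTimeout);               // NiFi
    // and the workaround is to pass -1 (no timeout) for the seconds value.
    // Hypothetical guard: only apply a per-statement timeout when positive.
    static boolean shouldApplyQueryTimeout(int timeoutSecs) {
        return timeoutSecs > 0; // -1 means "no timeout", so skip the failing call
    }
}
```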





[jira] [Commented] (PHOENIX-2285) phoenix.query.timeoutMs doesn't allow callers to set the timeout to less than 1 second

2015-09-30 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14937948#comment-14937948
 ] 

James Taylor commented on PHOENIX-2285:
---

+1. Looks great - thanks for the contribution, [~jfernando_sfdc]. How about 
testing out your new commit bit and checking this into master, 4.x, and 4.5 
branches?

> phoenix.query.timeoutMs doesn't allow callers to set the timeout to less than 
> 1 second
> --
>
> Key: PHOENIX-2285
> URL: https://issues.apache.org/jira/browse/PHOENIX-2285
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
>Reporter: Jan Fernando
>Assignee: Jan Fernando
> Attachments: PHOENIX-2285-v1.txt, PHOENIX-2285-v2.txt
>
>
> When creating a Phoenix JDBC connection I have a use case where I want to 
> override the default value of phoenix.query.timeoutMs to a value of 200 ms. 
> Currently if you set phoenix.query.timeoutMs to less than 1000 ms, the 
> timeout gets rounded up to 1000ms. This is because in 
> PhoenixStatement.getDefaultQueryTimeout() we convert the value of 
> phoenix.query.timeoutMs to seconds in order to be compliant with JDBC. In 
> BaseResultIterators we then convert it back to millis. As a result of the 
> conversion we lose the millisecond fidelity.
> A possible solution is to store the timeout value stored on the 
> PhoenixStatement in both seconds and milliseconds. Then, in 
> BaseResultIterators when we read the value from the statement we can check if 
> the value exists in millisecond fidelity and if so use that value. Otherwise 
> we would use the value in second granularity and convert. 
> This would allow Phoenix to remain JDBC compatible with second level 
> granularity for setting query timeouts on statements, but allow millisecond 
> granularity of timeouts by explicitly setting phoenix.query.timeoutMs on 
> connection properties.
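
The loss described above can be sketched as a minimal round trip (method names are illustrative, not Phoenix's actual ones):

```java
public class TimeoutFidelitySketch {
    // JDBC's Statement.setQueryTimeout takes whole seconds, so a millisecond
    // setting must be rounded up when stored on the statement.
    static int toJdbcSeconds(long timeoutMs) {
        return (int) Math.ceil(timeoutMs / 1000.0);
    }

    // Converting back to millis (as BaseResultIterators does, per the report)
    // no longer matches the originally requested value.
    static long backToMillis(int timeoutSecs) {
        return timeoutSecs * 1000L;
    }
}
```

A 200 ms request round-trips to 1000 ms, which is exactly the rounding the proposed dual seconds-and-millis storage would avoid.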



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2296) Subqueries with in clause on non varchar columns is not working

2015-09-30 Thread Ni la (JIRA)
Ni la created PHOENIX-2296:
--

 Summary: Subqueries with in clause on non varchar columns is not 
working
 Key: PHOENIX-2296
 URL: https://issues.apache.org/jira/browse/PHOENIX-2296
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.5.0, 4.5.2
Reporter: Ni la
Priority: Critical


When using an "IN" clause with LIMIT in a subquery, the results are not 
correct: the result includes records that should have been excluded by the 
subquery. 

e.g.: 
In the example below, the last four records of the first page are repeated as 
the first four records of the second page (always four records, and only on 
the second request with the limit), and the last 4 records are not displayed. 

select ATTR_ID, NAME from TEST
 where ID = 289024 and DIM_ID = 0 and NAME is not null
   and NAME NOT IN (select NAME from TEST where ID = 289024 and DIM_ID = 0
                    and NAME is not null order by NAME limit 0)
 order by NAME limit 10;
+----------+------------+
| ATTR_ID  | NAME       |
+----------+------------+
| 289039   | black      |
| 292055   | black1     |
| 292056   | black10    |
| 292057   | black100   |
| 292058   | black101   |
| 292059   | black103   |
| 292060   | black11    |
| 292061   | black12    |
| 292062   | black13    |
| 292063   | black14    |
+----------+------------+
10 rows selected (1.04 seconds)
select ATTR_ID, NAME from TEST
 where ID = 289024 and DIM_ID = 0 and NAME is not null
   and NAME NOT IN (select NAME from TEST where ID = 289024 and DIM_ID = 0
                    and NAME is not null order by NAME limit 10)
 order by NAME limit 10;
+----------+------------+
| ATTR_ID  | NAME       |
+----------+------------+
| 292060   | black11    |
| 292061   | black12    |
| 292062   | black13    |
| 292063   | black14    |
| 292064   | black15    |
| 292065   | black16    |
| 292066   | black17    |
| 292067   | black18    |
| 292068   | black19    |
| 292069   | black2     |
+----------+------------+
10 rows selected (1.683 seconds)
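
The semantics the two queries should have can be sketched in memory (an illustration of NOT-IN paging, not Phoenix code): excluding the names returned by the inner LIMIT must yield a next page with no overlap.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class NotInPagingSketch {
    // Expected semantics of "NAME NOT IN (subquery ... LIMIT skip) ... LIMIT page":
    // exclude the first `skip` names in sort order, then return the next `page`.
    static List<String> nextPage(List<String> names, int skip, int page) {
        List<String> sorted = new ArrayList<>(names);
        Collections.sort(sorted);
        List<String> excluded = new ArrayList<>(sorted.subList(0, Math.min(skip, sorted.size())));
        List<String> result = new ArrayList<>();
        for (String name : sorted) {
            if (!excluded.contains(name) && result.size() < page) {
                result.add(name);
            }
        }
        return result;
    }
}
```

With this definition, consecutive pages never share rows; the second query above violates that by repeating black11 through black14.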







