[jira] [Commented] (PHOENIX-3476) Publish "thin-client" artifact

2016-11-11 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658173#comment-15658173
 ] 

Randy Gelhausen commented on PHOENIX-3476:
--

Closed and added a comment on PHOENIX-1567.

> Publish "thin-client" artifact
> --
>
> Key: PHOENIX-3476
> URL: https://issues.apache.org/jira/browse/PHOENIX-3476
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Randy Gelhausen
>Priority: Minor
>
> Applications like Apache NiFi support plugging in arbitrary JDBC client jars. 
> I would like to use NiFi as a phoenix-queryserver-client, but there are no 
> published shaded artifacts.
> When I attempt to use phoenix-queryserver-client-4.8.1-HBase-1.1, I get 
> missing Avatica classes: java.lang.NoClassDefFoundError: 
> org/apache/calcite/avatica/remote/Driver
> I have downloaded the Phoenix 4.8.1 binaries, which include 
> phoenix-4.8.1-HBase-1.1-thin-client.jar, but that artifact is not available 
> on Maven Central or any other public mirror I could find, which makes 
> automated deployment painful.





[jira] [Resolved] (PHOENIX-3476) Publish "thin-client" artifact

2016-11-11 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen resolved PHOENIX-3476.
--
Resolution: Duplicate

> Publish "thin-client" artifact
> --
>
> Key: PHOENIX-3476
> URL: https://issues.apache.org/jira/browse/PHOENIX-3476
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Randy Gelhausen
>Priority: Minor
>
> Applications like Apache NiFi support plugging in arbitrary JDBC client jars. 
> I would like to use NiFi as a phoenix-queryserver-client, but there are no 
> published shaded artifacts.
> When I attempt to use phoenix-queryserver-client-4.8.1-HBase-1.1, I get 
> missing Avatica classes: java.lang.NoClassDefFoundError: 
> org/apache/calcite/avatica/remote/Driver
> I have downloaded the Phoenix 4.8.1 binaries, which include 
> phoenix-4.8.1-HBase-1.1-thin-client.jar, but that artifact is not available 
> on Maven Central or any other public mirror I could find, which makes 
> automated deployment painful.





[jira] [Comment Edited] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2016-11-11 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658153#comment-15658153
 ] 

Randy Gelhausen edited comment on PHOENIX-1567 at 11/11/16 8:56 PM:


+1 for revisiting this decision.

Apache NiFi and Apache Zeppelin allow users to supply arbitrary JDBC jars.

Without published artifacts for shaded jars, the end-user needs to:

1. Hit phoenix.apache.org -> Downloads -> Select a mirror -> Download and 
uncompress a tarball
2. Determine whether they'll be using the queryserver
3. Pick the right jar from the litany of extremely similarly named jars
4. Manually copy that jar into NiFi or Zeppelin's local resource directories

That's a lot of work to ask of a user, compared with simply supplying 
org.apache.phoenix:phoenix-thin-client:4.8.0-HBase-1.1 (or phoenix-thick).


was (Author: randerzander):
+1 for revisiting this decision.

Apache NiFi and Apache Zeppelin allow users to supply arbitrary JDBC jars.

Without published artifacts for shaded jars, the end-user needs to:

1. Hit phoenix.apache.org -> Downloads -> Select a mirror -> Download and 
uncompress a tarball
2. Determine whether they'll be using the queryserver
3. Pick the right jar from the litany of extremely similarly named jars
4. Manually copy that jar into NiFi or Zeppelin's local resource directories

That's a lot of work to ask of a user, compared with simply supplying 
org.apache.phoenix:phoenix-queryserver-client:4.8.0-HBase-1.1 (or phoenix-core).

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>
> Phoenix doesn't publish Phoenix Client & Server jars into a Maven repository. 
> This makes it quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following:
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically, the jars published to the Maven repo will become 
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar, because the artifact ID 
> "phoenix-assembly" has to be the prefix of the jar names.
> Therefore, the possible solutions are:
> 1) rename the current client & server jars to 
> phoenix-assembly-client/server.jar to match the jars published to the Maven 
> repo;
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly;
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders, with phoenix-assembly only creating 
> tarballs.
> [~giacomotaylor], [~apurtell], or other Maven experts: any suggestions on 
> this? Thanks.





[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2016-11-11 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658153#comment-15658153
 ] 

Randy Gelhausen commented on PHOENIX-1567:
--

+1 for revisiting this decision.

Apache NiFi and Apache Zeppelin allow users to supply arbitrary JDBC jars.

Without published artifacts for shaded jars, the end-user needs to:

1. Hit phoenix.apache.org -> Downloads -> Select a mirror -> Download and 
uncompress a tarball
2. Determine whether they'll be using the queryserver
3. Pick the right jar from the litany of extremely similarly named jars
4. Manually copy that jar into NiFi or Zeppelin's local resource directories

That's a lot of work to ask of a user, compared with simply supplying 
org.apache.phoenix:phoenix-queryserver-client:4.8.0-HBase-1.1 (or phoenix-core).

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>
> Phoenix doesn't publish Phoenix Client & Server jars into a Maven repository. 
> This makes it quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following:
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically, the jars published to the Maven repo will become 
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar, because the artifact ID 
> "phoenix-assembly" has to be the prefix of the jar names.
> Therefore, the possible solutions are:
> 1) rename the current client & server jars to 
> phoenix-assembly-client/server.jar to match the jars published to the Maven 
> repo;
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly;
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders, with phoenix-assembly only creating 
> tarballs.
> [~giacomotaylor], [~apurtell], or other Maven experts: any suggestions on 
> this? Thanks.





[jira] [Created] (PHOENIX-3476) Publish "thin-client" artifact

2016-11-10 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created PHOENIX-3476:


 Summary: Publish "thin-client" artifact
 Key: PHOENIX-3476
 URL: https://issues.apache.org/jira/browse/PHOENIX-3476
 Project: Phoenix
  Issue Type: Bug
Reporter: Randy Gelhausen
Priority: Minor


Applications like Apache NiFi support plugging in arbitrary JDBC client jars. I 
would like to use NiFi as a phoenix-queryserver-client, but there are no 
published shaded artifacts.

When I attempt to use phoenix-queryserver-client-4.8.1-HBase-1.1, I get missing 
Avatica classes: java.lang.NoClassDefFoundError: 
org/apache/calcite/avatica/remote/Driver

I have downloaded the Phoenix 4.8.1 binaries, which include 
phoenix-4.8.1-HBase-1.1-thin-client.jar, but that artifact is not available on 
Maven Central or any other public mirror I could find, which makes automated 
deployment painful.
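
For reference, a minimal smoke test of the thin client, assuming a Phoenix 
Query Server listening on localhost:8765 (host, port, and query are 
placeholders, not taken from this report). Loading the driver below is exactly 
what fails with the NoClassDefFoundError above when the shaded thin-client jar 
is absent:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinClientSmokeTest {
    public static void main(String[] args) throws Exception {
        // The thin driver delegates to Avatica's remote driver internally,
        // which is why the unshaded jar fails with NoClassDefFoundError.
        Class.forName("org.apache.phoenix.queryserver.client.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
{code}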





[jira] [Created] (PHOENIX-3234) Support selects from parameterized views

2016-08-31 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created PHOENIX-3234:


 Summary: Support selects from parameterized views
 Key: PHOENIX-3234
 URL: https://issues.apache.org/jira/browse/PHOENIX-3234
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Randy Gelhausen


If views are made to support subqueries, there will be a natural desire to 
parameterize the predicates inside them.

Consider the view definition:

create view leads_view as
select * from leads
join (
  select id from leads where indexedCol1 = 'a' and indexedCol2 = 'b' -- etc.
) a
where leads.id = a.id

A natural next attempt will be:

select * from leads_view where indexedCol1 = 1 and indexedCol2 = 2;

It would be very useful to support pushing view predicates down into the view 
definition dynamically, instead of forcing hardcoded values.





[jira] [Created] (PHOENIX-3233) Support defining a VIEW over selects from subqueries

2016-08-31 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created PHOENIX-3233:


 Summary: Support defining a VIEW over selects from subqueries
 Key: PHOENIX-3233
 URL: https://issues.apache.org/jira/browse/PHOENIX-3233
 Project: Phoenix
  Issue Type: Bug
Reporter: Randy Gelhausen


It's common to use indexes in a subquery for self-joining back to unindexed 
columns on the same table. For example:

select * from leads
join (
  select id from leads where indexedCol1 = 'a' and indexedCol2 = 'b' -- etc.
) a
where leads.id = a.id

Rather than pushing this complex query logic out to app devs, as a cluster 
admin I'd like to expose a predefined view:

create view leads_view as
select * from leads
join (
  select id from leads where indexedCol1 = 'a' and indexedCol2 = 'b' -- etc.
) a
where leads.id = a.id

so that devs can simply "select * from leads_view".





[jira] [Commented] (PHOENIX-2757) Phoenix Can't Coerce String to Boolean

2016-03-10 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190466#comment-15190466
 ] 

Randy Gelhausen commented on PHOENIX-2757:
--

A customer ran into this issue trying to get the Phoenix driver to convert from 
strings to datetimes.

Since HBase, Hive, Spark and other ecosystem projects handle type conversions 
implicitly, or will accept arbitrary bytes, users would greatly appreciate the 
JDBC driver managing conversions for them.

If the JDBC spec makes this difficult, perhaps there could be a driver 
configuration that enables/disables such a feature.
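
For illustration, a minimal sketch of the failure mode from the stack trace 
below, against a hypothetical table with a BOOLEAN column (the connection URL, 
table, and column names are placeholders). Converting on the client side 
before binding is the current workaround:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BooleanBindExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO EXAMPLE (ID, ACTIVE) VALUES (?, ?)")) {
            ps.setInt(1, 1);
            // ps.setObject(2, "true");  // throws ERROR 203 (22005):
            //                           // VARCHAR cannot be coerced to BOOLEAN
            ps.setBoolean(2, Boolean.parseBoolean("true")); // explicit conversion works
            ps.executeUpdate();
            conn.commit();
        }
    }
}
{code}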

> Phoenix Can't Coerce String to Boolean
> --
>
> Key: PHOENIX-2757
> URL: https://issues.apache.org/jira/browse/PHOENIX-2757
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Aaron Stephens
>
> In the process of trying to UPSERT rows into Phoenix via NiFi, I've run into 
> the following:
> {noformat}
> org.apache.phoenix.schema.ConstraintViolationException: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
> at org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282) ~[na:na]
> at org.apache.phoenix.schema.types.PBoolean.toObject(PBoolean.java:136) ~[na:na]
> at org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442) ~[na:na]
> at org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166) ~[na:na]
> at org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166) ~[na:na]
> at org.apache.nifi.processors.standard.PutSQL.setParameter(PutSQL.java:728) ~[na:na]
> at org.apache.nifi.processors.standard.PutSQL.setParameters(PutSQL.java:606) ~[na:na]
> at org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:223) ~[na:na]
> at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-0.4.1.jar:0.4.1]
> at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146) ~[nifi-framework-core-0.4.1.jar:0.4.1]
> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139) [nifi-framework-core-0.4.1.jar:0.4.1]
> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49) [nifi-framework-core-0.4.1.jar:0.4.1]
> at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119) [nifi-framework-core-0.4.1.jar:0.4.1]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_79]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_79]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_79]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_79]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_79]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_79]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
> at org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71) ~[na:na]
> at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145) ~[na:na]
> ... 20 common frames omitted
> {noformat}
> It appears that Phoenix currently does not know how to coerce a String into a 
> Boolean (see 
> [here|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PBoolean.java#L124-L137]). 
> This feature is present in other drivers such as PostgreSQL.





[jira] [Commented] (PHOENIX-2648) Phoenix Spark Integration does not allow Dynamic Columns to be mapped

2016-02-04 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15132096#comment-15132096
 ] 

Randy Gelhausen commented on PHOENIX-2648:
--

Hi [~suman_d123], as a workaround, I suggest trying to create a view 
(http://phoenix.apache.org/views.html) that makes your dynamic columns appear 
as part of a normal table definition.
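
A hedged sketch of that workaround over JDBC (the table, view, and column 
names are hypothetical): columns declared in the view body surface the 
otherwise-dynamic cells as ordinary columns that phoenix-spark can map.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DynamicColumnViewWorkaround {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // The view adds the dynamic columns to the schema, so downstream
            // tools see them like any other column of the base table.
            stmt.execute("CREATE VIEW EVENTS_VIEW (DYN_COL1 VARCHAR, DYN_COL2 BIGINT) "
                    + "AS SELECT * FROM EVENTS");
        }
    }
}
{code}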

> Phoenix Spark Integration does not allow Dynamic Columns to be mapped
> -
>
> Key: PHOENIX-2648
> URL: https://issues.apache.org/jira/browse/PHOENIX-2648
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: phoenix-spark-4.6.0-HBase-0.98  , 
> spark-1.5.0-bin-hadoop2.4
>Reporter: Suman Datta
>  Labels: patch, phoenixTableAsRDD, spark
> Fix For: 4.6.0
>
>
> I am using spark-1.5.0-bin-hadoop2.4 and phoenix-spark-4.6.0-HBase-0.98 to 
> load Phoenix tables on HBase into Spark RDDs. Using the steps in 
> https://phoenix.apache.org/phoenix_spark.html, I can successfully map 
> standard columns in a table to a Phoenix RDD. 
> But my table has some important dynamic columns 
> (https://phoenix.apache.org/dynamic_columns.html) which are not getting 
> mapped to the Spark RDD in this process (using sc.phoenixTableAsRDD).
> This is proving to be a showstopper for using Phoenix with Spark.





[jira] [Commented] (PHOENIX-2632) Easier Hive->Phoenix data movement

2016-01-27 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120717#comment-15120717
 ] 

Randy Gelhausen commented on PHOENIX-2632:
--

I would like to see this moved into Phoenix in two ways:

1. [~jmahonin] agreed the "create if not exists" snippet would improve the 
existing phoenix-spark API integration. I'll look at opening an additional JIRA 
and submitting a preliminary patch to add it there.

2. I also envision this as a new "executable" module, invoked like the 
pre-built bulk CSV loading MR job:
{noformat}
HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar \
  phoenix-4.0.0-incubating-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool --table EXAMPLE --input /data/example.csv
{noformat}

Making the generic "Hive table/query <-> Phoenix" use case bash-scriptable 
opens the door to users who aren't going to write Spark code just to move data 
back and forth between Hive and HBase.

[~elserj] [~jmahonin] I'm happy to add tests and restructure the existing code 
for both 1 and 2, but will need some guidance once you decide yea or nay for 
each.

> Easier Hive->Phoenix data movement
> --
>
> Key: PHOENIX-2632
> URL: https://issues.apache.org/jira/browse/PHOENIX-2632
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> Moving tables or query results from Hive into Phoenix today requires 
> error-prone manual schema re-definition inside HBase storage handler 
> properties. Since Hive and Phoenix support near-equivalent types, it should 
> be easier for users to pick a Hive table and load it (or query results 
> derived from it) into Phoenix.
> I'm posting this to open design discussion, but also to submit my own project 
> https://github.com/randerzander/HiveToPhoenix for consideration as an early 
> solution. It creates a Spark DataFrame from a Hive query, uses Phoenix JDBC 
> to "create if not exists" an equivalent Phoenix table, and uses the 
> phoenix-spark artifact to store the DataFrame into Phoenix.
> I'm eager to get feedback on whether this is interesting/useful to the 
> Phoenix community.





[jira] [Created] (PHOENIX-2632) Easier Hive->Phoenix data movement

2016-01-26 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created PHOENIX-2632:


 Summary: Easier Hive->Phoenix data movement
 Key: PHOENIX-2632
 URL: https://issues.apache.org/jira/browse/PHOENIX-2632
 Project: Phoenix
  Issue Type: Improvement
Reporter: Randy Gelhausen


Moving tables or query results from Hive into Phoenix today requires 
error-prone manual schema re-definition inside HBase storage handler 
properties. 

Since Hive and Phoenix support near-equivalent types, it should be easier for 
users to pick a Hive table and load it (or query results derived from it) into 
Phoenix.

I'm posting this to open design discussion, but also to submit my own project 
https://github.com/randerzander/HiveToPhoenix for consideration as an early 
solution. It creates a Spark DataFrame from a Hive query, uses Phoenix JDBC to 
"create if not exists" an equivalent Phoenix table, and uses the phoenix-spark 
artifact to store the DataFrame into Phoenix.

I'm eager to get feedback on whether this is interesting/useful to the Phoenix 
community.
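
For a concrete picture, a hedged sketch of that flow in Spark's Java API (the 
Hive query, table names, and ZooKeeper URL are placeholders, and this uses the 
later SparkSession API rather than the Spark 1.x SQLContext the project 
targeted). The phoenix-spark data source with its "table"/"zkUrl" options is 
the documented integration:

{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class HiveToPhoenixSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("HiveToPhoenix")
                .enableHiveSupport()
                .getOrCreate();

        // 1. Build a DataFrame from a Hive query.
        Dataset<Row> df = spark.sql("SELECT id, name, amount FROM hive_db.example");

        // 2. A real implementation would derive a CREATE TABLE IF NOT EXISTS
        //    statement from df.schema() and run it over Phoenix JDBC first.

        // 3. Store the DataFrame into the Phoenix table via phoenix-spark.
        df.write()
          .format("org.apache.phoenix.spark")
          .mode(SaveMode.Overwrite)
          .option("table", "EXAMPLE")
          .option("zkUrl", "localhost:2181")
          .save();
    }
}
{code}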





[jira] [Updated] (PHOENIX-2297) Support jdbcClient instantiation with timeout param & statement.setQueryTimeout method

2015-09-30 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen updated PHOENIX-2297:
-
Summary: Support jdbcClient instantiation with timeout param & 
statement.setQueryTimeout method  (was: Support standard jdbc setTimeout call)

> Support jdbcClient instantiation with timeout param & 
> statement.setQueryTimeout method
> --
>
> Key: PHOENIX-2297
> URL: https://issues.apache.org/jira/browse/PHOENIX-2297
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL 
> processor, the default timeout settings cause Phoenix statements to fail.
> With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
> to -1 to get Phoenix statements working at all. Storm creates a JDBC client 
> with the standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.
> NiFi's ExecuteSQL processor sets a timeout on every statement: 
> "statement.setQueryTimeout(queryTimeout)".
> Both of these seem to be standard JDBC usage, but fail when using Phoenix's 
> JDBC client.





[jira] [Created] (PHOENIX-2297) Support standard jdbc setTimeout call

2015-09-30 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created PHOENIX-2297:


 Summary: Support standard jdbc setTimeout call
 Key: PHOENIX-2297
 URL: https://issues.apache.org/jira/browse/PHOENIX-2297
 Project: Phoenix
  Issue Type: Improvement
Reporter: Randy Gelhausen


When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL processor, 
the default timeout settings cause Phoenix statements to fail.

With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
to -1 to get Phoenix statements working at all. Storm creates a JDBC client 
with the standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.

NiFi's ExecuteSQL processor sets a timeout on every statement: 
"statement.setQueryTimeout(queryTimeout)".

Both of these seem to be standard JDBC usage, but fail when using Phoenix's 
JDBC client.
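
For reference, a minimal sketch of the standard JDBC usage in question (the 
connection URL, timeout value, and query are placeholders); per this report, 
the equivalent of this is what NiFi and Storm do and what fails against the 
Phoenix driver:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryTimeoutExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Standard JDBC: NiFi's ExecuteSQL sets this on every statement.
            stmt.setQueryTimeout(30);
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
{code}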





[jira] [Updated] (PHOENIX-2297) Support jdbcClient instantiation with timeout param & statement.setQueryTimeout method

2015-09-30 Thread Randy Gelhausen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randy Gelhausen updated PHOENIX-2297:
-
Description: 
When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL processor, 
the default timeout settings cause Phoenix statements to fail.

With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
to -1 to get Phoenix statements working. Storm creates a JDBC client with the 
standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.

NiFi's ExecuteSQL processor sets a timeout on every statement: 
"statement.setQueryTimeout(queryTimeout)".

Both of these seem to be standard JDBC usage, but fail when using PhoenixDriver.

  was:
When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL processor, 
the default timeout settings cause Phoenix statements to fail.

With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
to -1 to get Phoenix statements working at all. Storm creates a JDBC client 
with the standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.

NiFi's ExecuteSQL processor sets a timeout on every statement: 
"statement.setQueryTimeout(queryTimeout)".

Both of these seem to be standard JDBC usage, but fail when using Phoenix's 
JDBC client.


> Support jdbcClient instantiation with timeout param & 
> statement.setQueryTimeout method
> --
>
> Key: PHOENIX-2297
> URL: https://issues.apache.org/jira/browse/PHOENIX-2297
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> When using Phoenix with Storm's JDBCInsertBolt or NiFi's ExecuteSQL 
> processor, the default timeout settings cause Phoenix statements to fail.
> With JDBCInsertBolt and JDBCLookupBolt, I've had to set query timeout seconds 
> to -1 to get Phoenix statements working. Storm creates a JDBC client with the 
> standard "new JdbcClient(connectionProvider, queryTimeoutSecs)" call.
> NiFi's ExecuteSQL processor sets a timeout on every statement: 
> "statement.setQueryTimeout(queryTimeout)".
> Both of these seem to be standard JDBC usage, but fail when using 
> PhoenixDriver.





[jira] [Commented] (PHOENIX-2196) phoenix-spark should automatically convert DataFrame field names to all caps

2015-09-03 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14729178#comment-14729178
 ] 

Randy Gelhausen commented on PHOENIX-2196:
--

Makes sense for the column rules.

I built an application (https://github.com/randerzander/CSV-to-Phoenix) that 
allows users to turn CSV files into tables.

However, users don't know to capitalize their table _names_. Shouldn't the same 
normalization logic be applied to the user-supplied _table name_ as well as to 
the column names in the CSV headers? If so, can we do that on this JIRA or do 
we need to create another?
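
For context, the normalization rule in question upper-cases unquoted 
identifiers (SchemaUtil.normalizeIdentifier in phoenix-core does the real 
work). A rough client-side approximation of that rule, for illustration only:

{code:java}
import java.util.Locale;

public class IdentifierNormalization {
    // Approximates Phoenix's rule: quoted identifiers keep their case,
    // unquoted identifiers are upper-cased.
    static String normalize(String name) {
        if (name == null) {
            return null;
        }
        if (name.length() >= 2 && name.startsWith("\"") && name.endsWith("\"")) {
            return name.substring(1, name.length() - 1);
        }
        return name.toUpperCase(Locale.ROOT);
    }
}
{code}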

> phoenix-spark should automatically convert DataFrame field names to all caps
> 
>
> Key: PHOENIX-2196
> URL: https://issues.apache.org/jira/browse/PHOENIX-2196
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>Assignee: Josh Mahonin
>Priority: Minor
> Attachments: PHOENIX-2196.patch
>
>
> phoenix-spark will fail to save a DF into a Phoenix table if the DataFrame's 
> fields are not all capitalized. Since Phoenix internally capitalizes all 
> column names, the DataFrame.save method should automatically capitalize DF 
> field names as a convenience to the end user.





[jira] [Commented] (PHOENIX-2196) phoenix-spark should automatically convert DataFrame field names to all caps

2015-09-03 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14729081#comment-14729081
 ] 

Randy Gelhausen commented on PHOENIX-2196:
--

Apologies all, I haven't had the chance to test with a 4.6 instance of 
Phoenix.

I did notice a similar issue with recognizing lowercase table names. Is it 
possible to apply the same conversion rules there?

> phoenix-spark should automatically convert DataFrame field names to all caps
> 
>
> Key: PHOENIX-2196
> URL: https://issues.apache.org/jira/browse/PHOENIX-2196
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>Assignee: Josh Mahonin
>Priority: Minor
> Attachments: PHOENIX-2196.patch
>
>
> phoenix-spark will fail to save a DF into a Phoenix table if the DataFrame's 
> fields are not all capitalized. Since Phoenix internally capitalizes all 
> column names, the DataFrame.save method should automatically capitalize DF 
> field names as a convenience to the end user.





[jira] [Created] (PHOENIX-2196) phoenix-spark should automatically convert DataFrame field names to all caps

2015-08-23 Thread Randy Gelhausen (JIRA)
Randy Gelhausen created PHOENIX-2196:


 Summary: phoenix-spark should automatically convert DataFrame 
field names to all caps
 Key: PHOENIX-2196
 URL: https://issues.apache.org/jira/browse/PHOENIX-2196
 Project: Phoenix
  Issue Type: Improvement
Reporter: Randy Gelhausen
Priority: Minor


phoenix-spark will fail to save a DF into a Phoenix table if the DataFrame's 
fields are not all capitalized. Since Phoenix internally capitalizes all column 
names, the DataFrame.save method should automatically capitalize DF field names 
as a convenience to the end user.
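
Until such a convenience lands, a hedged sketch of the workaround in Spark's 
Java API (later Dataset API shown; names are placeholders): rename each field 
to upper case before saving through phoenix-spark.

{code:java}
import java.util.Locale;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class UppercaseColumns {
    // Returns a DataFrame whose field names are upper-cased, matching
    // Phoenix's normalized column names.
    static Dataset<Row> toUpperCaseColumns(Dataset<Row> df) {
        for (String col : df.columns()) {
            df = df.withColumnRenamed(col, col.toUpperCase(Locale.ROOT));
        }
        return df;
    }
}
{code}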


