Re: master vs. 4.x-HBase-1.1

2016-01-27 Thread James Taylor
Already deleted, but thanks anyway, Nick.

On Wed, Jan 27, 2016 at 9:56 PM, Nick Dimiduk  wrote:

> I believe the restriction has been released. I'll delete this branch
> tomorrow unless someone says otherwise.
>
> Thanks,
> Nick
>
> On Tue, Dec 22, 2015 at 9:03 PM, James Taylor 
> wrote:
>
>> Thanks for the update, Ram. Would you mind following up with INFRA to get a
>> more definitive answer?
>>
>> On Tuesday, December 22, 2015, Vasudevan, Ramkrishna S <
>> ramkrishna.s.vasude...@intel.com> wrote:
>>
>> > They have closed INFRA-10920 and pointed me to
>> > https://issues.apache.org/jira/browse/INFRA-10800. But there is no
>> > conclusion there either, and I think the ETA for deleting a branch is not
>> > yet finalized. I am not aware of the background as to why it was stopped.
>> >
>> >
>> >
>> > Regards
>> >
>> > Ram
>> >
>> >
>> >
>> > *From:* James Taylor [mailto:jamestay...@apache.org]
>> > *Sent:* Wednesday, December 23, 2015 2:52 AM
>> > *To:* dev@phoenix.apache.org; Vasudevan, Ramkrishna S
>> > *Subject:* Re: master vs. 4.x-HBase-1.1
>> >
>> >
>> >
>> > Yes, that was pushed in error.  INFRA-10920 was filed to remove it, but
>> > now it's closed with a link to another JIRA that links more JIRAs - it's
>> > unclear what the resolution is.
>> >
>> >
>> >
>> > Ram - would you mind following up on that?
>> >
>> >
>> >
>> > Thanks,
>> >
>> > James
>> >
>> >
>> >
>> > On Tue, Dec 22, 2015 at 1:12 PM, Nick Dimiduk wrote:
>> >
>> > Heya,
>> >
>> > I see we now have a branch 4.x-HBase-1.1. Its pom version is
>> > 4.5.0-HBase-1.1-SNAPSHOT. Was this pushed in error?
>> >
>> > -n
>> >
>> >
>> >
>>
>
>


[jira] [Commented] (PHOENIX-2542) CSV bulk loading with --schema option is broken

2016-01-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120973#comment-15120973
 ] 

James Taylor commented on PHOENIX-2542:
---

Thanks for the fix, [~maghamravikiran] - please commit to 4.x and master 
branches.

> CSV bulk loading with --schema option is broken
> ---
>
> Key: PHOENIX-2542
> URL: https://issues.apache.org/jira/browse/PHOENIX-2542
> Project: Phoenix
>  Issue Type: Bug
> Environment: Current master branch / HBase 1.1.2
>Reporter: YoungWoo Kim
>Assignee: maghamravikiran
> Attachments: PHOENIX-2542.patch
>
>
> My bulk load command looks like this:
> {code}
> HADOOP_CLASSPATH=/usr/lib/hbase/hbase-protocol.jar:/etc/hbase/conf/ hadoop 
> jar /usr/lib/phoenix/phoenix-client.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool ${HADOOP_MR_RUNTIME_OPTS} 
> --schema MYSCHEMA --table MYTABLE --input /path/to/id=2015121800/* -d 
> $'\001'
> {code}
> Got errors as following:
> {noformat}
> 15/12/21 11:47:40 INFO mapreduce.Job: Task Id : 
> attempt_1450018293185_0952_m_04_2, Status : FAILED
> Error: java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:170)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:61)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at com.google.common.base.Throwables.propagate(Throwables.java:156)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper$MapperUpsertListener.errorOnRecord(FormatToKeyValueMapper.java:246)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:92)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:44)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:147)
>   ... 9 more
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:436)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:249)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:289)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:566)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:245)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:84)
>   ... 12 more
> {noformat}
> My table MYSCHEMA.MYTABLE exists but bulk load tool does not recognize my 
> schema name.
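
The patch itself isn't inlined in this thread, but the failure mode suggests the
bare --table value was resolved without the --schema qualifier. As a hypothetical
illustration only (not the contents of PHOENIX-2542.patch), qualifying the name
with Phoenix's existing SchemaUtil helper is the kind of step a fix needs before
the UPSERT is compiled:

{code}
import org.apache.phoenix.util.SchemaUtil;

// Hypothetical sketch -- not the actual PHOENIX-2542 patch. The
// TableNotFoundException above points at the bare table name being
// resolved; joining it with the --schema value avoids that.
public class QualifiedNameSketch {
    public static void main(String[] args) {
        String schemaName = "MYSCHEMA"; // value passed via --schema
        String tableName = "MYTABLE";   // value passed via --table
        // SchemaUtil.getTableName joins the two as MYSCHEMA.MYTABLE.
        System.out.println(SchemaUtil.getTableName(schemaName, tableName));
    }
}
{code}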





[jira] [Commented] (PHOENIX-2542) CSV bulk loading with --schema option is broken

2016-01-27 Thread YoungWoo Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120969#comment-15120969
 ] 

YoungWoo Kim commented on PHOENIX-2542:
---

[~maghamraviki...@gmail.com], thanks for looking into this. I've done a quick 
check of your patch on my end and it works fine with 4.6.0!

> CSV bulk loading with --schema option is broken
> ---
>
> Key: PHOENIX-2542
> URL: https://issues.apache.org/jira/browse/PHOENIX-2542
> Project: Phoenix
>  Issue Type: Bug
> Environment: Current master branch / HBase 1.1.2
>Reporter: YoungWoo Kim
>Assignee: maghamravikiran
> Attachments: PHOENIX-2542.patch
>
>
> My bulk load command looks like this:
> {code}
> HADOOP_CLASSPATH=/usr/lib/hbase/hbase-protocol.jar:/etc/hbase/conf/ hadoop 
> jar /usr/lib/phoenix/phoenix-client.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool ${HADOOP_MR_RUNTIME_OPTS} 
> --schema MYSCHEMA --table MYTABLE --input /path/to/id=2015121800/* -d 
> $'\001'
> {code}
> Got errors as following:
> {noformat}
> 15/12/21 11:47:40 INFO mapreduce.Job: Task Id : 
> attempt_1450018293185_0952_m_04_2, Status : FAILED
> Error: java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:170)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:61)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at com.google.common.base.Throwables.propagate(Throwables.java:156)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper$MapperUpsertListener.errorOnRecord(FormatToKeyValueMapper.java:246)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:92)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:44)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:147)
>   ... 9 more
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:436)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:249)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:289)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:566)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:245)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:84)
>   ... 12 more
> {noformat}
> My table MYSCHEMA.MYTABLE exists but bulk load tool does not recognize my 
> schema name.





[jira] [Commented] (PHOENIX-2542) CSV bulk loading with --schema option is broken

2016-01-27 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120964#comment-15120964
 ] 

Gabriel Reid commented on PHOENIX-2542:
---

+1, looks good. Played around with it a little bit and it seems to work as it 
should.

> CSV bulk loading with --schema option is broken
> ---
>
> Key: PHOENIX-2542
> URL: https://issues.apache.org/jira/browse/PHOENIX-2542
> Project: Phoenix
>  Issue Type: Bug
> Environment: Current master branch / HBase 1.1.2
>Reporter: YoungWoo Kim
>Assignee: maghamravikiran
> Attachments: PHOENIX-2542.patch
>
>
> My bulk load command looks like this:
> {code}
> HADOOP_CLASSPATH=/usr/lib/hbase/hbase-protocol.jar:/etc/hbase/conf/ hadoop 
> jar /usr/lib/phoenix/phoenix-client.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool ${HADOOP_MR_RUNTIME_OPTS} 
> --schema MYSCHEMA --table MYTABLE --input /path/to/id=2015121800/* -d 
> $'\001'
> {code}
> Got errors as following:
> {noformat}
> 15/12/21 11:47:40 INFO mapreduce.Job: Task Id : 
> attempt_1450018293185_0952_m_04_2, Status : FAILED
> Error: java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:170)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:61)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at com.google.common.base.Throwables.propagate(Throwables.java:156)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper$MapperUpsertListener.errorOnRecord(FormatToKeyValueMapper.java:246)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:92)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:44)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:147)
>   ... 9 more
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:436)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:249)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:289)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:566)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:245)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:84)
>   ... 12 more
> {noformat}
> My table MYSCHEMA.MYTABLE exists but bulk load tool does not recognize my 
> schema name.





[jira] [Updated] (PHOENIX-2542) CSV bulk loading with --schema option is broken

2016-01-27 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran updated PHOENIX-2542:
-
Attachment: PHOENIX-2542.patch

[~jamestaylor], [~gabriel.reid]
   Can you please review the patch?

> CSV bulk loading with --schema option is broken
> ---
>
> Key: PHOENIX-2542
> URL: https://issues.apache.org/jira/browse/PHOENIX-2542
> Project: Phoenix
>  Issue Type: Bug
> Environment: Current master branch / HBase 1.1.2
>Reporter: YoungWoo Kim
>Assignee: maghamravikiran
> Attachments: PHOENIX-2542.patch
>
>
> My bulk load command looks like this:
> {code}
> HADOOP_CLASSPATH=/usr/lib/hbase/hbase-protocol.jar:/etc/hbase/conf/ hadoop 
> jar /usr/lib/phoenix/phoenix-client.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool ${HADOOP_MR_RUNTIME_OPTS} 
> --schema MYSCHEMA --table MYTABLE --input /path/to/id=2015121800/* -d 
> $'\001'
> {code}
> Got errors as following:
> {noformat}
> 15/12/21 11:47:40 INFO mapreduce.Job: Task Id : 
> attempt_1450018293185_0952_m_04_2, Status : FAILED
> Error: java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:170)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:61)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at com.google.common.base.Throwables.propagate(Throwables.java:156)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper$MapperUpsertListener.errorOnRecord(FormatToKeyValueMapper.java:246)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:92)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:44)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:147)
>   ... 9 more
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:436)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:249)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:289)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:566)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:245)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:84)
>   ... 12 more
> {noformat}
> My table MYSCHEMA.MYTABLE exists but bulk load tool does not recognize my 
> schema name.





[jira] [Updated] (PHOENIX-2006) py scripts support for printing its command

2016-01-27 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated PHOENIX-2006:
--
Summary: py scripts support for printing its command  (was: queryserver.py 
support for printing its command)

I found myself wanting this for debugging client classpath/hbase-site.xml 
resolution with sqlline.py. Let's fix it across the board.

> py scripts support for printing its command
> ---
>
> Key: PHOENIX-2006
> URL: https://issues.apache.org/jira/browse/PHOENIX-2006
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2006.00.patch
>
>
> {{zkServer.sh}} accepts the command {{print-cmd}}, for printing out the java 
> command it would launch. This is pretty handy! Let's reproduce it in 
> {{queryserver.py}}.





Re: master vs. 4.x-HBase-1.1

2016-01-27 Thread Nick Dimiduk
I believe the restriction has been released. I'll delete this branch
tomorrow unless someone says otherwise.

Thanks,
Nick

On Tue, Dec 22, 2015 at 9:03 PM, James Taylor 
wrote:

> Thanks for the update, Ram. Would you mind following up with INFRA to get a
> more definitive answer?
>
> On Tuesday, December 22, 2015, Vasudevan, Ramkrishna S <
> ramkrishna.s.vasude...@intel.com> wrote:
>
> > They have closed INFRA-10920 and pointed me to
> > https://issues.apache.org/jira/browse/INFRA-10800. But there is no
> > conclusion there either, and I think the ETA for deleting a branch is not
> > yet finalized. I am not aware of the background as to why it was stopped.
> >
> >
> >
> > Regards
> >
> > Ram
> >
> >
> >
> > *From:* James Taylor [mailto:jamestay...@apache.org]
> > *Sent:* Wednesday, December 23, 2015 2:52 AM
> > *To:* dev@phoenix.apache.org; Vasudevan, Ramkrishna S
> > *Subject:* Re: master vs. 4.x-HBase-1.1
> >
> >
> >
> > Yes, that was pushed in error.  INFRA-10920 was filed to remove it, but
> > now it's closed with a link to another JIRA that links more JIRAs - it's
> > unclear what the resolution is.
> >
> >
> >
> > Ram - would you mind following up on that?
> >
> >
> >
> > Thanks,
> >
> > James
> >
> >
> >
> > On Tue, Dec 22, 2015 at 1:12 PM, Nick Dimiduk wrote:
> >
> > Heya,
> >
> > I see we now have a branch 4.x-HBase-1.1. Its pom version is
> > 4.5.0-HBase-1.1-SNAPSHOT. Was this pushed in error?
> >
> > -n
> >
> >
> >
>


[jira] [Commented] (PHOENIX-2632) Easier Hive->Phoenix data movement

2016-01-27 Thread Randy Gelhausen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120717#comment-15120717
 ] 

Randy Gelhausen commented on PHOENIX-2632:
--

I would like to see this moved into Phoenix in two ways:

1. [~jmahonin] agreed the "create if not exists" snippet would improve the 
existing phoenix-spark API integration. I'll look at opening an additional JIRA 
and submitting a preliminary patch to add it there.

2. I also envision this as a new "executable" module similar to the pre-built 
bulk CSV loading MR job: HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf 
hadoop jar phoenix-4.0.0-incubating-client.jar 
org.apache.phoenix.mapreduce.CsvBulkLoadTool --table EXAMPLE --input 
/data/example.csv

Making the generic "Hive table/query <-> Phoenix" use case bash-scriptable 
opens the door to users who aren't going to write Spark code just to move data 
back and forth between Hive and HBase.

[~elserj] [~jmahonin] I'm happy to add tests and restructure the existing code 
for both 1 and 2, but will need some guidance once you decide yea or nay for 
each.
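
For reference, the "create if not exists" step described in this issue is plain
Phoenix JDBC. A minimal sketch, with a placeholder connection URL, table name,
and column list (a real tool would derive the columns from the Hive table's
schema):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateIfNotExistsSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper quorum in the standard Phoenix JDBC URL.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181");
             Statement stmt = conn.createStatement()) {
            // Idempotent DDL: a no-op if EXAMPLE already exists.
            stmt.execute("CREATE TABLE IF NOT EXISTS EXAMPLE ("
                + "ID BIGINT NOT NULL PRIMARY KEY, "
                + "NAME VARCHAR)");
        }
    }
}
{code}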

> Easier Hive->Phoenix data movement
> --
>
> Key: PHOENIX-2632
> URL: https://issues.apache.org/jira/browse/PHOENIX-2632
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> Moving tables or query results from Hive into Phoenix today requires error 
> prone manual schema re-definition inside HBase storage handler properties. 
> Since Hive and Phoenix support near equivalent types, it should be easier for 
> users to pick a Hive table and load it (or derived query results) from it.
> I'm posting this to open design discussion, but also submit my own project 
> https://github.com/randerzander/HiveToPhoenix for consideration as an early 
> solution. It creates a Spark DataFrame from a Hive query, uses Phoenix JDBC 
> to "create if not exists" a Phoenix equivalent table, and uses the 
> phoenix-spark artifact to store the DataFrame into Phoenix.
> I'm eager to get feedback if this is interesting/useful to the Phoenix 
> community.





[jira] [Commented] (PHOENIX-2636) Figure out a work around for java.lang.NoSuchFieldError: in when compiling against HBase < 0.98.17

2016-01-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120601#comment-15120601
 ] 

Andrew Purtell commented on PHOENIX-2636:
-

Just mentioned on the other issue that cut-and-paste works too. Reflection-based 
approaches would be fragile in a different way. Suggest you watch the upstream 
classes for evolution, though: as in this case, HBase might fix a data loss bug 
against some version of HDFS but Phoenix would still be vulnerable.

> Figure out a work around for java.lang.NoSuchFieldError: in when compiling 
> against HBase < 0.98.17
> --
>
> Key: PHOENIX-2636
> URL: https://issues.apache.org/jira/browse/PHOENIX-2636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version 
> prior to 0.98.17 and running against 0.98.17, region assignment fails to 
> complete because of the error:
> {code}
> java.lang.NoSuchFieldError: in
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.<init>(IndexedWALEditCodec.java:111)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.<init>(IndexedWALEditCodec.java:126)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Commented] (PHOENIX-2629) NoClassDef error for BaseDecoder$PBIS on log replay

2016-01-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120599#comment-15120599
 ] 

Andrew Purtell commented on PHOENIX-2629:
-

Just be sure to track the source classes for possible evolution. As in this 
case, HBase might fix a data loss bug on some version of HDFS but Phoenix would 
still be vulnerable. Maybe we should sync up PMC to PMC at every release and go 
over the change logs?

> NoClassDef error for BaseDecoder$PBIS on log replay
> ---
>
> Key: PHOENIX-2629
> URL: https://issues.apache.org/jira/browse/PHOENIX-2629
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Attachments: PHOENIX-2629.patch, PHOENIX-2629_v2.patch, 
> PHOENIX-2629_v3.patch, PHOENIX-2629_v4.patch
>
>
> HBase version 0.98.13 with Phoenix 4.7.0-RC0
> {code}
> executor.EventHandler - Caught throwable while processing event RS_LOG_REPLAY
> java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/codec/BaseDecoder$PBIS
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:63)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.codec.BaseDecoder$PBIS
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}





[jira] [Commented] (PHOENIX-2629) NoClassDef error for BaseDecoder$PBIS on log replay

2016-01-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120597#comment-15120597
 ] 

Andrew Purtell commented on PHOENIX-2629:
-

FWIW copy-paste seems an OK approach to me. Reflection-based approaches might 
work but would be fragile in a different way.

> NoClassDef error for BaseDecoder$PBIS on log replay
> ---
>
> Key: PHOENIX-2629
> URL: https://issues.apache.org/jira/browse/PHOENIX-2629
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Attachments: PHOENIX-2629.patch, PHOENIX-2629_v2.patch, 
> PHOENIX-2629_v3.patch, PHOENIX-2629_v4.patch
>
>
> HBase version 0.98.13 with Phoenix 4.7.0-RC0
> {code}
> executor.EventHandler - Caught throwable while processing event RS_LOG_REPLAY
> java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/codec/BaseDecoder$PBIS
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:63)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.codec.BaseDecoder$PBIS
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}





[jira] [Comment Edited] (PHOENIX-2636) Figure out a work around for java.lang.NoSuchFieldError: in when compiling against HBase < 0.98.17

2016-01-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120576#comment-15120576
 ] 

Andrew Purtell edited comment on PHOENIX-2636 at 1/28/16 1:55 AM:
--

No, our internal version is 0.98.16 plus Enis's patch, so there is nothing 
preventing any upgrade. We should talk more in house. :-)

Edit: Also, you may or may not be aware that our internal version of Phoenix is 
harmonized with the HBase version and all components in the stack are 
recompiled against each other for every release. We found the 
ClassCastException in release validation, cherry-picked the fix from upstream, 
and proceeded without further incident. Coprocessors should be thought of like 
Linux kernel modules and managed the same way. 

I would also suggest because of the decoupled nature of the whole Hadoop 
ecosystem and pace of change, binary convenience artifacts are not really that 
convenient. HBase binary artifacts are of limited utility as they ship because 
it's probable the user is running a different version of Hadoop than the 
default for the build. This is true to some extent up and down the stack. Some 
projects resort to shims. I think the whole ecosystem would ultimately do users 
a favor by switching to source only releases. It won't happen but it should. 
Let Apache Bigtop handle the heavy lifting of producing sets of binary 
artifacts known to integrate cleanly. 


was (Author: apurtell):
No, our internal version is 0.98.16 plus Enis's patch, so there is nothing 
preventing any upgrade. We should talk more in house. :-)

> Figure out a work around for java.lang.NoSuchFieldError: in when compiling 
> against HBase < 0.98.17
> --
>
> Key: PHOENIX-2636
> URL: https://issues.apache.org/jira/browse/PHOENIX-2636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version 
> prior to 0.98.17 and running against 0.98.17, region assignment fails to 
> complete because of the error:
> {code}
> java.lang.NoSuchFieldError: in
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.<init>(IndexedWALEditCodec.java:111)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.<init>(IndexedWALEditCodec.java:126)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Commented] (PHOENIX-2636) Figure out a work around for java.lang.NoSuchFieldError: in when compiling against HBase < 0.98.17

2016-01-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120585#comment-15120585
 ] 

Andrew Purtell commented on PHOENIX-2636:
-

I suspect this, like PHOENIX-2629, can be handled with reflection, if there's 
interest in making older releases binary compatible with newer versions of the 
WAL code. It would take patch releases putting reflection in place to work with 
both older and newer versions. 
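
A rough illustration of what such a reflection shim might look like. It assumes
(not confirmed in this thread) that 0.98.17 changed the declared type of
BaseDecoder's {{in}} field, which is exactly the kind of incompatible change
that surfaces as NoSuchFieldError in code compiled against the older class:

{code}
import java.io.InputStream;
import java.lang.reflect.Field;

// Sketch of a runtime probe: detect which BaseDecoder field layout the
// installed HBase exposes, so one Phoenix binary can choose a compatible
// decoder path instead of dying with NoSuchFieldError: in.
public class BaseDecoderCompat {
    public static boolean hasPlainInputStreamField() {
        try {
            Class<?> clazz = Class.forName("org.apache.hadoop.hbase.codec.BaseDecoder");
            Field in = clazz.getDeclaredField("in");
            // Assumption: pre-0.98.17 declares `in` as a plain InputStream,
            // while 0.98.17 narrowed its type, breaking field resolution.
            return in.getType() == InputStream.class;
        } catch (ClassNotFoundException | NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("plain InputStream field: " + hasPlainInputStreamField());
    }
}
{code}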

> Figure out a work around for java.lang.NoSuchFieldError: in when compiling 
> against HBase < 0.98.17
> --
>
> Key: PHOENIX-2636
> URL: https://issues.apache.org/jira/browse/PHOENIX-2636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version 
> prior to 0.98.17 and running against 0.98.17, region assignment fails to 
> complete because of the error:
> {code}
> java.lang.NoSuchFieldError: in
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.<init>(IndexedWALEditCodec.java:111)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.<init>(IndexedWALEditCodec.java:126)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Commented] (PHOENIX-2636) Figure out a work around for java.lang.NoSuchFieldError: in when compiling against HBase < 0.98.17

2016-01-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120576#comment-15120576
 ] 

Andrew Purtell commented on PHOENIX-2636:
-

No, our internal version is 0.98.16 plus Enis's patch, so there is nothing 
preventing any upgrade. We should talk more in house. :-)

> Figure out a work around for java.lang.NoSuchFieldError: in when compiling 
> against HBase < 0.98.17
> --
>
> Key: PHOENIX-2636
> URL: https://issues.apache.org/jira/browse/PHOENIX-2636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version 
> prior to 0.98.17 and running against 0.98.17, region assignment fails to 
> complete because of the error:
> {code}
> java.lang.NoSuchFieldError: in
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.<init>(IndexedWALEditCodec.java:111)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.<init>(IndexedWALEditCodec.java:126)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Updated] (PHOENIX-2629) NoClassDef error for BaseDecoder$PBIS on log replay

2016-01-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-2629:
--
Attachment: PHOENIX-2629_v4.patch

Turns out using the copy-pasted version of 
org.apache.hadoop.hbase.codec.BaseDecoder when the HBase runtime is 0.98.17 
fixes the other issue (PHOENIX-2636) too. 

> NoClassDef error for BaseDecoder$PBIS on log replay
> ---
>
> Key: PHOENIX-2629
> URL: https://issues.apache.org/jira/browse/PHOENIX-2629
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Attachments: PHOENIX-2629.patch, PHOENIX-2629_v2.patch, 
> PHOENIX-2629_v3.patch, PHOENIX-2629_v4.patch
>
>
> HBase version 0.98.13 with Phoenix 4.7.0-RC0
> {code}
> executor.EventHandler - Caught throwable while processing event RS_LOG_REPLAY
> java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/codec/BaseDecoder$PBIS
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:63)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.codec.BaseDecoder$PBIS
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}





[jira] [Commented] (PHOENIX-2636) Figure out a work around for java.lang.NoSuchFieldError: in when compiling against HBase < 0.98.17

2016-01-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120514#comment-15120514
 ] 

James Taylor commented on PHOENIX-2636:
---

+1 on upgrading our pom to compile against 0.98.17. We can work out the 
0.98.16.1 issues on our internal fork (or better yet, move to 0.98.17).

> Figure out a work around for java.lang.NoSuchFieldError: in when compiling 
> against HBase < 0.98.17
> --
>
> Key: PHOENIX-2636
> URL: https://issues.apache.org/jira/browse/PHOENIX-2636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version 
> prior to 0.98.17 and running against 0.98.17, region assignment fails to 
> complete because of the error:
> {code}
> java.lang.NoSuchFieldError: in
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.<init>(IndexedWALEditCodec.java:111)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.<init>(IndexedWALEditCodec.java:126)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Commented] (PHOENIX-2632) Easier Hive->Phoenix data movement

2016-01-27 Thread Josh Mahonin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120500#comment-15120500
 ] 

Josh Mahonin commented on PHOENIX-2632:
---

This looks pretty neat [~rgelhau]

I bet there's a way to take your 'CREATE TABLE IF NOT EXISTS' functionality and 
wrap it into the existing Spark DataFrame code, which could be made to use the 
SaveMode.Ignore option [1]. Right now it only supports SaveMode.Overwrite, 
which assumes the table is set up already.

Once that's in, I think the Hive->Phoenix functionality becomes a documentation 
exercise: show how to set up the Hive table as a DataFrame, then invoke 
df.save("org.apache.phoenix.spark"...) on it.

[1] http://spark.apache.org/docs/latest/sql-programming-guide.html
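
A sketch of what that documentation exercise might look like, using the Spark
1.x Java API: read the Hive table into a DataFrame, then hand it to
phoenix-spark. The table names and ZooKeeper quorum are placeholders, and
SaveMode.Overwrite reflects the current limitation noted above:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.hive.HiveContext;

public class HiveToPhoenixSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("hive-to-phoenix-sketch");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        HiveContext hive = new HiveContext(jsc.sc());

        // Read the Hive side as a DataFrame; src_table is a placeholder.
        DataFrame df = hive.sql("SELECT * FROM src_table");

        // Hand the rows to phoenix-spark. SaveMode.Overwrite is the only
        // mode supported today and assumes TARGET_TABLE already exists.
        Map<String, String> options = new HashMap<String, String>();
        options.put("table", "TARGET_TABLE");
        options.put("zkUrl", "zkhost:2181"); // placeholder ZooKeeper quorum
        df.save("org.apache.phoenix.spark", SaveMode.Overwrite, options);

        jsc.stop();
    }
}
{code}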



> Easier Hive->Phoenix data movement
> --
>
> Key: PHOENIX-2632
> URL: https://issues.apache.org/jira/browse/PHOENIX-2632
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> Moving tables or query results from Hive into Phoenix today requires error 
> prone manual schema re-definition inside HBase storage handler properties. 
> Since Hive and Phoenix support near equivalent types, it should be easier for 
> users to pick a Hive table and load it (or derived query results) from it.
> I'm posting this to open design discussion, but also submit my own project 
> https://github.com/randerzander/HiveToPhoenix for consideration as an early 
> solution. It creates a Spark DataFrame from a Hive query, uses Phoenix JDBC 
> to "create if not exists" a Phoenix equivalent table, and uses the 
> phoenix-spark artifact to store the DataFrame into Phoenix.
> I'm eager to get feedback if this is interesting/useful to the Phoenix 
> community.





[jira] [Comment Edited] (PHOENIX-2632) Easier Hive->Phoenix data movement

2016-01-27 Thread Josh Mahonin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120500#comment-15120500
 ] 

Josh Mahonin edited comment on PHOENIX-2632 at 1/28/16 12:27 AM:
-

This looks pretty neat [~rgelhau]

I bet there's a way to take your 'CREATE TABLE IF NOT EXISTS' functionality and 
wrap it into the existing Spark DataFrame code, which could be made to use the 
SaveMode.Ignore option [1]. Right now it only supports SaveMode.Overwrite, 
which assumes the table is created already.

Once that's in, I think the Hive->Phoenix functionality becomes a documentation 
exercise: show how to set up the Hive table as a DataFrame, then invoke 
df.save("org.apache.phoenix.spark"...) on it.

[1] http://spark.apache.org/docs/latest/sql-programming-guide.html




was (Author: jmahonin):
This looks pretty neat [~rgelhau]

I bet there's a way to take your 'CREATE TABLE IF NOT EXISTS' functionality 
could be wrapped into the existing Spark DataFrame code, and be made to use for 
the SaveMode.Ignore option [1]. Right now it only supports SaveMode.Overwrite, 
which assumes the table is setup already.

Once that's in, I think the Hive->Phoenix functionality becomes a documentation 
exercise: show how to set up the Hive table as a DataFrame, then invoke 
df.save("org.apache.phoenix.spark"...) on it.

[1] http://spark.apache.org/docs/latest/sql-programming-guide.html



> Easier Hive->Phoenix data movement
> --
>
> Key: PHOENIX-2632
> URL: https://issues.apache.org/jira/browse/PHOENIX-2632
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> Moving tables or query results from Hive into Phoenix today requires error 
> prone manual schema re-definition inside HBase storage handler properties. 
> Since Hive and Phoenix support near equivalent types, it should be easier for 
> users to pick a Hive table and load it (or derived query results) from it.
> I'm posting this to open design discussion, but also submit my own project 
> https://github.com/randerzander/HiveToPhoenix for consideration as an early 
> solution. It creates a Spark DataFrame from a Hive query, uses Phoenix JDBC 
> to "create if not exists" a Phoenix equivalent table, and uses the 
> phoenix-spark artifact to store the DataFrame into Phoenix.
> I'm eager to get feedback if this is interesting/useful to the Phoenix 
> community.





[jira] [Commented] (PHOENIX-2634) Dynamic service discovery for QueryServer

2016-01-27 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120495#comment-15120495
 ] 

Josh Elser commented on PHOENIX-2634:
-

Hi [~warwithin]. This would be neat to play with some more.

I've experimented with putting multiple PQS instances behind a "dumb" load 
balancer (haproxy, specifically) with success. This has some edge cases (which 
I've talked with [~jamestaylor] about somewhere previously), notably 
automatically resuming failed queries (assuming a static dataset). These are 
the same sorts of problems you'd have to address to implement something like 
pagination/cursors.

I've also added a [new 
attribute|http://calcite.apache.org/docs/avatica_protobuf_reference.html#rpcmetadata]
 that is returned by PQS at the wire level for every request. This would let 
you implement your own client-routing decisions so that you could have full 
control over how a client "routes" its requests. This is just a hammer though, 
not a house.

When you start getting into load balancing and HA, service discovery also 
becomes an important piece (how do your clients actually find *where* your 
service is?). YARN-913 introduced a "registry" which currently has a 
ZooKeeper-backed solution for service discovery. I believe there is some work 
on a DNS frontend for this, but I'm not sure of its state or where it's being 
tracked. There are many other systems out there which could be leveraged for 
this aspect.

So, this is a long-winded way to say: what do you think should actually be 
done? PQS is designed to scale horizontally already (as its REST-iness would 
imply), so what do you think the next step would be? Personally, I think the 
next step is improving the edges of running behind a "dumb" load balancer and 
then looking into recommendations on how DNS could be put in front of that.

Clients can then use a single name to refer to some "farm" of PQS instances, 
with the load balancer handling the routing logic. This would provide HA, 
service discovery and load balancing.

One of these days, I'll also try to write up some goodness to deploy PQS on top 
of Apache Slider to get some auto-magic scaling across a YARN instance. Not 
sure if my long-term vision would hinge on Slider or just be a deployment 
option.
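
Picking up the "single name" point above: application code written against the
thin client never sees the individual PQS instances. A minimal sketch, with a
hypothetical balancer hostname:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinClientViaBalancer {
    public static void main(String[] args) throws Exception {
        // pqs-lb.example.com is a hypothetical name fronting a PQS "farm";
        // the URL shape is the standard Phoenix thin-client JDBC URL.
        String url = "jdbc:phoenix:thin:url=http://pqs-lb.example.com:8765;"
            + "serialization=PROTOBUF";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
{code}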

> Dynamic service discovery for QueryServer
> -
>
> Key: PHOENIX-2634
> URL: https://issues.apache.org/jira/browse/PHOENIX-2634
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: YoungWoo Kim
>
> It would be nice if Phoenix QueryServer supports a feature like HIVE-7935 for 
> HA and load balancing





[jira] [Commented] (PHOENIX-2636) Figure out a work around for java.lang.NoSuchFieldError: in when compiling against HBase < 0.98.17

2016-01-27 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120492#comment-15120492
 ] 

Samarth Jain commented on PHOENIX-2636:
---

[~mujtabachohan] verified that this issue happens when compiling with 0.98.16.1 
and running against 0.98.17. He also verified that compiling and running 
against 0.98.17 works fine. Maybe an easier workaround is to just upgrade our 
pom to 0.98.17. But that doesn't prevent pain for folks who are managing 
their own forked Phoenix versions that compile against the 0.98.16 version of 
HBase. FWIW, we at Salesforce are on the 0.98.16.1 version of HBase, and this 
will essentially stop us from upgrading to 0.98.17 on the server side.
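
To make the failure mode concrete, here is a minimal, self-contained sketch 
(hypothetical classes, not the real HBase ones) of why retyping a field only 
blows up at run time:

{code}
import java.io.IOException;
import java.io.InputStream;

// JVM field references are resolved by name *and* type descriptor, so
// changing a field's declared type is binary-incompatible even though
// recompiling against the new version would succeed.

// Version A of the superclass, present at compile time:
class BaseDecoderSketch {
    protected InputStream in;
}

// At run time, imagine the cluster ships version B instead:
//     class BaseDecoderSketch { protected PBIS in; }  // descriptor changed

class PhoenixDecoderSketch extends BaseDecoderSketch {
    int readOne() throws IOException {
        // Bytecode compiled against version A embeds the reference
        // BaseDecoderSketch.in with descriptor Ljava/io/InputStream;.
        // Resolving that against version B fails with
        // java.lang.NoSuchFieldError: in
        return in.read();
    }
}
{code}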

> Figure out a work around for java.lang.NoSuchFieldError: in when compiling 
> against HBase < 0.98.17
> --
>
> Key: PHOENIX-2636
> URL: https://issues.apache.org/jira/browse/PHOENIX-2636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version 
> prior to 0.98.17 and running against 0.98.17, region assignment fails to 
> complete because of the error:
> {code}
> java.lang.NoSuchFieldError: in
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.(IndexedWALEditCodec.java:111)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.(IndexedWALEditCodec.java:126)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2636) Figure out a work around for java.lang.NoSuchFieldError: in when compiling against HBase < 0.98.17

2016-01-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120456#comment-15120456
 ] 

James Taylor commented on PHOENIX-2636:
---

[~apurtell] - any ideas?

> Figure out a work around for java.lang.NoSuchFieldError: in when compiling 
> against HBase < 0.98.17
> --
>
> Key: PHOENIX-2636
> URL: https://issues.apache.org/jira/browse/PHOENIX-2636
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version 
> prior to 0.98.17 and running against 0.98.17, region assignment fails to 
> complete because of the error:
> {code}
> java.lang.NoSuchFieldError: in
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.(IndexedWALEditCodec.java:111)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.(IndexedWALEditCodec.java:126)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2338) Couple of little tweaks in "Phoenix in 15 minutes or less"

2016-01-27 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120438#comment-15120438
 ] 

Thomas D'Silva commented on PHOENIX-2338:
-

Looks good, I committed this.

> Couple of little tweaks in "Phoenix in 15 minutes or less"
> --
>
> Key: PHOENIX-2338
> URL: https://issues.apache.org/jira/browse/PHOENIX-2338
> Project: Phoenix
>  Issue Type: Bug
> Environment: On the website.
>Reporter: James Stanier
>Assignee: Thomas D'Silva
>Priority: Trivial
>  Labels: documentation
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2338.patch
>
>
> There are a couple of little things I'd like to fix in the "Phoenix in 15 
> minutes or less" page, based on my experience of running through the 
> instructions myself. Just wanted to register them before I put in a patch...
> 1. When copying and pasting the us_population.sql queries, the 
> Microsoft-style smart quotes lead to parsing errors when running with 
> psql.py: 
> {code}
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Unexpected char: '“'
>   at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.nextStatement(SQLParser.java:98)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.nextStatement(PhoenixStatement.java:1278)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.(PhoenixPreparedStatement.java:84)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:312)
>   at 
> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:277)
>   at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:222)
> Caused by: java.lang.RuntimeException: Unexpected char: '“'
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mOTHER(PhoenixSQLLexer.java:4169)
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mTokens(PhoenixSQLLexer.java:5226)
>   at org.antlr.runtime.Lexer.nextToken(Lexer.java:85)
>   at 
> org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:143)
>   at 
> org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:137)
>   at 
> org.antlr.runtime.CommonTokenStream.consume(CommonTokenStream.java:71)
>   at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:106)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.parseAlias(PhoenixSQLParser.java:6106)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.selectable(PhoenixSQLParser.java:5223)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.select_list(PhoenixSQLParser.java:5050)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.single_select(PhoenixSQLParser.java:4315)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.unioned_selects(PhoenixSQLParser.java:4432)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.select_node(PhoenixSQLParser.java:4497)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:765)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.nextStatement(PhoenixSQLParser.java:450)
>   at org.apache.phoenix.parse.SQLParser.nextStatement(SQLParser.java:88)
>   ... 5 more
> {code}
> 2. Similarly, the CSV data provided does not have line breaks after each 
> line, which, when copied and pasted, gives an error:  
> {code}
> 15/10/20 10:50:45 ERROR util.CSVCommonsLoader: Error upserting record [NY, 
> New York, 8143197 CA, Los Angeles, 3844829 IL, Chicago, 2842518 TX, Houston, 
> 2016582 PA, Philadelphia, 1463281 AZ, Phoenix, 1461575 TX, San Antonio, 
> 1256509 CA, San Diego, 1255540 TX, Dallas, 1213825 CA, San Jose, 912332 ]: 
> java.sql.SQLException: ERROR 201 (22000): Illegal data.
> {code}
> 3. Just for clarity, I'd like to change the bullet-point "copy the phoenix 
> jar into the HBase lib directory of every region server" to "copy the phoenix 
> /server/ jar into the HBase lib directory of every region server"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2636) Figure out a work around for java.lang.NoSuchFieldError: in when compiling against HBase < 0.98.17

2016-01-27 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-2636:
-

 Summary: Figure out a work around for java.lang.NoSuchFieldError: 
in when compiling against HBase < 0.98.17
 Key: PHOENIX-2636
 URL: https://issues.apache.org/jira/browse/PHOENIX-2636
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain
Priority: Critical


Working on PHOENIX-2629 revealed that when compiling against an HBase version 
prior to 0.98.17 and running against 0.98.17, region assignment fails to 
complete because of the error:

{code}
java.lang.NoSuchFieldError: in
at 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.(IndexedWALEditCodec.java:111)
at 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.(IndexedWALEditCodec.java:126)
at 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
at 
org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
at 
org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2629) NoClassDef error for BaseDecoder$PBIS on log replay

2016-01-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-2629:
--
Attachment: PHOENIX-2629_v3.patch

Thanks for the testing, [~mujtabachohan]. The attached patch fixes the binary 
incompatibility issue when compiling Phoenix (with 0.98.16 HBase) and running 
against 0.98.13. 

Mujtaba did find an issue though when compiling with 0.98.16 (with and without 
my patch) and running against 0.98.17.

{code}
java.lang.NoSuchFieldError: in
at 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.(IndexedWALEditCodec.java:111)
at 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.(IndexedWALEditCodec.java:202)
at 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
at 
org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
at 
org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

Compiling with 0.98.17 and running against 0.98.17 did work fine. 
 
It looks like [~enis] did attempt a fix in HBASE-14904 but it doesn't seem like 
it worked. I will file a separate JIRA to track this. 

[~jamestaylor], please review.

> NoClassDef error for BaseDecoder$PBIS on log replay
> ---
>
> Key: PHOENIX-2629
> URL: https://issues.apache.org/jira/browse/PHOENIX-2629
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Attachments: PHOENIX-2629.patch, PHOENIX-2629_v2.patch, 
> PHOENIX-2629_v3.patch
>
>
> HBase version 0.98.13 with Phoenix 4.7.0-RC0
> {code}
> executor.EventHandler - Caught throwable while processing event RS_LOG_REPLAY
> java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/codec/BaseDecoder$PBIS
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:63)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.codec.BaseDecoder$PBIS
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>  

[jira] [Commented] (PHOENIX-2632) Easier Hive->Phoenix data movement

2016-01-27 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120348#comment-15120348
 ] 

Josh Elser commented on PHOENIX-2632:
-

[~rgelhau], how do you see something like this getting included in Phoenix? A 
new Maven module that can sit downstream of phoenix-spark, maybe producing an 
uber-jar to make classpath stuff easier from the Phoenix side?

Any possibility of adding some end-to-end tests? Such tests would be nice to 
have to help catch future breakages as they happen, instead of discovering 
them after a release when someone goes to use it.

In general, any tooling that can help get your data into Phoenix seems like a 
valuable addition to me. There are many ways to hammer that nail, but this 
seems like it would be a reasonably general-purpose one to provide.

> Easier Hive->Phoenix data movement
> --
>
> Key: PHOENIX-2632
> URL: https://issues.apache.org/jira/browse/PHOENIX-2632
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Randy Gelhausen
>
> Moving tables or query results from Hive into Phoenix today requires 
> error-prone manual schema re-definition inside HBase storage handler 
> properties. Since Hive and Phoenix support near-equivalent types, it should 
> be easier for users to pick a Hive table and load it (or query results 
> derived from it) into Phoenix.
> I'm posting this to open design discussion, but also submit my own project 
> https://github.com/randerzander/HiveToPhoenix for consideration as an early 
> solution. It creates a Spark DataFrame from a Hive query, uses Phoenix JDBC 
> to "create if not exists" a Phoenix equivalent table, and uses the 
> phoenix-spark artifact to store the DataFrame into Phoenix.
> I'm eager to get feedback if this is interesting/useful to the Phoenix 
> community.
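
For reference, a rough sketch of that path using the documented phoenix-spark 
DataFrame integration (table and ZooKeeper names are placeholders, and the 
target Phoenix table is assumed to already exist):

{code}
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.hive.HiveContext;

public class HiveToPhoenixSketch {
    public static void main(String[] args) {
        SparkContext sc =
                new SparkContext(new SparkConf().setAppName("hive2phoenix"));
        HiveContext sql = new HiveContext(sc);

        // Derive the data to move with an arbitrary Hive query.
        DataFrame df = sql.sql("SELECT id, name FROM my_hive_table");

        // Write it out through phoenix-spark; the proposed tool would
        // issue CREATE TABLE IF NOT EXISTS against Phoenix first.
        df.write()
          .format("org.apache.phoenix.spark")
          .mode(SaveMode.Overwrite)
          .option("table", "MY_PHOENIX_TABLE")
          .option("zkUrl", "zkhost:2181")
          .save();

        sc.stop();
    }
}
{code}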



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2635) Partial index rebuild doesn't delete prior index row

2016-01-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2635:
--
Description: 
The partial rebuild index feature for mutable secondary indexes does not do the 
correct index maintenance. We currently only insert the new index rows based on 
the current data row values which would not correctly remove the previous index 
row (thus leading to an invalid index). Instead, we should replay the data row 
mutations so that the coprocessors generate the correct deletes and updates.

Also, instead of *every* region running the partial index rebuild, we should 
have each region only replay their own data mutations so that we're not 
duplicating work.

A third (and perhaps most serious) issue is that the partial index rebuild 
could trigger the upgrade code before a cluster is ready to be upgraded. We'll 
definitely want to prevent that.

  was:
The partial rebuild index feature for mutable secondary indexes does not do the 
correct index maintenance. We currently only insert the new index rows based on 
the current data row values which would not correctly remove the previous index 
row (thus leading to an invalid index). Instead, we should replay the data row 
mutations so that the coprocessors generate the correct deletes and updates.

Also, instead of *every* region running the partial index rebuild, we should 
have each region only replay their own data mutations so that we're not 
duplicating work.


> Partial index rebuild doesn't delete prior index row
> 
>
> Key: PHOENIX-2635
> URL: https://issues.apache.org/jira/browse/PHOENIX-2635
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> The partial rebuild index feature for mutable secondary indexes does not do 
> the correct index maintenance. We currently only insert the new index rows 
> based on the current data row values which would not correctly remove the 
> previous index row (thus leading to an invalid index). Instead, we should 
> replay the data row mutations so that the coprocessors generate the correct 
> deletes and updates.
> Also, instead of *every* region running the partial index rebuild, we should 
> have each region only replay their own data mutations so that we're not 
> duplicating work.
> A third (and perhaps most serious) issue is that the partial index rebuild 
> could trigger the upgrade code before a cluster is ready to be upgraded. 
> We'll definitely want to prevent that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2016-01-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2221:
--
Attachment: PHOENIX-2221-v7.patch

Working patch. Please review, [~samarthjain]. There are some issues, 
independent of this patch, with the way we do the partial index rebuild that 
need to be addressed (PHOENIX-2635). I'll do that next.

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2221-v1.patch, PHOENIX-2221-v2.patch, 
> PHOENIX-2221-v3.patch, PHOENIX-2221-v4.patch, PHOENIX-2221-v5.patch, 
> PHOENIX-2221-v6.patch, PHOENIX-2221-v7.patch
>
>
> In one use case, it was deemed better to not accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> the queries doing bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> and when a query runs against the index regions, one can be reasonably sure 
> that the query ran with the most recent data in the data regions. When the 
> index regions are unavailable, the writes to the data table are rejected. 
> Read queries off of the index regions would have deterministic performance 
> (and on the other hand if the index is disabled, then the read queries would 
> have to go to the data regions until the indexes are rebuilt, and the queries 
> would suffer).
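
Schematically, the option boils down to the following failure-policy choice 
(hypothetical names, not Phoenix's actual index failure policy API):

{code}
import java.io.IOException;

public class IndexFailureSketch {
    private final boolean blockWrites;

    public IndexFailureSketch(boolean blockWrites) {
        this.blockWrites = blockWrites;
    }

    void onIndexWriteFailure(Exception cause) throws IOException {
        if (blockWrites) {
            // Reject the data-table write: index and data stay in lockstep,
            // and reads off the index keep deterministic performance.
            throw new IOException("Index unavailable; rejecting data write",
                    cause);
        }
        // Default behavior: disable the index and let a rebuild catch it
        // up, with queries falling back to data-table scans meanwhile.
        markIndexDisabled();
    }

    private void markIndexDisabled() { /* update index state metadata */ }
}
{code}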



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2016-01-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2221:
--
Attachment: (was: PHOENIX-2221.wip)

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2221-v1.patch, PHOENIX-2221-v2.patch, 
> PHOENIX-2221-v3.patch, PHOENIX-2221-v4.patch, PHOENIX-2221-v5.patch, 
> PHOENIX-2221-v6.patch
>
>
> In one use case, it was deemed better to not accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> the queries doing bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> and when a query runs against the index regions, one can be reasonably sure 
> that the query ran with the most recent data in the data regions. When the 
> index regions are unavailable, the writes to the data table are rejected. 
> Read queries off of the index regions would have deterministic performance 
> (and on the other hand if the index is disabled, then the read queries would 
> have to go to the data regions until the indexes are rebuilt, and the queries 
> would suffer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2016-01-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2221:
--
Attachment: (was: PHOENIX-2221_v9.patch)

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2221-v1.patch, PHOENIX-2221-v2.patch, 
> PHOENIX-2221-v3.patch, PHOENIX-2221-v4.patch, PHOENIX-2221-v5.patch, 
> PHOENIX-2221-v6.patch
>
>
> In one use case, it was deemed better to not accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> the queries doing bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> and when a query runs against the index regions, one can be reasonably sure 
> that the query ran with the most recent data in the data regions. When the 
> index regions are unavailable, the writes to the data table are rejected. 
> Read queries off of the index regions would have deterministic performance 
> (and on the other hand if the index is disabled, then the read queries would 
> have to go to the data regions until the indexes are rebuilt, and the queries 
> would suffer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2016-01-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2221:
--
Attachment: (was: PHOENIX-2221_v7.patch)

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2221-v1.patch, PHOENIX-2221-v2.patch, 
> PHOENIX-2221-v3.patch, PHOENIX-2221-v4.patch, PHOENIX-2221-v5.patch, 
> PHOENIX-2221-v6.patch
>
>
> In one use case, it was deemed better to not accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> the queries doing bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> and when a query runs against the index regions, one can be reasonably sure 
> that the query ran with the most recent data in the data regions. When the 
> index regions are unavailable, the writes to the data table are rejected. 
> Read queries off of the index regions would have deterministic performance 
> (and on the other hand if the index is disabled, then the read queries would 
> have to go to the data regions until the indexes are rebuilt, and the queries 
> would suffer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2016-01-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2221:
--
Attachment: (was: PHOENIX-2221_v8.patch)

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2221-v1.patch, PHOENIX-2221-v2.patch, 
> PHOENIX-2221-v3.patch, PHOENIX-2221-v4.patch, PHOENIX-2221-v5.patch, 
> PHOENIX-2221-v6.patch
>
>
> In one use case, it was deemed better to not accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> the queries doing bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> and when a query runs against the index regions, one can be reasonably sure 
> that the query ran with the most recent data in the data regions. When the 
> index regions are unavailable, the writes to the data table are rejected. 
> Read queries off of the index regions would have deterministic performance 
> (and on the other hand if the index is disabled, then the read queries would 
> have to go to the data regions until the indexes are rebuilt, and the queries 
> would suffer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2016-01-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2221:
--
Attachment: (was: PHOENIX-2221.patch)

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2221-v1.patch, PHOENIX-2221-v2.patch, 
> PHOENIX-2221-v3.patch, PHOENIX-2221-v4.patch, PHOENIX-2221-v5.patch, 
> PHOENIX-2221-v6.patch
>
>
> In one use case, it was deemed better to not accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> the queries doing bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> and when a query runs against the index regions, one can be reasonably sure 
> that the query ran with the most recent data in the data regions. When the 
> index regions are unavailable, the writes to the data table are rejected. 
> Read queries off of the index regions would have deterministic performance 
> (and on the other hand if the index is disabled, then the read queries would 
> have to go to the data regions until the indexes are rebuilt, and the queries 
> would suffer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2016-01-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2221:
--
Attachment: (was: DelegateIndexFailurePolicy.java)

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2221-v1.patch, PHOENIX-2221-v2.patch, 
> PHOENIX-2221-v3.patch, PHOENIX-2221-v4.patch, PHOENIX-2221-v5.patch, 
> PHOENIX-2221-v6.patch, PHOENIX-2221.patch, PHOENIX-2221.wip, 
> PHOENIX-2221_v7.patch, PHOENIX-2221_v8.patch, PHOENIX-2221_v9.patch
>
>
> In one use case, it was deemed better to not accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> the queries doing bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> and when a query runs against the index regions, one can be reasonably sure 
> that the query ran with the most recent data in the data regions. When the 
> index regions are unavailable, the writes to the data table are rejected. 
> Read queries off of the index regions would have deterministic performance 
> (and on the other hand if the index is disabled, then the read queries would 
> have to go to the data regions until the indexes are rebuilt, and the queries 
> would suffer).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2635) Partial index rebuild doesn't delete prior index row

2016-01-27 Thread James Taylor (JIRA)
James Taylor created PHOENIX-2635:
-

 Summary: Partial index rebuild doesn't delete prior index row
 Key: PHOENIX-2635
 URL: https://issues.apache.org/jira/browse/PHOENIX-2635
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


The partial rebuild index feature for mutable secondary indexes does not do the 
correct index maintenance. We currently only insert the new index rows based on 
the current data row values which would not correctly remove the previous index 
row (thus leading to an invalid index). Instead, we should replay the data row 
mutations so that the coprocessors generate the correct deletes and updates.

Also, instead of *every* region running the partial index rebuild, we should 
have each region only replay their own data mutations so that we're not 
duplicating work.
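
To make the maintenance gap concrete, here is a schematic sketch in plain 
HBase client types (not Phoenix's actual IndexMaintainer API; the row key 
layout and column bytes are illustrative):

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class IndexUpdateSketch {
    // A correct index update for a changed row is a *pair* of mutations;
    // rebuilding from the current data row alone yields only the Put.
    static List<Mutation> indexUpdates(byte[] oldValue, byte[] newValue,
                                       byte[] dataRowKey, long ts) {
        List<Mutation> updates = new ArrayList<Mutation>();
        if (oldValue != null) {
            // Without this Delete, the stale index row survives the rebuild.
            updates.add(new Delete(indexRowKey(oldValue, dataRowKey), ts));
        }
        updates.add(new Put(indexRowKey(newValue, dataRowKey), ts)
                .addColumn(Bytes.toBytes("0"), Bytes.toBytes("_0"), ts,
                        new byte[0]));
        return updates;
    }

    // Index rows are keyed by indexed value, then the data row key.
    static byte[] indexRowKey(byte[] indexedValue, byte[] dataRowKey) {
        return Bytes.add(indexedValue, new byte[] { 0 }, dataRowKey);
    }
}
{code}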



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2629) NoClassDef error for BaseDecoder$PBIS on log replay

2016-01-27 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120126#comment-15120126
 ] 

Mujtaba Chohan commented on PHOENIX-2629:
-

[~samarthjain], with the patch:

{code}
java.lang.IllegalStateException: Make sure to call init(hbaseVersion) before 
calling getDecoder()
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:76)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:254)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
{code}

> NoClassDef error for BaseDecoder$PBIS on log replay
> ---
>
> Key: PHOENIX-2629
> URL: https://issues.apache.org/jira/browse/PHOENIX-2629
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Attachments: PHOENIX-2629.patch, PHOENIX-2629_v2.patch
>
>
> HBase version 0.98.13 with Phoenix 4.7.0-RC0
> {code}
> executor.EventHandler - Caught throwable while processing event RS_LOG_REPLAY
> java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/codec/BaseDecoder$PBIS
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:63)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.codec.BaseDecoder$PBIS
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2629) NoClassDef error for BaseDecoder$PBIS on log replay

2016-01-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-2629:
--
Attachment: PHOENIX-2629_v2.patch

Slightly better way of handling the init.

[~mujtabachohan] - could you try this patch and see if the error is gone? 
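
For context, the guard amounts to something like this (illustrative, not the 
exact patch):

{code}
import com.google.common.base.Preconditions;

public class VersionAwareCodecSketch {
    private volatile String hbaseVersion;

    public void init(String hbaseVersion) {
        this.hbaseVersion = hbaseVersion;
    }

    public Object getDecoder() {
        // Matches the checkState failure seen in the stack trace above.
        Preconditions.checkState(hbaseVersion != null,
                "Make sure to call init(hbaseVersion) before calling getDecoder()");
        // ... pick a decoder implementation appropriate to hbaseVersion ...
        return null; // placeholder for the selected decoder
    }
}
{code}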

> NoClassDef error for BaseDecoder$PBIS on log replay
> ---
>
> Key: PHOENIX-2629
> URL: https://issues.apache.org/jira/browse/PHOENIX-2629
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Attachments: PHOENIX-2629.patch, PHOENIX-2629_v2.patch
>
>
> HBase version 0.98.13 with Phoenix 4.7.0-RC0
> {code}
> executor.EventHandler - Caught throwable while processing event RS_LOG_REPLAY
> java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/codec/BaseDecoder$PBIS
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:63)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.codec.BaseDecoder$PBIS
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2629) NoClassDef error for BaseDecoder$PBIS on log replay

2016-01-27 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-2629:
--
Attachment: PHOENIX-2629.patch

Tentative patch. I'm not happy with the way I am currently getting hold of the 
HBase version in IndexManagementUtil and passing it on to the 
IndexedWALEditCodec class. It would be ideal if the configuration object itself 
had the HBase version available.

[~jamestaylor] - please review.
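
For what it's worth, one hedged sketch of reading the runtime HBase version 
via HBase's own VersionInfo (the parsing below is illustrative, not the 
actual patch):

{code}
import org.apache.hadoop.hbase.util.VersionInfo;

public final class HBaseVersionGate {
    private HBaseVersionGate() {}

    /** True if the runtime HBase is 0.98.17 or later (or any 1.x). */
    public static boolean atLeast09817() {
        // VersionInfo.getVersion() returns strings like "0.98.16.1" or
        // "1.1.2"; split on dots and dashes and compare numerically.
        String[] parts = VersionInfo.getVersion().split("[.-]");
        int major = Integer.parseInt(parts[0]);
        int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        int patch = parts.length > 2 ? Integer.parseInt(parts[2]) : 0;
        return major > 0 || minor > 98 || (minor == 98 && patch >= 17);
    }
}
{code}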

> NoClassDef error for BaseDecoder$PBIS on log replay
> ---
>
> Key: PHOENIX-2629
> URL: https://issues.apache.org/jira/browse/PHOENIX-2629
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Attachments: PHOENIX-2629.patch
>
>
> HBase version 0.98.13 with Phoenix 4.7.0-RC0
> {code}
> executor.EventHandler - Caught throwable while processing event RS_LOG_REPLAY
> java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/codec/BaseDecoder$PBIS
>   at 
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:63)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.codec.BaseDecoder$PBIS
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-2542) CSV bulk loading with --schema option is broken

2016-01-27 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran reassigned PHOENIX-2542:


Assignee: maghamravikiran

> CSV bulk loading with --schema option is broken
> ---
>
> Key: PHOENIX-2542
> URL: https://issues.apache.org/jira/browse/PHOENIX-2542
> Project: Phoenix
>  Issue Type: Bug
> Environment: Current master branch / HBase 1.1.2
>Reporter: YoungWoo Kim
>Assignee: maghamravikiran
>
> My bulk load command looks like this:
> {code}
> HADOOP_CLASSPATH=/usr/lib/hbase/hbase-protocol.jar:/etc/hbase/conf/ hadoop 
> jar /usr/lib/phoenix/phoenix-client.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool ${HADOOP_MR_RUNTIME_OPTS} 
> --schema MYSCHEMA --table MYTABLE --input /path/to/id=2015121800/* -d 
> $'\001'
> {code}
> Got errors as following:
> {noformat}
> 15/12/21 11:47:40 INFO mapreduce.Job: Task Id : 
> attempt_1450018293185_0952_m_04_2, Status : FAILED
> Error: java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:170)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:61)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=MYTABLE
>   at com.google.common.base.Throwables.propagate(Throwables.java:156)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper$MapperUpsertListener.errorOnRecord(FormatToKeyValueMapper.java:246)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:92)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:44)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.mapreduce.FormatToKeyValueMapper.map(FormatToKeyValueMapper.java:147)
>   ... 9 more
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=MYTABLE
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:436)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:285)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:249)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:289)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:578)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:566)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:245)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:84)
>   ... 12 more
> {noformat}
> My table MYSCHEMA.MYTABLE exists, but the bulk load tool does not recognize 
> my schema name.
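
For reference, a sketch of the fix direction, assuming the tool should resolve 
the table by its qualified name (the plumbing here is illustrative):

{code}
import org.apache.phoenix.util.SchemaUtil;

public class QualifiedNameSketch {
    public static void main(String[] args) {
        String schemaName = "MYSCHEMA"; // value of --schema
        String tableName  = "MYTABLE";  // value of --table
        // Resolving against the qualified name avoids the
        // TableNotFoundException: tableName=MYTABLE seen above.
        String qualified =
                SchemaUtil.getQualifiedTableName(schemaName, tableName);
        System.out.println(qualified); // MYSCHEMA.MYTABLE
    }
}
{code}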



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2169) Illegal data error on UPSERT SELECT and JOIN with salted tables

2016-01-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15119294#comment-15119294
 ] 

Ankit Singhal commented on PHOENIX-2169:


Yes [~jamestaylor], it is still an issue with 4.7.
Let me check if I can fix this.


> Illegal data error on UPSERT SELECT and JOIN with salted tables
> ---
>
> Key: PHOENIX-2169
> URL: https://issues.apache.org/jira/browse/PHOENIX-2169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
>Reporter: Josh Mahonin
>Assignee: Ankit Singhal
>  Labels: verify
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2169-bug.patch
>
>
> I have an issue where I get periodic failures (~50%) for an UPSERT SELECT 
> query involving a JOIN on salted tables. Unfortunately I haven't been able to 
> create a reproducible test case yet, though I'll keep trying. I believe this 
> same behaviour existed in 4.3.1 as well, so I don't think it's a regression.
> The upsert query itself looks something like this:
> {code}
> UPSERT INTO a(tid, ds, etp, eid, ts, atp, rel, tp, tpid, dt, pro) 
> SELECT c.tid, 
>c.ds, 
>c.etp, 
>c.eid, 
>c.dh, 
>0, 
>c.rel, 
>c.tp, 
>c.tpid, 
>current_time(), 
>1.0 / s.th 
> FROM   e_c c 
> join   e_s s 
> ON s.tid = c.tid 
> ANDs.ds = c.ds 
> ANDs.etp = c.etp 
> ANDs.eid = c.eid 
> WHERE  c.tid = 'FOO';
> {code}
> Without the upsert, the query always returns the right data, but with the 
> upsert, it ends up with failures like:
> Error: ERROR 201 (22000): Illegal data. ERROR 201 (22000): Illegal data. 
> Expected length of at least 109 bytes, but had 19 (state=22000,code=201)
> The explain plan looks like:
> {code}
> UPSERT SELECT
> CLIENT 16-CHUNK PARALLEL 16-WAY RANGE SCAN OVER E_C [0,'FOO']
>   SERVER FILTER BY FIRST KEY ONLY
>   PARALLEL INNER-JOIN TABLE 0
>   CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER E_S
>   DYNAMIC SERVER FILTER BY (C.TID, C.DS, C.ETP, C.EID) IN ((S.TID, S.DS, 
> S.ETP, S.EID))
> {code}
> I'm using SALT_BUCKETS=16 for both tables in the join, and this is a dev 
> environment, so only 1 region server. Note that without salted tables, I have 
> no issue with this query.
> The number of rows in E_C is around 23K, and the number of rows in E_S is 62.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2634) Dynamic service discovery for QueryServer

2016-01-27 Thread YoungWoo Kim (JIRA)
YoungWoo Kim created PHOENIX-2634:
-

 Summary: Dynamic service discovery for QueryServer
 Key: PHOENIX-2634
 URL: https://issues.apache.org/jira/browse/PHOENIX-2634
 Project: Phoenix
  Issue Type: New Feature
Reporter: YoungWoo Kim


It would be nice if Phoenix QueryServer supported a feature like HIVE-7935 for 
HA and load balancing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Announcing phoenix-for-cloudera 4.6.0

2016-01-27 Thread Kumar Palaniappan
Andrew, any updates? It seems HBASE-11544 impacted Phoenix, and CDH 5.5.1
isn't working.

On Sun, Jan 17, 2016 at 11:25 AM, Andrew Purtell 
wrote:

> This looks like something easy to fix up. Maybe I can get to it next week.
>
> > On Jan 15, 2016, at 9:07 PM, Krishna  wrote:
> >
> > On the branch 4.5-HBase-1.0-cdh5, I set the CDH version to 5.5.1 in the
> > pom, and building the package produces the following errors.
> > Repo: https://github.com/chiastic-security/phoenix-for-cloudera
> >
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/util/Tracing.java:[176,82]
> > cannot find symbol
> > [ERROR] symbol:   method getParentId()
> > [ERROR] location: variable span of type org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[129,31]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[159,38]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[162,31]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[337,38]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[339,42]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[359,58]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceMetricSource.java:[99,74]
> > cannot find symbol
> > [ERROR] symbol:   method getParentId()
> > [ERROR] location: variable span of type org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceMetricSource.java:[110,60]
> > incompatible types
> > [ERROR] required: java.util.Map
> > [ERROR] found:java.util.Map
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java:[550,57]
> > <anonymous org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$1> is not
> > abstract and does not override abstract method
> >
> nextRaw(java.util.List<org.apache.hadoop.hbase.Cell>,org.apache.hadoop.hbase.regionserver.ScannerContext)
> > in org.apache.hadoop.hbase.regionserver.RegionScanner
> >
> >
> >> On Fri, Jan 15, 2016 at 6:20 PM, Krishna  wrote:
> >>
> >> Thanks Andrew. Are binaries available for CDH5.5.x?
> >>
> >> On Tue, Nov 3, 2015 at 9:10 AM, Andrew Purtell 
> >> wrote:
> >>
> >>> Today I pushed a new branch '4.6-HBase-1.0-cdh5' and the tag
> >>> 'v4.6.0-cdh5.4.5' (58fcfa6) to
> >>> https://github.com/chiastic-security/phoenix-for-cloudera. This is the
> >>> Phoenix 4.6.0 release, modified to build against CDH 5.4.5 and possibly
> >>> (but not tested) subsequent CDH releases.
> >>>
> >>> If you want release tarballs I built from this, get them here:
> >>>
> >>> Binaries
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-bin.tar.gz
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-bin.tar.gz.asc
> >>> (signature)
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-bin.tar.gz.md5
> >>> (MD5 sum)
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-bin.tar.gz.sha
> >>> (SHA-1 sum)
> >>>
> >>>
> >>> Source
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-src.tar.gz
> >>>
> >>>
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-src.tar.gz.asc
> >>> (signature)
> >>>
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-src.tar.gz.md5
> >>> (MD5 sum)
> >>>
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-src.tar.gz.sha
> >>> (SHA1-sum)
> >>>
> >>>
> >>> Signed with my code signing key D5365CCD.
> >>>
> >>> ​The source and these binaries incorporate changes from the Cloudera
> Labs
> >>> fork of Phoenix (https://github.com/cloudera-labs/phoenix), licensed
> >>> under the ASL v2, Neither the source or binary

[jira] [Created] (PHOENIX-2633) Support tables without primary key

2016-01-27 Thread YoungWoo Kim (JIRA)
YoungWoo Kim created PHOENIX-2633:
-

 Summary: Support tables without primary key
 Key: PHOENIX-2633
 URL: https://issues.apache.org/jira/browse/PHOENIX-2633
 Project: Phoenix
  Issue Type: New Feature
Reporter: YoungWoo Kim


It would be useful to have a table without a primary key. As of now, a primary 
key is mandatory for Phoenix tables, but sometimes users need a table without a 
primary key constraint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release of Apache Phoenix 4.7.0-HBase-1.1 RC0

2016-01-27 Thread YoungWoo Kim
With RC0, I can reproduce an issue: PHOENIX-2542.

Thanks,
Youngwoo

On Sat, Jan 23, 2016 at 6:16 PM, James Taylor 
wrote:

> Hello Everyone,
>
> This is a call for a vote on Apache Phoenix 4.7.0-HBase-1.1 RC0. This is
> the next minor release of Phoenix 4, compatible with Apache HBase 1.1+. The
> release includes both a source-only release and a convenience binary
> release.
>
> This release has feature parity with our other pending 4.7.0 releases and
> includes the following improvements:
> - ACID transaction support (beta) [1]
> - Statistics improvements [2]
> - Performance improvements [3][4][5]
> - 150+ bug fixes
>
> The source tarball, including signatures, digests, etc can be found at:
>
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-1.1-rc0/src/
>
> The binary artifacts can be found at:
>
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-1.1-rc0/bin/
>
> For a complete list of changes, see:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12333998
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/mujtaba.asc
>
> KEYS file available here:
> https://dist.apache.org/repos/dist/release/phoenix/KEYS
>
> The hash and tag to be voted upon:
>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=551cc7db93a8a2c3cc9ff15e7cf9425e311ab125
>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.7.0-HBase-1.1-rc0
>
> Vote will be open until at least, Wed, Jan 27th @ 5pm PST. Please vote:
>
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
>
> Thanks,
> The Apache Phoenix Team
>
> [1] https://phoenix.apache.org/transactions.html
> [2] https://issues.apache.org/jira/browse/PHOENIX-2430
> [3] https://issues.apache.org/jira/browse/PHOENIX-1428
> [4] https://issues.apache.org/jira/browse/PHOENIX-2377
> [5] https://issues.apache.org/jira/browse/PHOENIX-2520
>