[jira] [Updated] (PHOENIX-5164) MutableIndexIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5164:
-
Comment: was deleted

(was: After PHOENIX-4956 this test started failing)

> MutableIndexIT is failing in CDH6 branch
> ---
>
> Key: PHOENIX-5164
> URL: https://issues.apache.org/jira/browse/PHOENIX-5164
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> Tests are failing under 
> MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5164) MutableIndexIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5164:
-
Description: 
The testUpdateNonIndexedColumn test case is failing under 
MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*

{code:java}
java.lang.AssertionError: Expected equality for V2, but 'v2_2'!=null
(...)
at 
org.apache.phoenix.util.IndexScrutiny.scrutinizeIndex(IndexScrutiny.java:156)
at 
org.apache.phoenix.end2end.index.MutableIndexIT.testUpdateNonIndexedColumn(MutableIndexIT.java:895)
{code}


  was:
Tests are failing under 
MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*




> MutableIndexIT is failing in CDH6 branch
> ---
>
> Key: PHOENIX-5164
> URL: https://issues.apache.org/jira/browse/PHOENIX-5164
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> The testUpdateNonIndexedColumn test case is failing under 
> MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*
> {code:java}
> java.lang.AssertionError: Expected equality for V2, but 'v2_2'!=null
> (...)
>   at 
> org.apache.phoenix.util.IndexScrutiny.scrutinizeIndex(IndexScrutiny.java:156)
>   at 
> org.apache.phoenix.end2end.index.MutableIndexIT.testUpdateNonIndexedColumn(MutableIndexIT.java:895)
> {code}





[jira] [Updated] (PHOENIX-5164) MutableIndexIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5164:
-
Description: 
The {{testUpdateNonIndexedColumn}} test case is failing under 
{{MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*}}

{code:java}
java.lang.AssertionError: Expected equality for V2, but 'v2_2'!=null
(...)
at 
org.apache.phoenix.util.IndexScrutiny.scrutinizeIndex(IndexScrutiny.java:156)
at 
org.apache.phoenix.end2end.index.MutableIndexIT.testUpdateNonIndexedColumn(MutableIndexIT.java:895)
{code}


  was:
The testUpdateNonIndexedColumn test case is failing under 
MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*

{code:java}
java.lang.AssertionError: Expected equality for V2, but 'v2_2'!=null
(...)
at 
org.apache.phoenix.util.IndexScrutiny.scrutinizeIndex(IndexScrutiny.java:156)
at 
org.apache.phoenix.end2end.index.MutableIndexIT.testUpdateNonIndexedColumn(MutableIndexIT.java:895)
{code}



> MutableIndexIT is failing in CDH6 branch
> ---
>
> Key: PHOENIX-5164
> URL: https://issues.apache.org/jira/browse/PHOENIX-5164
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> The {{testUpdateNonIndexedColumn}} test case is failing under 
> {{MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*}}
> {code:java}
> java.lang.AssertionError: Expected equality for V2, but 'v2_2'!=null
> (...)
>   at 
> org.apache.phoenix.util.IndexScrutiny.scrutinizeIndex(IndexScrutiny.java:156)
>   at 
> org.apache.phoenix.end2end.index.MutableIndexIT.testUpdateNonIndexedColumn(MutableIndexIT.java:895)
> {code}





[jira] [Created] (PHOENIX-5165) DerivedTableIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5165:


 Summary: DerivedTableIT is failing in CDH6 branch
 Key: PHOENIX-5165
 URL: https://issues.apache.org/jira/browse/PHOENIX-5165
 Project: Phoenix
  Issue Type: Task
Reporter: Pedro Boado
 Fix For: 5.1.0-cdh


DerivedTableIT is failing when setting up the minicluster for the test





[jira] [Updated] (PHOENIX-5163) MutableIndexSplitForwardScanIT and MutableIndexSplitBackwardScanIT are failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5163:
-
Description: 
Both tests are failing in [localIndex=true, multiTenant=*] test runs


{code:java}
org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
(XCL08): Cache of region boundaries are out of date.

at 
org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:380)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:183)
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:167)
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:134)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.fetchNextBatch(RoundRobinResultIterator.java:255)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:174)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:841)
at 
org.apache.phoenix.end2end.index.MutableIndexSplitIT.splitDuringScan(MutableIndexSplitIT.java:153)
at 
org.apache.phoenix.end2end.index.MutableIndexSplitIT.testSplitDuringIndexScan(MutableIndexSplitIT.java:88)
at 
org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT.testSplitDuringIndexScan(MutableIndexSplitForwardScanIT.java:30)
{code}


  was:Both tests are failing in [localIndex=true, multiTenant=*] test runs


> MutableIndexSplitForwardScanIT and MutableIndexSplitBackwardScanIT are 
> failing in CDH6 branch
> -
>
> Key: PHOENIX-5163
> URL: https://issues.apache.org/jira/browse/PHOENIX-5163
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> Both tests are failing in [localIndex=true, multiTenant=*] test runs
> {code:java}
> org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:380)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:183)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:167)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:134)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.fetchNextBatch(RoundRobinResultIterator.java:255)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:174)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:841)
>   at 
> org.apache.phoenix.end2end.index.MutableIndexSplitIT.splitDuringScan(MutableIndexSplitIT.java:153)
>   at 
> org.apache.phoenix.end2end.index.MutableIndexSplitIT.testSplitDuringIndexScan(MutableIndexSplitIT.java:88)
>   at 
> org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT.testSplitDuringIndexScan(MutableIndexSplitForwardScanIT.java:30)
> {code}





[jira] [Updated] (PHOENIX-5164) MutableIndexIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5164:
-
Description: 
Tests are failing under 
MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*



  was:
Tests are failing under 
MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*




> MutableIndexIT is failing in CDH6 branch
> ---
>
> Key: PHOENIX-5164
> URL: https://issues.apache.org/jira/browse/PHOENIX-5164
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> Tests are failing under 
> MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*





[jira] [Created] (PHOENIX-5164) MutableIndexIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5164:


 Summary: MutableIndexIT is failing in CDH6 branch
 Key: PHOENIX-5164
 URL: https://issues.apache.org/jira/browse/PHOENIX-5164
 Project: Phoenix
  Issue Type: Task
Reporter: Pedro Boado
 Fix For: 5.1.0-cdh


Tests are failing under 
MutableIndexIT_localIndex=*,transactionProvider=null,columnEncoded=*







[jira] [Created] (PHOENIX-5163) MutableIndexSplitForwardScanIT and MutableIndexSplitBackwardScanIT are failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5163:


 Summary: MutableIndexSplitForwardScanIT and 
MutableIndexSplitBackwardScanIT are failing in CDH6 branch
 Key: PHOENIX-5163
 URL: https://issues.apache.org/jira/browse/PHOENIX-5163
 Project: Phoenix
  Issue Type: Task
Reporter: Pedro Boado
 Fix For: 5.1.0-cdh


Both tests are failing in [localIndex=true, multiTenant=*] test runs





[jira] [Updated] (PHOENIX-5161) ChangePermissionsIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5161:
-
Description: 
{{ChangePermissionsIT.testReadPermsOnTableIndexAndView}} is failing in the cdh6 
branch; it keeps looping while trying to shut down the minicluster.


  was:
{{ChangePermissionsIT.* [isNamespaceMapped=true]}} is failing in the cdh6 branch
{code:java}

{code}



> ChangePermissionsIT is failing in CDH6 branch
> -
>
> Key: PHOENIX-5161
> URL: https://issues.apache.org/jira/browse/PHOENIX-5161
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> {{ChangePermissionsIT.testReadPermsOnTableIndexAndView}} is failing in the 
> cdh6 branch; it keeps looping while trying to shut down the minicluster.





[jira] [Updated] (PHOENIX-5158) TestCoveredColumnIndexCodec is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5158:
-
Affects Version/s: (was: 5.1.0-cdh)
Fix Version/s: 5.1.0-cdh

> TestCoveredColumnIndexCodec is failing in CDH6 branch
> -
>
> Key: PHOENIX-5158
> URL: https://issues.apache.org/jira/browse/PHOENIX-5158
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> {{TestCoveredColumnIndexCodec.testGeneratedIndexUpdates}} is failing in the 
> cdh6 branch
> {code:java}
> java.lang.AssertionError: Had some index updates, though it should have been 
> covered by the delete
> (...)
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.ensureNoUpdatesWhenCoveredByDelete(TestCoveredColumnIndexCodec.java:243)
>   at 
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.testGeneratedIndexUpdates(TestCoveredColumnIndexCodec.java:221)
> {code}





[jira] [Created] (PHOENIX-5160) ConcurrentMutationsIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5160:


 Summary: ConcurrentMutationsIT is failing in CDH6 branch
 Key: PHOENIX-5160
 URL: https://issues.apache.org/jira/browse/PHOENIX-5160
 Project: Phoenix
  Issue Type: Task
Reporter: Pedro Boado
 Fix For: 5.1.0-cdh


{{ConcurrentMutationsIT.testDeleteRowAndUpsertValueAtSameTS1}} is failing in 
the cdh6 branch
{code:java}
java.lang.AssertionError: Expected to find PK in data table: ('aa','aa')
(...)
at 
org.apache.phoenix.util.IndexScrutiny.scrutinizeIndex(IndexScrutiny.java:150)
at 
org.apache.phoenix.end2end.ConcurrentMutationsIT.testDeleteRowAndUpsertValueAtSameTS1(ConcurrentMutationsIT.java:650)
{code}






[jira] [Created] (PHOENIX-5162) HashJoinMoreIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5162:


 Summary: HashJoinMoreIT is failing in CDH6 branch
 Key: PHOENIX-5162
 URL: https://issues.apache.org/jira/browse/PHOENIX-5162
 Project: Phoenix
  Issue Type: Task
Reporter: Pedro Boado
 Fix For: 5.1.0-cdh


{{HashJoinMoreIT.testBug2961}} is failing in the cdh6 branch





[jira] [Created] (PHOENIX-5161) ChangePermissionsIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5161:


 Summary: ChangePermissionsIT is failing in CDH6 branch
 Key: PHOENIX-5161
 URL: https://issues.apache.org/jira/browse/PHOENIX-5161
 Project: Phoenix
  Issue Type: Task
Reporter: Pedro Boado
 Fix For: 5.1.0-cdh


{{ChangePermissionsIT.* [isNamespaceMapped=true]}} is failing in the cdh6 branch
{code:java}

{code}






[jira] [Updated] (PHOENIX-5159) TableDDLPermissionsIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5159:
-
Affects Version/s: (was: 5.1.0-cdh)
Fix Version/s: 5.1.0-cdh

> TableDDLPermissionsIT is failing in CDH6 branch
> ---
>
> Key: PHOENIX-5159
> URL: https://issues.apache.org/jira/browse/PHOENIX-5159
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> {{TableDDLPermissionsIT.testAutomaticGrantWithIndexAndView[isNamespaceMapped=true]}}
> is failing in the cdh6 branch; it keeps looping.





[jira] [Created] (PHOENIX-5159) TableDDLPermissionsIT is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5159:


 Summary: TableDDLPermissionsIT is failing in CDH6 branch
 Key: PHOENIX-5159
 URL: https://issues.apache.org/jira/browse/PHOENIX-5159
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.1.0-cdh
Reporter: Pedro Boado


{{TableDDLPermissionsIT.testAutomaticGrantWithIndexAndView[isNamespaceMapped=true]}}
is failing in the cdh6 branch; it keeps looping.





[jira] [Created] (PHOENIX-5158) TestCoveredColumnIndexCodec is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5158:


 Summary: TestCoveredColumnIndexCodec is failing in CDH6 branch
 Key: PHOENIX-5158
 URL: https://issues.apache.org/jira/browse/PHOENIX-5158
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.1.0
Reporter: Pedro Boado


{{TestCoveredColumnIndexCodec.testGeneratedIndexUpdates}} is failing in the 
cdh6 branch
{code:java}
java.lang.AssertionError: Had some index updates, though it should have been 
covered by the delete
(...)
org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.ensureNoUpdatesWhenCoveredByDelete(TestCoveredColumnIndexCodec.java:243)
at 
org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.testGeneratedIndexUpdates(TestCoveredColumnIndexCodec.java:221)
{code}





[jira] [Updated] (PHOENIX-5158) TestCoveredColumnIndexCodec is failing in CDH6 branch

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5158:
-
Affects Version/s: (was: 5.1.0)
   5.1.0-cdh

> TestCoveredColumnIndexCodec is failing in CDH6 branch
> -
>
> Key: PHOENIX-5158
> URL: https://issues.apache.org/jira/browse/PHOENIX-5158
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.0-cdh
>Reporter: Pedro Boado
>Priority: Major
>
> {{TestCoveredColumnIndexCodec.testGeneratedIndexUpdates}} is failing in the 
> cdh6 branch
> {code:java}
> java.lang.AssertionError: Had some index updates, though it should have been 
> covered by the delete
> (...)
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.ensureNoUpdatesWhenCoveredByDelete(TestCoveredColumnIndexCodec.java:243)
>   at 
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.testGeneratedIndexUpdates(TestCoveredColumnIndexCodec.java:221)
> {code}





[jira] [Updated] (PHOENIX-4956) Distribution of Apache Phoenix 5.1 for CDH 6.1

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4956:
-
Attachment: PHOENIX-4956-FINAL.patch

> Distribution of Apache Phoenix 5.1 for CDH 6.1
> --
>
> Key: PHOENIX-4956
> URL: https://issues.apache.org/jira/browse/PHOENIX-4956
> Project: Phoenix
>  Issue Type: Task
>Reporter: Curtis Howard
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: PHOENIX-4956-FINAL.patch, PHOENIX-4956.patch
>
>
> Integration of Phoenix 5.x using CDH 6 dependencies





[jira] [Updated] (PHOENIX-4956) Distribution of Apache Phoenix 5.1 for CDH 6.1

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4956:
-
Attachment: (was: PHOENIX-4956-v2.patch)

> Distribution of Apache Phoenix 5.1 for CDH 6.1
> --
>
> Key: PHOENIX-4956
> URL: https://issues.apache.org/jira/browse/PHOENIX-4956
> Project: Phoenix
>  Issue Type: Task
>Reporter: Curtis Howard
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: PHOENIX-4956.patch
>
>
> Integration of Phoenix 5.x using CDH 6 dependencies





[jira] [Updated] (PHOENIX-4956) Distribution of Apache Phoenix 5.1 for CDH 6.1

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4956:
-
Summary: Distribution of Apache Phoenix 5.1 for CDH 6.1  (was: Distribution 
of Apache Phoenix 5.0 for CDH 6.0)

> Distribution of Apache Phoenix 5.1 for CDH 6.1
> --
>
> Key: PHOENIX-4956
> URL: https://issues.apache.org/jira/browse/PHOENIX-4956
> Project: Phoenix
>  Issue Type: Task
>Reporter: Curtis Howard
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: PHOENIX-4956-v2.patch, PHOENIX-4956.patch
>
>
> Integration of Phoenix 5.x using CDH 6 dependencies





[jira] [Updated] (PHOENIX-4956) Distribution of Apache Phoenix 5.0 for CDH 6.0

2019-02-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4956:
-
Attachment: PHOENIX-4956-v2.patch

> Distribution of Apache Phoenix 5.0 for CDH 6.0
> --
>
> Key: PHOENIX-4956
> URL: https://issues.apache.org/jira/browse/PHOENIX-4956
> Project: Phoenix
>  Issue Type: Task
>Reporter: Curtis Howard
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: PHOENIX-4956-v2.patch, PHOENIX-4956.patch
>
>
> Integration of Phoenix 5.x using CDH 6 dependencies





[jira] [Resolved] (PHOENIX-4450) When I use the phoenix query below, my project throws this error. Can anyone help me?

2019-02-01 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4450.
--
Resolution: Not A Problem
  Assignee: Pedro Boado

Not enough evidence of a problem was provided. It looks like a development issue.

> When I use the phoenix query below, my project throws this error. Can 
> anyone help me?
> 
>
> Key: PHOENIX-4450
> URL: https://issues.apache.org/jira/browse/PHOENIX-4450
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: David New
>Assignee: Pedro Boado
>Priority: Critical
>  Labels: jdbc, phoenix, thin
>
>  
> {code:java}
> Class.forName("org.apache.phoenix.queryserver.client.Driver");
> Connection conn = DriverManager.getConnection(
>     "jdbc:phoenix:thin:url=http://192.168.0.1:8765;serialization=PROTOBUF");
> String sqlerr = "SELECT TO_CHAR(TO_DATE(SUCCESS_TIME,?),'yyyy-MM-dd') as success, "
>     + " COUNT(DISTINCT USER_ID) recharge_rs, "
>     + " COUNT(ID) recharge_rc, "
>     + " SUM(TO_NUMBER(ACTUAL_MONEY)) recharge_money "
>     + " FROM RECHARGE "
>     + " WHERE STATUS = 'success' AND RECHARGE_WAY != 'admin' "
>     + " GROUP BY TO_CHAR(TO_DATE(SUCCESS_TIME,?),'yyyy-MM-dd') ";
> PreparedStatement pstmt = conn.prepareStatement(sqlerr);
> pstmt.setString(1, "yyyy-MM-dd");
> pstmt.setString(2, "yyyy-MM-dd");
> ResultSet rs = pstmt.executeQuery();
> while (rs.next()) {
>     System.out.println(rs.getString("success"));
> }
> {code}
> 
> {code:java}
> AvaticaClientRuntimeException: Remote driver error: RuntimeException: 
> java.sql.SQLException: ERROR 2004 (INT05): Parameter value unbound. Parameter 
> at index 1 is unbound -> SQLException: ERROR 2004 (INT05): Parameter value 
> unbound. Parameter at index 1 is unbound. Error -1 (0) null
> java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 (INT05): 
> Parameter value unbound. Parameter at index 1 is unbound
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:683)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:880)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:254)
>   at 
> org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1032)
>   at 
> org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1002)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.sql.SQLException: ERROR 2004 (INT05): Parameter value 
> unbound. Parameter at index 1 is unbound
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:483)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.jdbc.PhoenixParameterMetaData.getParam(PhoenixParameterMetaData.java:89)
>   at 
> org.apache.phoenix.jdbc.PhoenixParameterMetaData.isSigned(PhoenixParameterMetaData.java:138)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.parameters(JdbcMeta.java:270)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.signature(JdbcMeta.java:282)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:856)
>   ... 15 more
>   at 
> 

[jira] [Updated] (PHOENIX-5057) LocalIndexSplitMergeIT and MutableIndexIT are failing in cdh branches

2018-12-05 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5057:
-
Description: 
LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. For 
some reason it looks like local indexes are not splitting when the table is 
split. The HBase-1.2 branch seems to be OK.



{code}
2018-12-03 21:02:03,007 DEBUG [phoenix-2-thread-3] 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas(205): Scan with 
primary region returns org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1108 (XCL08): Cache of 
region boundaries are out of date. tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:175)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.preScannerOpen(BaseScannerRegionObserver.java:203)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$50.call(RegionCoprocessorHost.java:1300)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1722)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1295)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
Caused by: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 
1108 (XCL08): Cache of region boundaries are out of date. 
tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:174)
... 12 more
{code}


MutableIndexIT is still to be investigated; it also seems to be related to the 
local index implementation after the same PHOENIX-4839.



  was:
LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. For 
some reason it looks like local indexes are not splitting when the table is 
split. The HBase-1.2 branch seems to be OK.

{code}
2018-12-03 21:02:03,007 DEBUG [phoenix-2-thread-3] 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas(205): Scan with 
primary region returns org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1108 (XCL08): Cache of 
region boundaries are out of date. tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:175)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.preScannerOpen(BaseScannerRegionObserver.java:203)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$50.call(RegionCoprocessorHost.java:1300)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1722)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1295)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
Caused by: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 
1108 (XCL08): Cache of region boundaries are out of date. 
tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:174)
... 12 more
{code}


MutableIndexIT is still to be investigated; it also seems to be related to the 
local index implementation after the same PHOENIX-4839.




> 

[jira] [Updated] (PHOENIX-5057) LocalIndexSplitMergeIT and MutableIndexIT are failing in cdh branches

2018-12-03 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5057:
-
Description: 
LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. For 
some reason it looks like local indexes are not splitting when the table is 
split. The HBase-1.2 branch seems to be OK.

{code}
2018-12-03 21:02:03,007 DEBUG [phoenix-2-thread-3] 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas(205): Scan with 
primary region returns org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1108 (XCL08): Cache of 
region boundaries are out of date. tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:175)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.preScannerOpen(BaseScannerRegionObserver.java:203)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$50.call(RegionCoprocessorHost.java:1300)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1722)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1295)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
Caused by: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 
1108 (XCL08): Cache of region boundaries are out of date. 
tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:174)
... 12 more
{code}


MutableIndexIT is still to be investigated; it also seems to be related to the 
local index implementation after the same PHOENIX-4839.
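The ERROR 1108 (XCL08) in the trace above is the server rejecting a scan whose key range no longer fits the region serving it, which forces the client to refresh its cached region boundaries and retry. A minimal sketch of that guard, in plain Python with illustrative names (Phoenix's real check lives in BaseScannerRegionObserver.throwIfScanOutOfRegion and works on HBase start/stop row keys):

```python
# Sketch of a stale region-boundary check: a scan planned against cached
# boundaries that no longer match the serving region is rejected so the
# client refreshes its cache. Names are illustrative, not Phoenix's API.

class StaleRegionBoundaryCacheError(Exception):
    """Raised when a scan's key range falls outside the serving region."""

def check_scan_in_region(scan_start, scan_stop, region_start, region_stop):
    # Empty boundaries mean "unbounded" on that side, as in HBase.
    if region_start and scan_start < region_start:
        raise StaleRegionBoundaryCacheError("scan starts before region")
    if region_stop and scan_stop > region_stop:
        raise StaleRegionBoundaryCacheError("scan ends after region")

# A scan planned before the region split at key b"m" now spills past the
# new boundary, so the server raises the stale-cache error.
check_scan_in_region(b"a", b"k", b"", b"m")      # within region: no error
try:
    check_scan_in_region(b"a", b"z", b"", b"m")  # spills past split point
    stale = False
except StaleRegionBoundaryCacheError:
    stale = True
```

On a retry after refreshing the cache, the client would issue two narrower scans, one per daughter region, each passing this check.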

  was:
LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. For 
some reason it looks like LocalIndexes are not splitting when the table is 
split. The HBase-1.2 branch seems to be OK.

MutableIndexIT is still to be investigated; it also seems to be related to the 
local index implementation after the same PHOENIX-4839.


> LocalIndexSplitMergeIT and MutableIndexIT are failing in cdh branches
> -
>
> Key: PHOENIX-5057
> URL: https://issues.apache.org/jira/browse/PHOENIX-5057
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1
>Reporter: Pedro Boado
>Priority: Blocker
>  Labels: cdh
>
> LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. 
> For some reason it looks like LocalIndexes are not splitting when the table 
> is split. The HBase-1.2 branch seems to be OK.
> {code}
> 2018-12-03 21:02:03,007 DEBUG [phoenix-2-thread-3] 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas(205): Scan with 
> primary region returns org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1108 (XCL08): Cache of 
> region boundaries are out of date. tableName=T01.T02
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:175)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.preScannerOpen(BaseScannerRegionObserver.java:203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$50.call(RegionCoprocessorHost.java:1300)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1722)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1295)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
>   at 
> 

[jira] [Updated] (PHOENIX-5057) LocalIndexSplitMergeIT and MutableIndexIT are failing in cdh branches

2018-12-03 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5057:
-
Description: 
LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. For 
some reason it looks like LocalIndexes are not splitting when the table is 
split. The HBase-1.2 branch seems to be OK.

{code}
2018-12-03 21:02:03,007 DEBUG [phoenix-2-thread-3] 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas(205): Scan with 
primary region returns org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1108 (XCL08): Cache of 
region boundaries are out of date. tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:175)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.preScannerOpen(BaseScannerRegionObserver.java:203)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$50.call(RegionCoprocessorHost.java:1300)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1722)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1295)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
Caused by: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 
1108 (XCL08): Cache of region boundaries are out of date. 
tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:174)
... 12 more
{code}


MutableIndexIT is still to be investigated; it also seems to be related to the 
local index implementation after the same PHOENIX-4839.



  was:
LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. For 
some reason it looks like LocalIndexes are not splitting when the table is 
split. The HBase-1.2 branch seems to be OK.

{code}
2018-12-03 21:02:03,007 DEBUG [phoenix-2-thread-3] 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas(205): Scan with 
primary region returns org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1108 (XCL08): Cache of 
region boundaries are out of date. tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:175)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.preScannerOpen(BaseScannerRegionObserver.java:203)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$50.call(RegionCoprocessorHost.java:1300)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1722)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1295)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
Caused by: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 
1108 (XCL08): Cache of region boundaries are out of date. 
tableName=T01.T02
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:174)
... 12 more
{code}


MutableIndexIT is still to be investigated; it also seems to be related to the 
local index implementation after the same PHOENIX-4839.


> 

[jira] [Updated] (PHOENIX-5057) LocalIndexSplitMergeIT and MutableIndexIT are failing in cdh branches

2018-12-03 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5057:
-
Description: 
LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. For 
some reason it looks like LocalIndexes are not splitting when the table is 
split. The HBase-1.2 branch seems to be OK.

MutableIndexIT is still to be investigated; it also seems to be related to the 
local index implementation after the same PHOENIX-4839.

  was:LocalIndexSplitMergeIT started failing in CDH branches after 
PHOENIX-4839. For some reason it looks like LocalIndexes are not splitting when 
the table is split. The HBase-1.2 branch seems to be OK.

Summary: LocalIndexSplitMergeIT and MutableIndexIT are failing in cdh 
branches  (was: LocalIndexSplitMergeIT is failing in cdh branches)

> LocalIndexSplitMergeIT and MutableIndexIT are failing in cdh branches
> -
>
> Key: PHOENIX-5057
> URL: https://issues.apache.org/jira/browse/PHOENIX-5057
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1
>Reporter: Pedro Boado
>Priority: Blocker
>  Labels: cdh
>
> LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. 
> For some reason it looks like LocalIndexes are not splitting when the table 
> is split. The HBase-1.2 branch seems to be OK.
> MutableIndexIT is still to be investigated; it also seems to be related to 
> the local index implementation after the same PHOENIX-4839.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5056) Ignore failing IT in 4.14-cdh branches

2018-12-03 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-5056.
--
   Resolution: Done
 Assignee: Pedro Boado
Fix Version/s: 4.14.1

Tests ignored; the build completes OK now. Created a blocking JIRA.

> Ignore failing IT in 4.14-cdh branches
> --
>
> Key: PHOENIX-5056
> URL: https://issues.apache.org/jira/browse/PHOENIX-5056
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Blocker
>  Labels: cdh
> Fix For: 4.14.1
>
>
> A couple of tests are failing on 4.14-cdh branches. This ticket is for 
> bringing the branch back into a building state and also acts as an 
> umbrella to track further work on re-enabling these tests.
> Two tests are currently failing (both related to LocalIndexes not working 
> correctly after PHOENIX-4830). This is not happening in Apache branches; it 
> needs further investigation:
> phoenix-core/src/it/java/org/apache/phoenix/end2end/LocalIndexSplitMergeIT.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5057) LocalIndexSplitMergeIT is failing in cdh branches

2018-12-03 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5057:


 Summary: LocalIndexSplitMergeIT is failing in cdh branches
 Key: PHOENIX-5057
 URL: https://issues.apache.org/jira/browse/PHOENIX-5057
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 4.15.0
Reporter: Pedro Boado


LocalIndexSplitMergeIT started failing in CDH branches after PHOENIX-4839. For 
some reason it looks like LocalIndexes are not splitting when the table is 
split. The HBase-1.2 branch seems to be OK.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5056) Ignore failing IT in 4.14-cdh branches

2018-12-03 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5056:


 Summary: Ignore failing IT in 4.14-cdh branches
 Key: PHOENIX-5056
 URL: https://issues.apache.org/jira/browse/PHOENIX-5056
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Pedro Boado


A couple of tests are failing on 4.14-cdh branches. This ticket is for bringing 
the branch back into a building state and also acts as an umbrella to track 
further work on re-enabling these tests.

Two tests are currently failing (both related to LocalIndexes not working 
correctly after PHOENIX-4830). This is not happening in Apache branches; it 
needs further investigation:

phoenix-core/src/it/java/org/apache/phoenix/end2end/LocalIndexSplitMergeIT.java
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5053) Ignore failing IT in 4.x-cdh branch

2018-11-30 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5053:
-
Summary: Ignore failing IT in 4.x-cdh branch  (was: Disable failing IT in 
4.x-cdh branch)

> Ignore failing IT in 4.x-cdh branch
> ---
>
> Key: PHOENIX-5053
> URL: https://issues.apache.org/jira/browse/PHOENIX-5053
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
>  Labels: cdh
>
> A few tests are failing on the 4.x-cdh branch. This ticket is for bringing 
> the branch back into a building state and also acts as an umbrella to track 
> further work on re-enabling these tests. 
> Also, IT parallelism has been reduced to 4. 
> The four tests below fail when building via Maven. Some of them work fine 
> when run independently. 
> {code:java}
> phoenix-core/src/it/java/org/apache/phoenix/end2end/LocalIndexSplitMergeIT.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5053) Disable failing IT in 4.x-cdh branch

2018-11-30 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5053:


 Summary: Disable failing IT in 4.x-cdh branch
 Key: PHOENIX-5053
 URL: https://issues.apache.org/jira/browse/PHOENIX-5053
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: Pedro Boado
Assignee: Pedro Boado


A few tests are failing on the 4.x-cdh branch. This ticket is for bringing the 
branch back into a building state and also acts as an umbrella to track further 
work on re-enabling these tests. 

Also, IT parallelism has been reduced to 4. 

The four tests below fail when building via Maven. Some of them work fine 
when run independently. 

{code:java}
phoenix-core/src/it/java/org/apache/phoenix/end2end/LocalIndexSplitMergeIT.java
phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
{code}






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5049) Try to make tests ignored after PHOENIX-4981 pass again

2018-11-28 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5049:
-
Summary: Try to make tests ignored after PHOENIX-4981 pass again  (was: Try 
to make ignored tests pass after PHOENIX-4981)

> Try to make tests ignored after PHOENIX-4981 pass again
> ---
>
> Key: PHOENIX-5049
> URL: https://issues.apache.org/jira/browse/PHOENIX-5049
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Pedro Boado
>Priority: Major
>  Labels: cdh
> Fix For: 4.15.0
>
>
> After porting PHOENIX-4981 to 4.x-cdh, an addendum commit was made to 
> backport changes to Spark 1.6 (the version shipped with CDH 5.15). After the 
> changes were made, a few tests started failing and were ignored in the commit. 
> {code}
> AggregateIT.testExpressionInGroupBy
> AggregateIT.testGroupByCase
> AggregateIT.testGroupByDescColumnWithNullsLastBug3452
> {code}
> The following three test cases use syntax not available in Spark 1.6 and 
> hence will be permanently ignored.
> {code}
> OrderByIT.testDescMultiOrderByExpr
> OrderByIT.testNullsLastWithDesc
> OrderByIT.testOrderByReverseOptimizationWithNullsLast 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5049) Try to make ignored tests pass after PHOENIX-4981

2018-11-28 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5049:


 Summary: Try to make ignored tests pass after PHOENIX-4981
 Key: PHOENIX-5049
 URL: https://issues.apache.org/jira/browse/PHOENIX-5049
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: Pedro Boado
 Fix For: 4.15.0


After porting PHOENIX-4981 to 4.x-cdh, an addendum commit was made to backport 
changes to Spark 1.6 (the version shipped with CDH 5.15). After the changes 
were made, a few tests started failing and were ignored in the commit. 

{code}
AggregateIT.testExpressionInGroupBy
AggregateIT.testGroupByCase
AggregateIT.testGroupByDescColumnWithNullsLastBug3452
{code}

The following three test cases use syntax not available in Spark 1.6 and hence 
will be permanently ignored.
{code}
OrderByIT.testDescMultiOrderByExpr
OrderByIT.testNullsLastWithDesc
OrderByIT.testOrderByReverseOptimizationWithNullsLast 
{code}
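The permanently ignored OrderByIT cases exercise descending ordering with `NULLS LAST`, which Spark 1.6's SQL dialect does not accept. As a plain-Python illustration of the semantics those tests assert (not Phoenix code):

```python
# Sketch of "ORDER BY col DESC NULLS LAST" semantics: non-null values sort
# descending, and NULL (None) rows always sort after all non-null rows.

def order_by_desc_nulls_last(values):
    # Sort the non-null values descending, then append the nulls.
    non_null = sorted((v for v in values if v is not None), reverse=True)
    nulls = [v for v in values if v is None]
    return non_null + nulls

result = order_by_desc_nulls_last([3, None, 1, 2, None])
# result == [3, 2, 1, None, None]
```

In SQL this corresponds to something like `SELECT col FROM t ORDER BY col DESC NULLS LAST`; Spark 1.6 rejects the `NULLS LAST` clause at parse time, which is why those test cases cannot pass on that version.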





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4784) Downloads page on website should list xsums/sigs for "active" releases

2018-11-28 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4784.
--
Resolution: Fixed

Seems OK now; the new version 4.14.1 has been pushed since then with no further 
issues. 

> Downloads page on website should list xsums/sigs for "active" releases
> --
>
> Key: PHOENIX-4784
> URL: https://issues.apache.org/jira/browse/PHOENIX-4784
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Pedro Boado
>Priority: Blocker
>
> I was made aware that our downloads page only links to the closer.cgi script 
> and does not proactively point users towards the xsums+sigs hosted (only) on 
> dist.a.o.
> We need to update our website so that we can be confident in saying that we 
> showed users how to validate the releases they download from 
> third-party mirrors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5047) can't upgrade phoenix from 4.13 to 4.14.1

2018-11-28 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5047:
-
Environment: 
custom build (CB) of 4.13 on top of cdh 5.13.0 , upgrading to CB of 4.14.1 on 
top of hbase cdh 5.14.2 ( 


  was:
4.13 on top of cdh 5.13.0
upgrading to 4.14.1 on top of hbase cdh 5.14.2



> can't upgrade phoenix from 4.13 to 4.14.1
> -
>
> Key: PHOENIX-5047
> URL: https://issues.apache.org/jira/browse/PHOENIX-5047
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: custom build (CB) of 4.13 on top of cdh 5.13.0 , 
> upgrading to CB of 4.14.1 on top of hbase cdh 5.14.2 ( 
>Reporter: Ievgen Nekrashevych
>Priority: Major
>  Labels: cdh
>
> The upgrade scenario is as follows:
> install Phoenix 4.13 on top of HBase 1.2.0-cdh5.13.0 and run a simple script 
> to make sure some data is there:
> {code}
> -- system tables are created on the first connection
> create schema if not exists TS
> create table if not exists TS.TEST (STR varchar not null,INTCOL bigint not 
> null, STARTTIME integer, DUMMY integer default 0 CONSTRAINT PK PRIMARY KEY 
> (STR, INTCOL))
> create local index if not exists "TEST_INDEX" on TS.TEST (STR,STARTTIME)
> upsert into TS.TEST(STR,INTCOL,STARTTIME,DUMMY) values ('TEST',4,1,3)
> -- make sure some data is there
> select * from TS.TEST
> {code}
> Then I shut down everything (queryserver, regionserver, master and 
> zookeeper), install HBase 1.2.0-cdh5.14.2, replace the Phoenix libs with 
> 4.14.1 and start the servers. When I try to connect to the server and run:
> {code}
> select * from TS.TEST
> {code}
> I get:
> {code}
> 2018-11-28 07:53:03,088 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] 
> coprocessor.MetaDataEndpointImpl: Add column failed: 
> org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2368)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addColumn(MetaDataEndpointImpl.java:3242)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16402)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7931)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1969)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1951)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2361)
> ... 10 more
> {code}
> In subsequent calls I get the same exception with a slightly different 
> message saying that I have different versions of client and server jars (with 
> ArrayIndexOutOfBoundsException as the cause, and only 
> ArrayIndexOutOfBoundsException in the server logs), which is not true.
> Serverside exception:
> {code}
> 2018-11-28 08:45:00,611 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] 
> coprocessor.MetaDataEndpointImpl: loading system catalog table inside 
> getVersion failed
> java.lang.ArrayIndexOutOfBoundsException: 63
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> 

[jira] [Updated] (PHOENIX-5047) can't upgrade phoenix from 4.13 to 4.14.1

2018-11-28 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5047:
-
Labels: cdh  (was: )

> can't upgrade phoenix from 4.13 to 4.14.1
> -
>
> Key: PHOENIX-5047
> URL: https://issues.apache.org/jira/browse/PHOENIX-5047
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: 4.13 on top of cdh 5.13.0
> upgrading to 4.14.1 on top of hbase cdh 5.14.2
>Reporter: Ievgen Nekrashevych
>Priority: Major
>  Labels: cdh
>
> The upgrade scenario is as follows:
> install Phoenix 4.13 on top of HBase 1.2.0-cdh5.13.0 and run a simple script 
> to make sure some data is there:
> {code}
> -- system tables are created on the first connection
> create schema if not exists TS
> create table if not exists TS.TEST (STR varchar not null,INTCOL bigint not 
> null, STARTTIME integer, DUMMY integer default 0 CONSTRAINT PK PRIMARY KEY 
> (STR, INTCOL))
> create local index if not exists "TEST_INDEX" on TS.TEST (STR,STARTTIME)
> upsert into TS.TEST(STR,INTCOL,STARTTIME,DUMMY) values ('TEST',4,1,3)
> -- make sure some data is there
> select * from TS.TEST
> {code}
> Then I shut down everything (queryserver, regionserver, master and 
> zookeeper), install HBase 1.2.0-cdh5.14.2, replace the Phoenix libs with 
> 4.14.1 and start the servers. When I try to connect to the server and run:
> {code}
> select * from TS.TEST
> {code}
> I get:
> {code}
> 2018-11-28 07:53:03,088 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] 
> coprocessor.MetaDataEndpointImpl: Add column failed: 
> org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2368)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addColumn(MetaDataEndpointImpl.java:3242)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16402)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7931)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1969)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1951)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2361)
> ... 10 more
> {code}
> In subsequent calls I get the same exception with a slightly different 
> message saying that I have different versions of client and server jars (with 
> ArrayIndexOutOfBoundsException as the cause, and only 
> ArrayIndexOutOfBoundsException in the server logs), which is not true.
> Serverside exception:
> {code}
> 2018-11-28 08:45:00,611 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] 
> coprocessor.MetaDataEndpointImpl: loading system catalog table inside 
> getVersion failed
> java.lang.ArrayIndexOutOfBoundsException: 63
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1339)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3721)
>

[jira] [Updated] (PHOENIX-5047) can't upgrade phoenix from 4.13 to 4.14.1

2018-11-28 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-5047:
-
Priority: Major  (was: Blocker)

> can't upgrade phoenix from 4.13 to 4.14.1
> -
>
> Key: PHOENIX-5047
> URL: https://issues.apache.org/jira/browse/PHOENIX-5047
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: 4.13 on top of cdh 5.13.0
> upgrading to 4.14.1 on top of hbase cdh 5.14.2
>Reporter: Ievgen Nekrashevych
>Priority: Major
>  Labels: cdh
>
> The upgrade scenario is as follows:
> install Phoenix 4.13 on top of HBase 1.2.0-cdh5.13.0 and run a simple script 
> to make sure some data is there:
> {code}
> -- system tables are created on the first connection
> create schema if not exists TS
> create table if not exists TS.TEST (STR varchar not null,INTCOL bigint not 
> null, STARTTIME integer, DUMMY integer default 0 CONSTRAINT PK PRIMARY KEY 
> (STR, INTCOL))
> create local index if not exists "TEST_INDEX" on TS.TEST (STR,STARTTIME)
> upsert into TS.TEST(STR,INTCOL,STARTTIME,DUMMY) values ('TEST',4,1,3)
> -- make sure some data is there
> select * from TS.TEST
> {code}
> Then I shut down everything (queryserver, regionserver, master and 
> zookeeper), install HBase 1.2.0-cdh5.14.2, replace the Phoenix libs with 
> 4.14.1 and start the servers. When I try to connect to the server and run:
> {code}
> select * from TS.TEST
> {code}
> I get:
> {code}
> 2018-11-28 07:53:03,088 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] 
> coprocessor.MetaDataEndpointImpl: Add column failed: 
> org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2368)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addColumn(MetaDataEndpointImpl.java:3242)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16402)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7931)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1969)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1951)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2361)
> ... 10 more
> {code}
> In subsequent calls I get the same exception with a slightly different 
> message saying that I have different versions of client and server jars (with 
> ArrayIndexOutOfBoundsException as the cause, and only 
> ArrayIndexOutOfBoundsException in the server logs), which is not true.
> Serverside exception:
> {code}
> 2018-11-28 08:45:00,611 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] 
> coprocessor.MetaDataEndpointImpl: loading system catalog table inside 
> getVersion failed
> java.lang.ArrayIndexOutOfBoundsException: 63
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1339)
> at 
> 

[jira] [Updated] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-11-27 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4556:
-
Labels: cdh  (was: )

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
>  Labels: cdh
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind - and bumping the 
> version up to 4.14.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4670) Extend CDH parcel compatibility to minor versions

2018-11-27 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4670:
-
Labels: cdh  (was: )

> Extend CDH parcel compatibility to minor versions
> -
>
> Key: PHOENIX-4670
> URL: https://issues.apache.org/jira/browse/PHOENIX-4670
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.2-cdh5.11.2
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
>  Labels: cdh
>
> In order to start supporting a wider range of CDH versions, the first step is 
> increasing parcel compatibility from the fix version (cdh5.11.2) to the minor 
> version (cdh5.11). This requires a minor change in the parcel.json file.
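For reference, the kind of parcel.json change being described might look like the sketch below; the `depends` field name and the version-range syntax are assumptions based on Cloudera's parcel manifest format, not taken from the actual patch:

```json
{
  "depends": "CDH (>= 5.11), CDH (<< 5.12)"
}
```

Widening the range from an exact fix version to a minor version means one parcel can be activated against any maintenance release of that CDH minor line.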





[jira] [Updated] (PHOENIX-4777) Fix rat:check failure on CDH branches

2018-11-27 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4777:
-
Labels: cdh  (was: )

> Fix rat:check failure on CDH branches
> -
>
> Key: PHOENIX-4777
> URL: https://issues.apache.org/jira/browse/PHOENIX-4777
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
>  Labels: cdh
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4777.patch
>
>
> RAT plugin is currently failing because of file python/.gitignore





[jira] [Created] (PHOENIX-5043) PhoenixSparkIT fails on branch 4.15-cdh

2018-11-26 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-5043:


 Summary: PhoenixSparkIT fails on branch 4.15-cdh
 Key: PHOENIX-5043
 URL: https://issues.apache.org/jira/browse/PHOENIX-5043
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: Pedro Boado
 Fix For: 4.15.0


PhoenixSparkIT fails on branch 4.15-cdh

{code}
- Spark SQL can use Phoenix as a data source with PrunedFilteredScan *** FAILED 
***
  "Project [ID#61L]
  +- Scan PhoenixRelation(TABLE1,localhost:60200:/hbase,false)[ID#61L] 
PushedFilters: [EqualTo(COL1,test_row_1), EqualTo(ID,1)]
  " did not contain "PushedFilters: [IsNotNull(COL1), IsNotNull(ID), 
EqualTo(COL1,test_row_1), EqualTo(ID,1)]" (PhoenixSparkIT.scala:290)
{code}






[jira] [Assigned] (PHOENIX-4956) Distribution of Apache Phoenix 5.0 for CDH 6.0

2018-11-25 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado reassigned PHOENIX-4956:


Assignee: Pedro Boado

> Distribution of Apache Phoenix 5.0 for CDH 6.0
> --
>
> Key: PHOENIX-4956
> URL: https://issues.apache.org/jira/browse/PHOENIX-4956
> Project: Phoenix
>  Issue Type: Task
>Reporter: Curtis Howard
>Assignee: Pedro Boado
>Priority: Minor
>
> Integration of Phoenix 5.x using CDH 6 dependencies





[jira] [Commented] (PHOENIX-4784) Downloads page on website should list xsums/sigs for "active" releases

2018-06-16 Thread Pedro Boado (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514887#comment-16514887
 ] 

Pedro Boado commented on PHOENIX-4784:
--

[~elserj] I've published a new version of the page. Any comments?

> Downloads page on website should list xsums/sigs for "active" releases
> --
>
> Key: PHOENIX-4784
> URL: https://issues.apache.org/jira/browse/PHOENIX-4784
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Pedro Boado
>Priority: Blocker
>
> I was made aware that our downloads page only links to the closer.cgi script 
> and does not proactively point users towards the xsums+sigs hosted (only) on 
> dist.a.o.
> We need to update our website such that we can be confident in saying that we 
> showed users how they need to validate our releases they download from 
> third-party mirrors.
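The validation steps the downloads page needs to document can be sketched as follows; the artifact names are illustrative, and the runnable part uses a dummy file rather than a real release:

```shell
# Checksum/signature validation as it would be documented (names illustrative):
#   sha512sum -c apache-phoenix-<version>-bin.tar.gz.sha512
#   gpg --import KEYS && gpg --verify apache-phoenix-<version>-bin.tar.gz.asc
# Runnable demonstration of the checksum step with a dummy artifact:
printf 'release contents' > artifact.tar.gz
sha512sum artifact.tar.gz > artifact.tar.gz.sha512
sha512sum -c artifact.tar.gz.sha512   # prints "artifact.tar.gz: OK"
```

The point of the page change is that the .sha512/.asc files come only from dist.apache.org, while the tarball itself may come from any mirror.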





[jira] [Assigned] (PHOENIX-4784) Downloads page on website should list xsums/sigs for "active" releases

2018-06-16 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado reassigned PHOENIX-4784:


Assignee: Pedro Boado  (was: Josh Elser)

> Downloads page on website should list xsums/sigs for "active" releases
> --
>
> Key: PHOENIX-4784
> URL: https://issues.apache.org/jira/browse/PHOENIX-4784
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Pedro Boado
>Priority: Blocker
>
> I was made aware that our downloads page only links to the closer.cgi script 
> and does not proactively point users towards the xsums+sigs hosted (only) on 
> dist.a.o.
> We need to update our website such that we can be confident in saying that we 
> showed users how they need to validate our releases they download from 
> third-party mirrors.





[jira] [Commented] (PHOENIX-4784) Downloads page on website should list xsums/sigs for "active" releases

2018-06-16 Thread Pedro Boado (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514862#comment-16514862
 ] 

Pedro Boado commented on PHOENIX-4784:
--

I'll take care of it, Josh.

> Downloads page on website should list xsums/sigs for "active" releases
> --
>
> Key: PHOENIX-4784
> URL: https://issues.apache.org/jira/browse/PHOENIX-4784
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Pedro Boado
>Priority: Blocker
>
> I was made aware that our downloads page only links to the closer.cgi script 
> and does not proactively point users towards the xsums+sigs hosted (only) on 
> dist.a.o.
> We need to update our website such that we can be confident in saying that we 
> showed users how they need to validate our releases they download from 
> third-party mirrors.





[jira] [Resolved] (PHOENIX-4777) Fix rat:check failure on CDH branches

2018-06-11 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4777.
--
   Resolution: Fixed
Fix Version/s: 4.15.0

> Fix rat:check failure on CDH branches
> -
>
> Key: PHOENIX-4777
> URL: https://issues.apache.org/jira/browse/PHOENIX-4777
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4777.patch
>
>
> RAT plugin is currently failing because of file python/.gitignore





[jira] [Commented] (PHOENIX-4777) Fix rat:check failure on CDH branches

2018-06-11 Thread Pedro Boado (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508929#comment-16508929
 ] 

Pedro Boado commented on PHOENIX-4777:
--

CDH branches don't have org.apache:apache:14 as a parent but the CDH parent 
project, which defines the RAT plugin version as 0.6 instead of 0.10. Pinning it 
back to 0.10 solves the issue.
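A hedged sketch of the kind of pom.xml override described above; the coordinates are those of the standard Apache RAT Maven plugin, but its exact placement in the Phoenix build is an assumption, not a copy of the patch:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.rat</groupId>
      <artifactId>apache-rat-plugin</artifactId>
      <!-- The CDH parent pins 0.6; redeclaring 0.10 here overrides it so
           rat:check behaves the same as on the org.apache:apache branches. -->
      <version>0.10</version>
    </plugin>
  </plugins>
</build>
```

Declaring the version in the child pom wins over the version inherited from the parent's pluginManagement.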

> Fix rat:check failure on CDH branches
> -
>
> Key: PHOENIX-4777
> URL: https://issues.apache.org/jira/browse/PHOENIX-4777
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4777.patch
>
>
> RAT plugin is currently failing because of file python/.gitignore





[jira] [Updated] (PHOENIX-4777) Fix rat:check failure on CDH branches

2018-06-11 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4777:
-
Attachment: PHOENIX-4777.patch

> Fix rat:check failure on CDH branches
> -
>
> Key: PHOENIX-4777
> URL: https://issues.apache.org/jira/browse/PHOENIX-4777
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4777.patch
>
>
> RAT plugin is currently failing because of file python/.gitignore





[jira] [Resolved] (PHOENIX-4776) Remove creation of .md5 signatures from dev/make_rc.sh in all branches

2018-06-11 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4776.
--
   Resolution: Fixed
Fix Version/s: 4.15.0
   5.0.0

Applied to all branches

> Remove creation of .md5 signatures from dev/make_rc.sh in all branches 
> ---
>
> Key: PHOENIX-4776
> URL: https://issues.apache.org/jira/browse/PHOENIX-4776
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4776.patch
>
>
> https://checker.apache.org/projs/phoenix.html marks as warnings having .md5 
> signatures in the release repo. These signatures are no longer required as we 
> already have sha256&512. 





[jira] [Updated] (PHOENIX-4776) Remove creation of .md5 signatures from dev/make_rc.sh in all branches

2018-06-11 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4776:
-
Attachment: PHOENIX-4776.patch

> Remove creation of .md5 signatures from dev/make_rc.sh in all branches 
> ---
>
> Key: PHOENIX-4776
> URL: https://issues.apache.org/jira/browse/PHOENIX-4776
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4776.patch
>
>
> https://checker.apache.org/projs/phoenix.html marks as warnings having .md5 
> signatures in the release repo. These signatures are no longer required as we 
> already have sha256&512. 





[jira] [Updated] (PHOENIX-4776) Remove creation of .md5 signatures from dev/make_rc.sh in all branches

2018-06-11 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4776:
-
Summary: Remove creation of .md5 signatures from dev/make_rc.sh in all 
branches   (was: Remove .md5 creation from dev/make_rc.sh in all branches )

> Remove creation of .md5 signatures from dev/make_rc.sh in all branches 
> ---
>
> Key: PHOENIX-4776
> URL: https://issues.apache.org/jira/browse/PHOENIX-4776
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
>
> https://checker.apache.org/projs/phoenix.html marks as warnings having .md5 
> signatures in the release repo. These signatures are no longer required as we 
> already have sha256&512. 





[jira] [Updated] (PHOENIX-4776) Remove .md5 creation from dev/make_rc.sh in all branches

2018-06-11 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4776:
-
Description: https://checker.apache.org/projs/phoenix.html marks as 
warnings having .md5 signatures in the release repo. These signatures are no 
longer required as we already have sha256&512. 

> Remove .md5 creation from dev/make_rc.sh in all branches 
> -
>
> Key: PHOENIX-4776
> URL: https://issues.apache.org/jira/browse/PHOENIX-4776
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
>
> https://checker.apache.org/projs/phoenix.html marks as warnings having .md5 
> signatures in the release repo. These signatures are no longer required as we 
> already have sha256&512. 





[jira] [Resolved] (PHOENIX-4775) Change dev/make_rc.sh script in branch HBase-1.1 to generate .sha256 & .sha512

2018-06-11 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4775.
--
Resolution: Duplicate

Issue solved in PHOENIX-4542, but the cherry-pick had been missing until now. 

> Change dev/make_rc.sh script in branch HBase-1.1 to generate .sha256 & .sha512
> --
>
> Key: PHOENIX-4775
> URL: https://issues.apache.org/jira/browse/PHOENIX-4775
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
>
> The current dev/make_rc.sh script generates a single .sha file instead of the 
> two .sha256 and .sha512 files required by the new ASF checksum policy.
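The change amounts to emitting two checksum files where the script used to emit one; a minimal sketch with an example file name, not the real make_rc.sh code:

```shell
# Emit .sha256 and .sha512 files for a release artifact, as the updated
# make_rc.sh is described to do. The artifact here is a stand-in file.
artifact=phoenix-example-bin.tar.gz
printf 'dummy contents' > "$artifact"
sha256sum "$artifact" > "$artifact.sha256"
sha512sum "$artifact" > "$artifact.sha512"
# Both files can then be verified with the corresponding -c check mode:
sha256sum -c "$artifact.sha256" && sha512sum -c "$artifact.sha512"
```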





[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-06-11 Thread Pedro Boado (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16508836#comment-16508836
 ] 

Pedro Boado commented on PHOENIX-1567:
--

[~elserj] it *does* replace the default artifact, but gpg:sign fails after 
generating it. Just check out v4.14.0-HBase-1.4 (in fact, any of the tags for 
this release) and you'll see it. 

{code:java}
$ mvn clean deploy gpg:sign -DperformRelease=true 
-Dgpg.passphrase=MY_PASSPHRASE -Dgpg.keyname=MY_KEY -DskipTests -P release -pl 
phoenix-core,phoenix-pig,phoenix-tracing-webapp,phoenix-queryserver,phoenix-spark,phoenix-flume,phoenix-pherf,phoenix-queryserver-client,phoenix-hive,phoenix-client,phoenix-server
 -am
{code}

fails with:


{code}
[INFO] 
[INFO] 
[INFO] Building Phoenix Client 4.14.0-HBase-1.4
[INFO] 

(...) 

[WARNING] maven-shade-plugin has detected that some class files are
[WARNING] present in two or more JARs. When this happens, only one
[WARNING] single version of the class is copied to the uber jar.
[WARNING] Usually this is not harmful and you can skip these warnings,
[WARNING] otherwise try to manually exclude artifacts based on
[WARNING] mvn dependency:tree -Ddetail=true and the above output.
[WARNING] See http://maven.apache.org/plugins/maven-shade-plugin/
[INFO] Replacing 
/home/pedro/Development/workspace/phoenix-build/phoenix-client/target/phoenix-4.14.0-HBase-1.4-client.jar
 with 
/home/pedro/Development/workspace/phoenix-build/phoenix-client/target/phoenix-client-4.14.0-HBase-1.4-shaded.jar
[INFO] 
[INFO] --- maven-gpg-plugin:1.6:sign (sign-artifacts) @ phoenix-client ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix . SUCCESS [ 16.155 s]
[INFO] Phoenix Core ... SUCCESS [01:20 min]
[INFO] Phoenix - Flume  SUCCESS [ 16.794 s]
[INFO] Phoenix - Pig .. SUCCESS [01:04 min]
[INFO] Phoenix Query Server Client  SUCCESS [ 26.776 s]
[INFO] Phoenix Query Server ... SUCCESS [ 16.489 s]
[INFO] Phoenix - Pherf  SUCCESS [ 24.597 s]
[INFO] Phoenix - Spark  SUCCESS [ 34.231 s]
[INFO] Phoenix - Hive . SUCCESS [01:21 min]
[INFO] Phoenix Client . FAILURE [ 45.696 s]
[INFO] Phoenix Server . SKIPPED
[INFO] Phoenix - Tracing Web Application .. SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 06:49 min
[INFO] Finished at: 2018-06-11T23:03:32+01:00
[INFO] Final Memory: 185M/3330M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-gpg-plugin:1.6:sign (sign-artifacts) on project 
phoenix-client: The project artifact has not been assembled yet. Please do not 
invoke this goal before the lifecycle phase "package". -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :phoenix-client


{code}



> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-1567.patch
>
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly while it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> 

[jira] [Created] (PHOENIX-4777) Fix rat:check failure on CDH branches

2018-06-09 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4777:


 Summary: Fix rat:check failure on CDH branches
 Key: PHOENIX-4777
 URL: https://issues.apache.org/jira/browse/PHOENIX-4777
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.0
Reporter: Pedro Boado
Assignee: Pedro Boado


RAT plugin is currently failing because of file python/.gitignore





[jira] [Created] (PHOENIX-4776) Remove .md5 creation from dev/make_rc.sh in all branches

2018-06-09 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4776:


 Summary: Remove .md5 creation from dev/make_rc.sh in all branches 
 Key: PHOENIX-4776
 URL: https://issues.apache.org/jira/browse/PHOENIX-4776
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.0
Reporter: Pedro Boado
Assignee: Pedro Boado








[jira] [Created] (PHOENIX-4775) Change dev/make_rc.sh script in branch HBase-1.1 to generate .sha256 & .sha512

2018-06-09 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4775:


 Summary: Change dev/make_rc.sh script in branch HBase-1.1 to 
generate .sha256 & .sha512
 Key: PHOENIX-4775
 URL: https://issues.apache.org/jira/browse/PHOENIX-4775
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.0
Reporter: Pedro Boado
Assignee: Pedro Boado


The current dev/make_rc.sh script generates a single .sha file instead of the 
two .sha256 and .sha512 files required by the new ASF checksum policy.





[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-06-08 Thread Pedro Boado (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506716#comment-16506716
 ] 

Pedro Boado commented on PHOENIX-1567:
--

Just released 4.14.0 with the classifier, but I couldn't get it working without 
it. This is the error that I am getting:

{code}
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 06:47 min
[INFO] Finished at: 2018-06-09T00:43:18+01:00
[INFO] Final Memory: 194M/3102M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-gpg-plugin:1.6:sign (default-cli) on project 
phoenix-client: The project artifact has not been assembled yet. Please do not 
invoke this goal before the lifecycle phase "package". -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 

{code}

We might get it fixed for the next release. Any suggestion?

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-1567.patch
>
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly while it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename current client & server jar to phoenix-assembly-client/server.jar 
> to match the jars published to maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.





[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-06-08 Thread Pedro Boado (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506685#comment-16506685
 ] 

Pedro Boado commented on PHOENIX-1567:
--

[~an...@apache.org] by changing that value this coordinate gets pushed (with 
classifier "shaded"). No original artifact is published, because it is 
replaced:

{code:xml}
<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-client</artifactId>
  <version>4.14.0-cdh5.11.2</version>
  <classifier>shaded</classifier>
</dependency>
{code}


What is the idea, getting it published with or without the classifier? 

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-1567.patch
>
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly while it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename current client & server jar to phoenix-assembly-client/server.jar 
> to match the jars published to maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.





[jira] [Reopened] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-06-08 Thread Pedro Boado (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado reopened PHOENIX-1567:
--

[~an...@apache.org] 

When releasing 4.14 we realized that the gpg:sign for both phoenix-client and 
phoenix-server artifacts was failing. 

The issue is solved by changing the maven-shade-plugin configuration for both 
modules to 

{code:xml}
<shadedArtifactAttached>true</shadedArtifactAttached>
{code}

Is there any reason for not having it attached in the current code?

Thanks 
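For context, the flag under discussion lives in the shade plugin's configuration block; the stanza below is a sketch assuming the flag is maven-shade-plugin's `shadedArtifactAttached`, not a copy of the actual Phoenix pom:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <!-- Attach the shaded jar under a classifier instead of replacing the
         main artifact, so gpg:sign still finds an assembled main artifact. -->
    <shadedArtifactAttached>true</shadedArtifactAttached>
  </configuration>
</plugin>
```

With the default (false), the shaded jar replaces the main artifact during the package phase, which is consistent with the "artifact has not been assembled yet" failure reported earlier in the thread.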

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-1567.patch
>
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly while it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename current client & server jar to phoenix-assembly-client/server.jar 
> to match the jars published to maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.





[jira] [Commented] (PHOENIX-3163) Split during global index creation may cause ERROR 201 error

2018-05-13 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473653#comment-16473653
 ] 

Pedro Boado commented on PHOENIX-3163:
--

Hi [~sergey.soldatov] 

I've just noticed that SkipScanAfterManualSplitIT.testManualSplit started 
failing in branch 4.x-HBase-1.2.
{code:java}
2018-05-13 23:41:03,729 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=43819] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=1,queue=0,port=43819: callId: 465 service: 
ClientService methodName: Scan size: 595 connection: 127.0.0.1:57462
org.apache.hadoop.hbase.NotServingRegionException: Region 
T02,\x01,1526251248024.34b289cddcb2b99d8e776602b796d731. is not online on 
xps,43819,1526251221783
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2942)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1072)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2410)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2188)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
2018-05-13 23:41:03,729 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=43819] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=2,queue=0,port=43819: callId: 467 service: 
ClientService methodName: Scan size: 586 connection: 127.0.0.1:57462
org.apache.hadoop.hbase.NotServingRegionException: Region 
T02,\x02,1526251248024.9201bdc4f44225f390edb40ab1548a82. is not online on 
xps,43819,1526251221783
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2942)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1072)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2410)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2188)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
2018-05-13 23:41:03,729 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=43819] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=0,queue=0,port=43819: callId: 466 service: 
ClientService methodName: Scan size: 585 connection: 127.0.0.1:57462
org.apache.hadoop.hbase.NotServingRegionException: Region 
T02,,1526251248024.9bb19fa73f91248dd407192c4ce512fe. is not online on 
xps,43819,1526251221783
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2942)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1072)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2410)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2188)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
2018-05-13 23:41:03,729 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=43819] 
org.apache.hadoop.hbase.ipc.CallRunner(115): 
B.defaultRpcServer.handler=3,queue=0,port=43819: callId: 468 service: 
ClientService methodName: Scan size: 595 connection: 127.0.0.1:57462
org.apache.hadoop.hbase.NotServingRegionException: Region 
T02,\x03,1526251248024.6944b7e5e33cdcbdcc674c745ad8c1a5. is not online on 
xps,43819,1526251221783
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2942)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1072)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2410)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2188)
at 

[jira] [Commented] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459319#comment-16459319
 ] 

Pedro Boado commented on PHOENIX-4719:
--

Patch attached. [~jamestaylor] can you review?
The issue was detected in one of the cdh branches, but it could have happened with
any other HBase version. Would you push the change to all branches?

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4719:
-
Attachment: PHOENIX-4719.patch

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.





[jira] [Updated] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4719:
-
Attachment: (was: phoenix.iml)

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.





[jira] [Updated] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4719:
-
Attachment: phoenix.iml

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.





[jira] [Updated] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4719:
-
Attachment: dump-rs.log

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.





[jira] [Commented] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459313#comment-16459313
 ] 

Pedro Boado commented on PHOENIX-4719:
--






RS reaches a static initialization deadlock between 
org.apache.phoenix.exception.SQLExceptionCode and 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.

SQLExceptionCode:246 uses a static member of PhoenixDatabaseMetaData, and 
PhoenixDatabaseMetaData:93 (a static field) ends up accessing a static field 
from SQLExceptionCode when building TableProperty:237.

In the process this also ends up blocking ServerUtil:73 and, indirectly, 
DelegateRegionCoprocessorEnvironment:50.
 [^dump-rs.log] 
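The cycle described in the comment can be reproduced in miniature. In this sketch (hypothetical classes A and B, not the actual Phoenix classes) each class's static initializer reads a static field of the other. A single thread is allowed to re-enter an in-progress class initialization, so it silently observes a default (null) value; two threads initializing A and B concurrently instead block forever on each other's class-initialization locks, which is the deadlock visible in the region server dump.

```java
// Minimal sketch of a static-initialization cycle (hypothetical classes,
// not Phoenix code). A's initializer runs top to bottom: it reaches PARTNER,
// which triggers B's initialization, and B reads back A.NAME before A has
// assigned it.
class A {
    // Declared before NAME on purpose: triggers B while A is half-initialized.
    static final String PARTNER = B.FROM_A;
    // String.valueOf keeps NAME from being a compile-time constant, so
    // reading it from B really does go through class initialization.
    static final String NAME = String.valueOf("A");
}

class B {
    // Re-entrant access: A is mid-initialization, so A.NAME is still null.
    static final String FROM_A = A.NAME;
}

class InitCycleDemo {
    public static void main(String[] args) {
        // Single-threaded: no deadlock, but B captured a null from A.
        System.out.println("A.NAME   = " + A.NAME);    // "A"
        System.out.println("B.FROM_A = " + B.FROM_A);  // null
    }
}
```

With two threads, one touching A and the other touching B at the same time, each thread holds its own class's initialization lock while waiting for the other's, and neither can make progress.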

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.





[jira] [Created] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4719:


 Summary: Avoid static initialization deadlock while loading regions
 Key: PHOENIX-4719
 URL: https://issues.apache.org/jira/browse/PHOENIX-4719
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0, 5.0.0
 Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
Reporter: Pedro Boado
Assignee: Pedro Boado


HBase cluster initialization appears to fail as RS is not able to serve all 
table regions. 

Almost all table regions are stuck in transition waiting for the first three 
regions to be opened. After a while the process times out and RS fails.






[jira] [Commented] (PHOENIX-4689) cannot bind enum descriptor to a non-enum class

2018-04-13 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436985#comment-16436985
 ] 

Pedro Boado commented on PHOENIX-4689:
--

Hi, that version has neither been released nor supported by Apache Phoenix. Please 
raise the issue with Cloudera. 

We have started building packages for Phoenix on CDH with 4.13.2, and only for 
CDH 5.11.2. Future versions will hopefully support a wider range of CDH 
versions, starting with CDH 5.11.x.

> cannot bind enum descriptor to a non-enum class
> ---
>
> Key: PHOENIX-4689
> URL: https://issues.apache.org/jira/browse/PHOENIX-4689
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: phoenix-4.7.0-cdh5.5.1
> pig-0.12.0-cdh5.5.0
> hadoop-2.6.0-cdh5.5.0
> hbase.1.0-cdh5.5.0
>Reporter: ZhongyuWang
>Priority: Major
>
> [https://github.com/chiastic-security/phoenix-for-cloudera]
> I use the phoenix-4.7.0-cdh5.5.1 Phoenix build from the GitHub link above, but 
> when I use Pig to load data from HBase to HDFS with MapReduce I get a 
> "cannot bind enum descriptor to a non-enum class" error. I can run it 
> successfully in local MapReduce mode.
> pig -x mapreduce example1.pig
> example1.pig
> REGISTER 
> /e3base/phoenix/phoenix-4.7.0-cdh5.5.1/phoenix-4.7.0-cdh5.5.1-client.jar;
> rows = load 'hbase://query/SELECT ID,ACCOUNT,PASSWD FROM USER' USING 
> org.apache.phoenix.pig.PhoenixHBaseLoader('KFAPP74:11001');
> STORE rows INTO 'USER.csv' USING PigStorage(',');
> Mapreduce error log
> [main] INFO 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
>  - Failed!
> [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2997: Unable to 
> recreate exception from backed error: 
> AttemptID:attempt_1515656040682_0049_m_00_3 Info:Error: 
> java.io.IOException: Deserialization error: cannot bind enum descriptor to a 
> non-enum class
>  at 
> org.apache.pig.impl.util.ObjectSerializer.deserialize(ObjectSerializer.java:62)
>  at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.setup(PigGenericMapBase.java:171)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.io.InvalidClassException: cannot bind enum descriptor to a 
> non-enum class
>  at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:608)
>  at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
>  at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1515)
>  at java.io.ObjectInputStream.readEnum(ObjectInputStream.java:1723)
>  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
>  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1989)
>  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1913)
>  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
>  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1348)
>  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>  at 
> org.apache.pig.impl.util.ObjectSerializer.deserialize(ObjectSerializer.java:60)
>  ... 9 more
>  
>  
>  
>  
>  





[jira] [Comment Edited] (PHOENIX-4671) Fix minor size accounting bug for MutationSize

2018-03-30 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16421041#comment-16421041
 ] 

Pedro Boado edited comment on PHOENIX-4671 at 3/30/18 11:05 PM:


4.x-cdh5.11 is the "main" cdh branch; the other three (cdh5.12, 5.13 and 5.14) are 
periodically rebased onto it (so we keep a difference of one commit).

Btw I cherry-picked the commit from 4.x-HBase-1.2 a few hours ago.


was (Author: pboado):
4.x-cdh5.11 is the "main" cdh branch , the other three (cdh5.12, 13&14) are 
being periodically rebased onto it ( so we keep having a difference of 1 commit 
)

Btw I cherry picked the commit from 4.x-Hbase-1.2 a few hours

> Fix minor size accounting bug for MutationSize
> --
>
> Key: PHOENIX-4671
> URL: https://issues.apache.org/jira/browse/PHOENIX-4671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4671-v2.txt, 4671.txt
>
>
> Just ran into a bug where UPSERT INTO table ... SELECT ... FROM table would 
> fail due to "Error: ERROR 730 (LIM02): MutationState size is bigger than 
> maximum allowed number of bytes (state=LIM02,code=730)" even with auto commit 
> on.
> Ran it through a debugger, just a simple accounting bug.
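The kind of accounting bug described can be sketched generically (hypothetical MutationBuffer class, not Phoenix's actual MutationState code): a batch keeps a running byte total against a limit, and correctness hinges on resetting that total whenever the batch is flushed on auto-commit. If the reset is missed, the counter keeps growing across commits until the size-limit error fires even though little data is actually pending.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the accounting pattern, not Phoenix source.
// The bug class this kind of fix addresses is forgetting the reset in
// flush(), so the running total grows across auto-commits.
class MutationBuffer {
    private final long maxSizeBytes;
    private long currentSizeBytes;                // running total for the open batch
    private final List<byte[]> pending = new ArrayList<>();
    private int flushCount;

    MutationBuffer(long maxSizeBytes) {
        this.maxSizeBytes = maxSizeBytes;
    }

    void add(byte[] mutation) {
        if (currentSizeBytes + mutation.length > maxSizeBytes) {
            flush();                              // auto-commit path
        }
        pending.add(mutation);
        currentSizeBytes += mutation.length;
    }

    void flush() {
        pending.clear();
        currentSizeBytes = 0;                     // the reset the buggy code would miss
        flushCount++;
    }

    int getFlushCount() { return flushCount; }
    long getCurrentSizeBytes() { return currentSizeBytes; }
}
```

With a 100-byte budget, four 30-byte mutations trigger exactly one flush and leave 30 bytes pending; without the reset in flush(), the fourth add would already report an over-limit state.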





[jira] [Comment Edited] (PHOENIX-4671) Fix minor size accounting bug for MutationSize

2018-03-30 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16421041#comment-16421041
 ] 

Pedro Boado edited comment on PHOENIX-4671 at 3/30/18 11:01 PM:


4.x-cdh5.11 is the "main" cdh branch , the other three (cdh5.12, 13&14) are 
being periodically rebased onto it ( so we keep having a difference of 1 commit 
)

Btw I cherry picked the commit from 4.x-Hbase-1.2 a few hours


was (Author: pboado):
4.x-cdh5.11 is the "main" cdh branch , the other three (cdh5.12, 13&14) are 
being periodically rebased onto it ( so we keep having a difference of 1 commit 
)

> Fix minor size accounting bug for MutationSize
> --
>
> Key: PHOENIX-4671
> URL: https://issues.apache.org/jira/browse/PHOENIX-4671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4671-v2.txt, 4671.txt
>
>
> Just ran into a bug where UPSERT INTO table ... SELECT ... FROM table would 
> fail due to "Error: ERROR 730 (LIM02): MutationState size is bigger than 
> maximum allowed number of bytes (state=LIM02,code=730)" even with auto commit 
> on.
> Ran it through a debugger, just a simple accounting bug.





[jira] [Commented] (PHOENIX-4671) Fix minor size accounting bug for MutationSize

2018-03-30 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16421041#comment-16421041
 ] 

Pedro Boado commented on PHOENIX-4671:
--

4.x-cdh5.11 is the "main" cdh branch , the other three (cdh5.12, 13&14) are 
being periodically rebased onto it ( so we keep having a difference of 1 commit 
)

> Fix minor size accounting bug for MutationSize
> --
>
> Key: PHOENIX-4671
> URL: https://issues.apache.org/jira/browse/PHOENIX-4671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4671-v2.txt, 4671.txt
>
>
> Just ran into a bug where UPSERT INTO table ... SELECT ... FROM table would 
> fail due to "Error: ERROR 730 (LIM02): MutationState size is bigger than 
> maximum allowed number of bytes (state=LIM02,code=730)" even with auto commit 
> on.
> Ran it through a debugger, just a simple accounting bug.





[jira] [Resolved] (PHOENIX-4670) Extend CDH parcel compatibility to minor versions

2018-03-23 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4670.
--
Resolution: Fixed

> Extend CDH parcel compatibility to minor versions
> -
>
> Key: PHOENIX-4670
> URL: https://issues.apache.org/jira/browse/PHOENIX-4670
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.2-cdh5.11.2
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
>
> In order to start supporting a wider range of CDH versions, the first step is 
> increasing parcel compatibility from patch level (cdh5.11.2) to minor level 
> (cdh5.11). This requires a small change in the parcel.json file.





[jira] [Created] (PHOENIX-4670) Extend CDH parcel compatibility to minor versions

2018-03-23 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4670:


 Summary: Extend CDH parcel compatibility to minor versions
 Key: PHOENIX-4670
 URL: https://issues.apache.org/jira/browse/PHOENIX-4670
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.13.2-cdh5.11.2
Reporter: Pedro Boado
Assignee: Pedro Boado


In order to start supporting a wider range of CDH versions, the first step is 
increasing parcel compatibility from patch level (cdh5.11.2) to minor level 
(cdh5.11). This requires a small change in the parcel.json file.
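The change described is roughly the following kind of edit to parcel.json. This is a hypothetical sketch: the field names follow Cloudera's parcel metadata format, but the exact constraint syntax and values here are assumptions, not the committed diff. The idea is relaxing the CDH dependency from pinning the exact maintenance release (5.11.2) to accepting any 5.11 maintenance release.

```json
{
  "schema_version": 1,
  "name": "APACHE_PHOENIX",
  "version": "4.13.2-cdh5.11.2.p0.0",
  "depends": "CDH (>= 5.11), CDH (< 5.12)"
}
```

Before the change, the "depends" constraint would have matched only CDH 5.11.2 itself, so the parcel could not be activated on, say, CDH 5.11.1.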





[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-21 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408632#comment-16408632
 ] 

Pedro Boado commented on PHOENIX-4661:
--

Hi guys, I'm getting errors in branches 4.x-HBase-1.2 and 4.x-cdh5.11.2 (I 
haven't checked others) for (at least) SystemTablePermissionsIT and 
ChangePermissionsIT after this commit, with the same errors as in the Jenkins 
build.

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier 

[jira] [Commented] (PHOENIX-4640) Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache

2018-03-11 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394623#comment-16394623
 ] 

Pedro Boado commented on PHOENIX-4640:
--

It fixed it. Thanks [~jamestaylor]

> Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache
> ---
>
> Key: PHOENIX-4640
> URL: https://issues.apache.org/jira/browse/PHOENIX-4640
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4640_v1.patch
>
>
> Since stats have their own client-side cache, there's no need to consider 
> STATS_UPDATE_FREQ_MS_ATTRIB for the server-side TTL cache.





[jira] [Commented] (PHOENIX-4640) Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache

2018-03-11 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16394578#comment-16394578
 ] 

Pedro Boado commented on PHOENIX-4640:
--

Hi [~jamestaylor], any ideas why this commit could have broken 
TenantSpecificTablesDDLIT and ViewIT in 4.x-cdh5.11.2? Branches 4.x-HBase-1.2 
and 4.x-HBase-1.3 look ok. 
{code:java}
[ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
161.966 s <<< FAILURE! - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
[ERROR] 
testAllDropParentTableWithCascadeWithMultipleTenantTablesAndIndexes(org.apache.phoenix.end2end.TenantSpecificTablesDDLIT)
  Time elapsed: 9.184 s  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: V_T46: Table qualifier must 
not be empty
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2031)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7874)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1989)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1971)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
Caused by: java.lang.IllegalArgumentException: Table qualifier must not be empty
at 
org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:179)
at 
org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:149)
at org.apache.hadoop.hbase.TableName.<init>(TableName.java:322)
at 
org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:358)
at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:418)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1985)
... 9 more

at 
org.apache.phoenix.end2end.TenantSpecificTablesDDLIT.validateTenantViewIsDropped(TenantSpecificTablesDDLIT.java:411)
at 
org.apache.phoenix.end2end.TenantSpecificTablesDDLIT.testAllDropParentTableWithCascadeWithMultipleTenantTablesAndIndexes(TenantSpecificTablesDDLIT.java:378)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: V_T46: Table qualifier must 
not be empty
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2031)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7874)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1989)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1971)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
Caused by: java.lang.IllegalArgumentException: Table qualifier must not be empty
at 
org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:179)
at 
org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:149)
at org.apache.hadoop.hbase.TableName.<init>(TableName.java:322)
at 
org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:358)
at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:418)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1985)
... 9 more

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: 
org.apache.hadoop.hbase.DoNotRetryIOException: V_T46: Table qualifier must 
not be empty
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
at 

[jira] [Resolved] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4553.
--
Resolution: Fixed

Solved in commit #0aba3a9a

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: PHOENIX-4553-00.patch, hbase-master.log, 
> hbase-region.log, master-stderr.log, master-stdout.log, region-stderr.log, 
> region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not start. 
> There seems to be a problem with the shaded thin-client, because if it is 
> removed from the parcel everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have caused this bug.
> Below I put startup log for the HBaseMaster
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}





[jira] [Updated] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4553:
-
Attachment: PHOENIX-4553-00.patch

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: PHOENIX-4553-00.patch, hbase-master.log, 
> hbase-region.log, master-stderr.log, master-stdout.log, region-stderr.log, 
> region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not start. 
> There seem to be problems with the shaded thin-client, because if it is removed 
> from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}





[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393874#comment-16393874
 ] 

Pedro Boado edited comment on PHOENIX-4553 at 3/10/18 12:43 AM:


[~neospyk]  I've come across this issue again and noticed that the classpath 
configured is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in the HBase classpath. After amending it, the logs are clear again.

Can you please confirm that amending the file 
/opt/cloudera/parcels/APACHE_PHOENIX/meta/phoenix_env.sh and replacing 

{code}
APPENDSTRING=`echo ${MYLIBDIR}/*.jar | sed 's/ /:/g'`
{code}

with

{code}
APPENDSTRING=`echo ${MYLIBDIR}/phoenix-*-server.jar | sed 's/ /:/g'`
{code}

solves the issue with the warnings appearing during RS startup?
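The effect of narrowing the glob can be sketched as follows. This is a hypothetical demonstration, not the real parcel script: the jar names below mimic the CDH parcel layout, and a temporary directory stands in for the parcel's lib directory.

```shell
# Sketch of the phoenix_env.sh change: narrowing the glob so that only the
# server jar is appended to the HBase classpath.
MYLIBDIR=$(mktemp -d)
touch "${MYLIBDIR}/phoenix-4.13.2-cdh5.11.2-client.jar" \
      "${MYLIBDIR}/phoenix-4.13.2-cdh5.11.2-server.jar" \
      "${MYLIBDIR}/phoenix-4.13.2-cdh5.11.2-thin-client.jar"

# Original line: every jar in the directory lands on the classpath.
APPENDSTRING=`echo ${MYLIBDIR}/*.jar | sed 's/ /:/g'`
echo "before: ${APPENDSTRING}"

# Amended line: only phoenix-*-server.jar is kept.
APPENDSTRING=`echo ${MYLIBDIR}/phoenix-*-server.jar | sed 's/ /:/g'`
echo "after:  ${APPENDSTRING}"
```

With the original glob, the client, hive, and thin-client jars (which bundle shaded SLF4J and Hadoop classes) all end up on the HBase classpath, which is what produced the multiple-binding warnings and the shaded SecurityInfo provider error.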


was (Author: pboado):
[~neospyk]  I've come across this issue again and noticed that the classpath 
configured is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in the HBase classpath. After amending it, the logs are clear again.

Can you please confirm that amending the file 
/opt/cloudera/parcels/APACHE_PHOENIX/meta/phoenix_env.sh and replacing 

{code}
APPENDSTRING=`echo ${MYLIBDIR}/phoenix-*-server.jar | sed 's/ /:/g'`
{code}

by



> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not start. 
> There seem to be problems with the shaded thin-client, because if it is removed 
> from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> 

[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393874#comment-16393874
 ] 

Pedro Boado edited comment on PHOENIX-4553 at 3/10/18 12:42 AM:


[~neospyk]  I've come across this issue again and noticed that the classpath 
configured is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in the HBase classpath. After amending it, the logs are clear again.

Can you please confirm that amending the file 
/opt/cloudera/parcels/APACHE_PHOENIX/meta/phoenix_env.sh and replacing 

{code}
APPENDSTRING=`echo ${MYLIBDIR}/phoenix-*-server.jar | sed 's/ /:/g'`
{code}

by




was (Author: pboado):
[~neospyk]  I've come across this issue again and noticed that the classpath 
configured is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in the HBase classpath. After amending it, the logs are clear again.

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not start. 
> There seem to be problems with the shaded thin-client, because if it is removed 
> from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}





[jira] [Commented] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393874#comment-16393874
 ] 

Pedro Boado commented on PHOENIX-4553:
--

[~neospyk]  I've come across this issue again and noticed that the classpath 
configured is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in the HBase classpath. After amending it, the logs are clear again.

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not start. 
> There seem to be problems with the shaded thin-client, because if it is removed 
> from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}





[jira] [Assigned] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado reassigned PHOENIX-4553:


Assignee: Pedro Boado

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not start. 
> There seem to be problems with the shaded thin-client, because if it is removed 
> from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}





[jira] [Resolved] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-31 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4556.
--
Resolution: Fixed

Committed 519cca954..9994059a0

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master (it was quite behind) and bumping the version 
> up to 4.14.





[jira] [Closed] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-31 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado closed PHOENIX-4556.


> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master (it was quite behind) and bumping the version 
> up to 4.14.





[jira] [Closed] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-31 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado closed PHOENIX-4554.


> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4554
> URL: https://issues.apache.org/jira/browse/PHOENIX-4554
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: 
> 0001-PHOENIX-4437-Make-QueryPlan.getEstimatedBytesToScan-.patch, 
> 0002-PHOENIX-4488-Cache-config-parameters-for-MetaDataEnd.patch
>
>
> Syncing 4.x-HBase-1.2 with master (two commits were missing).





[jira] [Resolved] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-31 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4554.
--
Resolution: Fixed

Done, 878a264e5..afe21dc72

> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4554
> URL: https://issues.apache.org/jira/browse/PHOENIX-4554
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: 
> 0001-PHOENIX-4437-Make-QueryPlan.getEstimatedBytesToScan-.patch, 
> 0002-PHOENIX-4488-Cache-config-parameters-for-MetaDataEnd.patch
>
>
> Syncing 4.x-HBase-1.2 with master (two commits were missing).





[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2018-01-30 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345741#comment-16345741
 ] 

Pedro Boado commented on PHOENIX-4372:
--

[~Deoashish] please raise these kinds of questions on the user mailing list. 
Thanks.

> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.2-cdh5.11.2
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372-v5.patch, PHOENIX-4372-v6.patch, 
> PHOENIX-4372-v7.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2.





[jira] [Closed] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2018-01-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado closed PHOENIX-4372.


Released.

> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.2-cdh5.11.2
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372-v5.patch, PHOENIX-4372-v6.patch, 
> PHOENIX-4372-v7.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2.





[jira] [Commented] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-26 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340752#comment-16340752
 ] 

Pedro Boado commented on PHOENIX-4553:
--

Are you running a quickstart VM? That is basically the test that I ran 
yesterday; the logs are almost 100% the same.
The only difference is the network addressing: my VM uses a bridged network.
But the HMaster is definitely running and waiting for an RS to connect to it:
{code}
2018-01-26 07:57:04,571 INFO org.apache.hadoop.hbase.master.ServerManager: 
Waiting for region servers count to settle; currently checked in 0, slept for 
324842 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, 
interval of 1500 ms.  
{code}
And the RS is definitely running and waiting to connect to the HMaster:
{code}
2018-01-26 07:56:34,777 INFO 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for hbase/quickstart.cloudera@CLOUDERA 
(auth:KERBEROS) for protocol=interface 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingInterface
 2018-01-26 07:56:35,616 WARN 
org.apache.hadoop.hbase.regionserver.HRegionServer: error telling master we are 
up com.google.protobuf.ServiceException: java.io.IOException: Call to 
quickstart.cloudera/172.23.0.2:6 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=22, waitTime=10001, 
operationTimeout=1 expired. at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
 at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
 at 
org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
 at 
org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2324)
 at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:922) 
at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: Call 
to quickstart.cloudera/172.23.0.2:6 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=22, waitTime=10001, 
operationTimeout=1 expired. at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:292)
 at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1273) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
 ... 5 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call 
id=22, waitTime=10001, operationTimeout=1 expired. at 
org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73) at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1247) ... 6 
more
{code}
I've run into similar issues before when running this VM with insufficient 
resources: process delays were too high to keep a stable cluster running.
Have you noticed that the RS is starting way earlier than the master?
I don't think this is an issue (unless someone else reports it as well). 
I'll keep the ticket open for a while just in case.

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not start. 
> There seem to be problems with the shaded thin-client, because if it is removed 
> from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> 

[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-26 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340752#comment-16340752
 ] 

Pedro Boado edited comment on PHOENIX-4553 at 1/26/18 9:01 AM:
---

Are you running a quickstart VM? That is basically the test that I ran 
yesterday - the logs are almost 100% the same. The only difference is network 
addressing - my VM uses a bridged network.
But the HMaster is definitely running and waiting for an RS to connect to it:
{code}
2018-01-26 07:57:04,571 INFO org.apache.hadoop.hbase.master.ServerManager: 
Waiting for region servers count to settle; currently checked in 0, slept for 
324842 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, 
interval of 1500 ms.  
{code}
And the RS is definitely running and waiting to connect to the HMaster:
{code}
2018-01-26 07:56:34,777 INFO 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for hbase/quickstart.cloudera@CLOUDERA 
(auth:KERBEROS) for protocol=interface 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingInterface
 2018-01-26 07:56:35,616 WARN 
org.apache.hadoop.hbase.regionserver.HRegionServer: error telling master we are 
up com.google.protobuf.ServiceException: java.io.IOException: Call to 
quickstart.cloudera/172.23.0.2:6 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=22, waitTime=10001, 
operationTimeout=1 expired. at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
 at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
 at 
org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
 at 
org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2324)
 at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:922) 
at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: Call 
to quickstart.cloudera/172.23.0.2:6 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=22, waitTime=10001, 
operationTimeout=1 expired. at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:292)
 at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1273) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
 ... 5 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call 
id=22, waitTime=10001, operationTimeout=1 expired. at 
org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73) at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1247) ... 6 
more
{code}
I've run into similar issues before when running this VM with insufficient 
resources - process delays were too high to keep a stable cluster running. 
Have you noticed that the RS is starting well before the master? I don't think 
this is an issue - unless someone else reports it as well. I'll keep the 
ticket open for a while just in case.


was (Author: pboado):
Are you running a quickstart VM? That is basically the test that I ran 
yesterday - the logs are almost 100% the same. The only difference is network 
addressing - my VM uses a bridged network.
But the HMaster is definitely running and waiting for an RS to connect to it:
{code}
2018-01-26 07:57:04,571 INFO org.apache.hadoop.hbase.master.ServerManager: 
Waiting for region servers count to settle; currently checked in 0, slept for 
324842 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, 
interval of 1500 ms.  
{code}
And the RS is definitely running and waiting to connect to the HMaster:
{code}
2018-01-26 07:56:34,777 INFO 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for hbase/quickstart.cloudera@CLOUDERA 
(auth:KERBEROS) for protocol=interface 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingInterface
 2018-01-26 07:56:35,616 WARN 
org.apache.hadoop.hbase.regionserver.HRegionServer: error telling master we are 
up com.google.protobuf.ServiceException: java.io.IOException: Call to 
quickstart.cloudera/172.23.0.2:6 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=22, waitTime=10001, 
operationTimeout=1 expired. at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
 at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
 at 

[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-25 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340251#comment-16340251
 ] 

Pedro Boado edited comment on PHOENIX-4553 at 1/25/18 11:36 PM:


- EDIT sorry, I misread the log - 

[~jamestaylor] I rechecked (again) our parcels over a fresh CDH install and I 
think they are fine. There must be some classpath difference in that specific 
installation. I cannot reproduce the issue. 

[~neospyk] can you please attach your full HBase Master classpath? It should 
be displayed at the very beginning of the process startup. By the way, I don't 
see any SLF4J warnings in my logs. 

EDIT: I've just noticed you might be running a kerberised service. I've set up 
Kerberos in my testing environment and both HBase Master & RS are starting 
just fine. Is there any additional feature enabled in your CDH install that I 
should consider? 


This is my classpath

{code}
env:HBASE_CLASSPATH=/var/run/cloudera-scm-agent/process/147-hbase-MASTER:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/lib/hadoop/*:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/lib/hadoop/lib/*:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/bin/../lib/zookeeper/*:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/bin/../lib/zookeeper/lib/*:/usr/share/cmf/lib/plugins/tt-instrumentation-5.13.1.jar:/usr/share/cmf/lib/plugins/event-publish-5.13.1-shaded.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.12.1-shaded.jar:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-queryserver.jar:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar
{code}
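When a master fails to start like this, the duplicate SLF4J bindings are easy to pull straight out of the startup log. A minimal sketch, assuming the log was saved locally - `hbase-master.log` is a stand-in path, and the sample lines below are reproduced from this report rather than a live cluster:

```shell
# "hbase-master.log" is a placeholder; point it at the real master startup log.
# The two sample lines mirror the bindings reported in this ticket.
cat > hbase-master.log <<'EOF'
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
EOF

# List the jars that each bundle an SLF4J backend, then count the bindings.
# More than one binding means some jar (here, a Phoenix client jar) shades its
# own logging backend onto the service classpath.
grep -o 'jar:file:[^!]*' hbase-master.log | sort -u
grep -c 'SLF4J: Found binding' hbase-master.log
```

Tracing each listed jar back to its parcel shows which artifact is injecting the extra binding.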


was (Author: pboado):
- EDIT sorry, I misread the log - 

[~jamestaylor] I rechecked (again) our parcels over a fresh CDH install and I 
think they are fine. There must be some classpath difference in that specific 
installation. I cannot reproduce the issue. 

[~neospyk] can you please attach your full HBase Master classpath? It should 
be displayed at the very beginning of the process startup. By the way, I don't 
see any SLF4J warnings in my logs. 

EDIT: I've just noticed you might be running a kerberised service. I've set up 
Kerberos in my testing environment and both HBase Master & RS are starting 
just fine. Is there any additional feature enabled in your CDH install that I 
should consider? 

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Minor
>
> After activating the parcel, the HBase Master and Region Server could not 
> start. There seem to be problems with the shaded thin-client, because if it 
> is removed from the parcel, everything works great.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below I put startup log for the HBaseMaster
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> 

[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-25 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340251#comment-16340251
 ] 

Pedro Boado edited comment on PHOENIX-4553 at 1/25/18 11:33 PM:


- EDIT sorry, I misread the log - 

[~jamestaylor] I rechecked (again) our parcels over a fresh CDH install and I 
think they are fine. There must be some classpath difference in that specific 
installation. I cannot reproduce the issue. 

[~neospyk] can you please attach your full HBase Master classpath? It should 
be displayed at the very beginning of the process startup. By the way, I don't 
see any SLF4J warnings in my logs. 

EDIT: I've just noticed you might be running a kerberised service. I've set up 
Kerberos in my testing environment and both HBase Master & RS are starting 
just fine. Is there any additional feature enabled in your CDH install that I 
should consider? 


was (Author: pboado):
- EDIT sorry, I misread the log - 

[~jamestaylor] I rechecked (again) our parcels over a fresh CDH install and I 
think they are fine. There must be some classpath difference in that specific 
installation. I cannot reproduce the issue. 

[~neospyk] can you please attach your full HBase Master classpath? It should 
be displayed at the very beginning of the process. By the way, I don't see 
any SLF4J warnings. 

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Minor
>
> After activating the parcel, the HBase Master and Region Server could not 
> start. There seem to be problems with the shaded thin-client, because if it 
> is removed from the parcel, everything works great.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below I put startup log for the HBaseMaster
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at 

[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-25 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340251#comment-16340251
 ] 

Pedro Boado edited comment on PHOENIX-4553 at 1/25/18 11:07 PM:


- EDIT sorry, I misread the log - 

[~jamestaylor] I rechecked (again) our parcels over a fresh CDH install and I 
think they are fine. There must be some classpath difference in that specific 
installation. I cannot reproduce the issue. 

[~neospyk] can you please attach your full HBase Master classpath? It should 
be displayed at the very beginning of the process. By the way, I don't see 
any SLF4J warnings. 


was (Author: pboado):
[~neospyk] I just noticed the folder in your log 
{{/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/}}. That folder doesn't 
match our parcel structure (our parcel installs in 
APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0). Are you using our release parcels?  

[~jamestaylor] I rechecked (again) our parcels over a fresh CDH install and I 
think they are fine. 


> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Minor
>
> After activating the parcel, the HBase Master and Region Server could not 
> start. There seem to be problems with the shaded thin-client, because if it 
> is removed from the parcel, everything works great.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below I put startup log for the HBaseMaster
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-25 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4553:
-
Priority: Minor  (was: Major)

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Minor
>
> After activating the parcel, the HBase Master and Region Server could not 
> start. There seem to be problems with the shaded thin-client, because if it 
> is removed from the parcel, everything works great.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below I put startup log for the HBaseMaster
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-25 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340251#comment-16340251
 ] 

Pedro Boado commented on PHOENIX-4553:
--

[~neospyk] I just noticed the folder in your log 
{{/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/}}. That folder doesn't 
match our parcel structure (our parcel installs in 
APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0). Are you using our release parcels?  

[~jamestaylor] I rechecked (again) our parcels over a fresh CDH install and I 
think they are fine. 


> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Major
>
> After activating the parcel, the HBase Master and Region Server could not 
> start. There seem to be problems with the shaded thin-client, because if it 
> is removed from the parcel, everything works great.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below I put startup log for the HBaseMaster
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4556:
-
Description: Syncing 4.x-cdh5.11.2 with master - it was quite behind -  and 
version up to 4.14 .  (was: Syncing 4.x-cdh5.11.2 with master - it was quite 
behind -  .)

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  and version up to 
> 4.14 .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338575#comment-16338575
 ] 

Pedro Boado commented on PHOENIX-4556:
--

Can anyone please decompress and {{git am}} the attached file? 
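For anyone picking this up, the workflow is just untar-then-apply. The sketch below runs against a throwaway repo so it is self-contained; the patch filename is hypothetical, since the attachment's contents aren't listed here. With the real attachment you would only run the final {{tar xzf}} / {{git am}} pair inside the Phoenix checkout.

```shell
set -e
# Build a throwaway repo and a one-commit patch series to stand in for the
# real PHOENIX-4556-patch.tar.gz attachment.
git init -q demo && cd demo
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m base
echo change > file.txt
git add file.txt
git -c user.name=t -c user.email=t@t commit -q -m "PHOENIX-4556 sync"
git format-patch -1 >/dev/null        # writes 0001-PHOENIX-4556-sync.patch
tar czf PHOENIX-4556-patch.tar.gz 0001-PHOENIX-4556-sync.patch
git reset -q --hard HEAD~1            # drop the commit, keep the tarball

# The workflow from the comment: decompress, then apply with git am.
tar xzf PHOENIX-4556-patch.tar.gz
git -c user.name=t -c user.email=t@t am -q 0001-PHOENIX-4556-sync.patch
```

{{git am}} preserves the original author and commit message from each patch file, which is why a mailed patch series is preferable to a plain diff here.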

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4556:
-
Attachment: PHOENIX-4556-patch.tar.gz

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4556:
-
Summary: Sync branch 4.x-cdh5.11.2  (was: Sync branch 4.x-cdh-5.11.2)

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

