Bump. Can secondary index committers/experts provide any insight into this? This is one of the features that encouraged us to use Phoenix. IMO, a global secondary index should be handled as an inverted index table, so I am unable to understand why it is failing on region splits.
Sent from my iPhone

> On Jan 6, 2016, at 11:14 PM, anil gupta <[email protected]> wrote:
>
> Hi All,
>
> I am using Phoenix 4.4, and I have created a global secondary index on one
> table. I am running a MapReduce job with 20 reducers to load data into this
> table (maybe I am doing 50 writes/second/reducer). The dataset is around
> 500K rows only. My MapReduce job is failing due to this exception:
>
> Caused by: org.apache.phoenix.execute.CommitException: java.sql.SQLException:
> ERROR 2008 (INT10): Unable to find cached index metadata. ERROR 2008
> (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.
> key=-413539871950113484
> region=BI.TABLE,\x80M*\xBFr\xFF\x05\x1DW\x9A`\x00\x19\x0C\xC0\x00X8,1452147216490.83086e8ff78b30f6e6c49e2deba71d6d.
> Index update failed
>     at org.apache.phoenix.execute.MutationState.commit(MutationState.java:444)
>     at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
>     at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
>     at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:84)
>     ... 14 more
>
> It seems like I am hitting
> https://issues.apache.org/jira/browse/PHOENIX-1718, but I don't have a heavy
> write or read load like wuchengzhi. I haven't done any tweaking of the
> Phoenix/HBase conf yet.
>
> What is the root cause of this error? What are the recommended changes in
> conf for this?
> --
> Thanks & Regards,
> Anil Gupta
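For anyone landing on this thread: a minimal sketch of the kind of server-side conf change that is sometimes suggested for this error, assuming the cached index metadata is expiring on the region server before the client's commit arrives (e.g. during a region split or a slow batch). The property below is Phoenix's server cache TTL; the value shown is illustrative, not a verified recommendation for this workload, and must be set in hbase-site.xml on the region servers followed by a restart:

```xml
<!-- hbase-site.xml on the HBase region servers -->
<!-- Raise the time-to-live of Phoenix's server-side cache (which holds the
     index metadata referenced in the error) from the 30000 ms default.
     Illustrative value only; tune to your commit latency. -->
<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>180000</value>
</property>
```

A larger TTL keeps the index metadata cached longer at the cost of holding server-side memory; reducing the client's commit batch size is the complementary client-side lever.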
