Hi Siva, I haven't tried this myself, so this is just a guess, but rebuilding the Phoenix stats (the guideposts kept in SYSTEM.STATS) after the bulk load may resolve this. Can you give it a try?
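Something along these lines from sqlline should rebuild them (MY_TABLE is just a placeholder for your table or view name, so substitute your own):

    UPDATE STATISTICS MY_TABLE;

If that doesn't change anything, clearing the stale guidepost rows for that table out of SYSTEM.STATS and re-running the count(*) would be the next thing I'd try.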
On Thursday, June 4, 2015, Siva <[email protected]> wrote:
> Hi Everyone,
>
> Facing a strange issue in Phoenix after bulk loading data into HBase. When we
> query count(*) on a Phoenix view we encounter the error below; a regular
> select works fine.
>
> java.lang.IllegalStateException: Expected single, aggregated KeyValue from
> coprocessor, but instead received
> keyvalues={lmalpinevancouver100~262/cf:chg_dt/1433442509439/Put/vlen=26/mvcc=0/value=2015-06-04 11:07:11.000000,
> lmalpinevancouver100~262/cf:dbname/1433442509439/Put/vlen=17/mvcc=0/value=lmalpinevancouver,
> lmalpinevancouver100~262/cf:fieldid/1433442509439/Put/vlen=3/mvcc=0/value=100,
> lmalpinevancouver100~262/cf:leadid/1433442509439/Put/vlen=3/mvcc=0/value=262,
> lmalpinevancouver100~262/cf:value/1433442509439/Put/vlen=3/mvcc=0/value=1st}.
> Ensure aggregating coprocessors are loaded correctly on server
>         at org.apache.phoenix.util.TupleUtil.getAggregateValue(TupleUtil.java:88)
>         at org.apache.phoenix.expression.aggregator.ClientAggregators.aggregate(ClientAggregators.java:54)
>         at org.apache.phoenix.iterate.GroupedAggregatingResultIterator.next(GroupedAggregatingResultIterator.java:75)
>         at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>         at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:739)
>         at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2429)
>         at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
>         at sqlline.SqlLine.print(SqlLine.java:1735)
>         at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
>         at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
>         at sqlline.SqlLine.dispatch(SqlLine.java:821)
>         at sqlline.SqlLine.begin(SqlLine.java:699)
>         at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
>         at sqlline.SqlLine.main(SqlLine.java:424)
>
> When I drop and recreate the view, it works fine. Did anyone face a similar
> issue?
>
> Thanks,
> Siva.
