Re: FilterList passing RowFilter and QualifierFilter

2017-01-13 Thread Prahalad kothwal
It works fine when I apply RowFilter and QualifierFilter and pass one
rowid and one qualifier.

When I pass more than one rowid and one or more qualifiers, it doesn't
seem to work.

With MUST_PASS_ALL it doesn't return anything, but with MUST_PASS_ONE it
returns incorrect results.



On Fri, Jan 13, 2017 at 11:34 PM, Ted Yu  wrote:

> According to your description, MUST_PASS_ONE should not be used.
>
> Please use MUST_PASS_ALL.
>
> Cheers
>
> On Fri, Jan 13, 2017 at 10:02 AM, Prahalad kothwal 
> wrote:
>
> > Yes through FilterList
> >
> > FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> > for(String rowid : rowIds){
> >
> >   rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
> > BinaryComparator(Bytes.toBytes(rowid)));
> > filterList.addFilter(rowFilter);
> > filter = new QualifierFilter(CompareFilter.CompareOp.EQUAL, new
> > BinaryComparator(Bytes.toBytes(qualifier)) );
> > filterList.addFilter(filter);
> > scan.setFilter(filterList);
> >
> > ResultScanner results = table.getScanner(scan);
> > for(Result result : results){
> > List<Cell> cells = result.listCells();
> >
> > From what I can see it's not limiting to the rowids I have passed and is
> > returning columns that were not passed in the QualifierFilter.
> >
> > Thanks,
> > Prahalad
> >
> >
> >
> > On Fri, Jan 13, 2017 at 8:33 PM, Ted Yu  wrote:
> >
> > > Can you illustrate how the two filters were combined (I assume through
> > > FilterList) ?
> > >
> > > I think the order of applying the filters should be RowFilter followed
> by
> > > QualifierFilter.
> > >
> > > Cheers
> > >
> > > On Fri, Jan 13, 2017 at 6:55 AM, Prahalad kothwal <
> kothwal...@gmail.com>
> > > wrote:
> > >
> > > > Hi ,
> > > >
> > > > Can I pass both RowFilter and QualifierFilter and expect HBase to
> > filter
> > > on
> > > > both rowkey and Qualifier ?
> > > >
> > > > I tried applying both filters but I am not getting accurate results.
> > > >
> > > >
> > > > Any help is appreciated .
> > > >
> > > > Thanks,
> > > > Prahalad
> > > >
> > >
> >
>
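
For readers hitting the same problem: a single flat FilterList cannot express
"any of these rowkeys AND this qualifier". With MUST_PASS_ALL, several RowFilters
comparing EQUAL against different rowkeys can never all match one row (hence no
results), and with MUST_PASS_ONE the QualifierFilter alone lets unrelated rows
through (hence the extra rows and columns). A minimal sketch of one way to nest
FilterLists for this case, using the standard HBase 1.x client API; the class and
method names below are illustrative, not from the thread:

import java.util.List;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowAndQualifierScan {

  /** Match rows whose key is ANY of rowIds, and within them only `qualifier`. */
  static Scan buildScan(List<String> rowIds, String qualifier) {
    // Inner list: OR across the requested rowkeys.
    FilterList anyRow = new FilterList(FilterList.Operator.MUST_PASS_ONE);
    for (String rowid : rowIds) {
      anyRow.addFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
          new BinaryComparator(Bytes.toBytes(rowid))));
    }

    // Outer list: the row test AND the qualifier test must both pass.
    Filter qualifierFilter = new QualifierFilter(CompareFilter.CompareOp.EQUAL,
        new BinaryComparator(Bytes.toBytes(qualifier)));
    FilterList both = new FilterList(FilterList.Operator.MUST_PASS_ALL);
    both.addFilter(anyRow);
    both.addFilter(qualifierFilter);

    Scan scan = new Scan();
    scan.setFilter(both);
    return scan;
  }
}

For exact rowkeys it is usually cheaper to issue a batch of Gets
(Table.get(List<Get>)) with addColumn(family, qualifier) on each Get, since a
filtered Scan still walks the whole table.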


Re: HBase major compaction schedule

2017-01-13 Thread Ted Yu
Please see bullet #7 in
http://hbase.apache.org/book.html#compaction.ratiobasedcompactionpolicy.algorithm

Search for 'hbase.hstore.compaction.ratio.offpeak' and you will see related
config parameters.

On Fri, Jan 13, 2017 at 8:43 PM, spats  wrote:

> Thanks Ted,
>
> Yes, reducing the jitter value should help keep major compactions within a
> certain range instead of spilling over the whole week. But the problem remains
> that a major compaction can still run on any day, irrespective of cluster load.
>
> Yes, cluster load will be lower on weekends, so I was exploring whether there
> is a way to run major compactions on weekends. We can disable the compaction
> schedule, but then we have to trigger the runs ourselves on all tables across
> the whole cluster; I'm exploring whether there is an easier way to do it.
>
>
>
> --
> View this message in context: http://apache-hbase.679495.n3.
> nabble.com/HBase-major-compaction-schedule-tp4085632p4085636.html
> Sent from the HBase User mailing list archive at Nabble.com.
>


Re: HBase major compaction schedule

2017-01-13 Thread spats
Thanks Ted,

Yes, reducing the jitter value should help keep major compactions within a
certain range instead of spilling over the whole week. But the problem remains
that a major compaction can still run on any day, irrespective of cluster load.

Yes, cluster load will be lower on weekends, so I was exploring whether there is
a way to run major compactions on weekends. We can disable the compaction
schedule, but then we have to trigger the runs ourselves on all tables across the
whole cluster; I'm exploring whether there is an easier way to do it.



--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/HBase-major-compaction-schedule-tp4085632p4085636.html
Sent from the HBase User mailing list archive at Nabble.com.
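
One way to get the behavior spats describes above (no automatic major
compactions, an explicit run only at quiet times such as weekends) is to set
hbase.hregion.majorcompaction to 0 in hbase-site.xml, which disables the
periodic schedule, and trigger the compactions from an external scheduled job.
A rough sketch using the Java Admin API; the class name and the
"compact every table" policy are illustrative assumptions, not something
prescribed in this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WeekendMajorCompact {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask the cluster to major-compact every table; the request is queued
      // and processed asynchronously by the regionservers.
      for (TableName table : admin.listTableNames()) {
        admin.majorCompact(table);
      }
    }
  }
}

Run from a cron job on Saturday, this sidesteps the periodic scheduler entirely;
the trade-off is that a missed run means no major compaction until the next
weekend.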


Re: HBase major compaction schedule

2017-01-13 Thread Ted Yu
For #1, see the following config:

<property>
  <name>hbase.hregion.majorcompaction.jitter</name>
  <value>0.50</value>
  <description>A multiplier applied to hbase.hregion.majorcompaction to
  cause compaction to occur a given amount of time either side of
  hbase.hregion.majorcompaction. The smaller the number, the closer the
  compactions will happen to the hbase.hregion.majorcompaction interval.</description>
</property>
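
(For example, with hbase.hregion.majorcompaction left at its 7-day default and
jitter at 0.50, each region's next major compaction lands at a random point
roughly between 3.5 and 10.5 days after the previous one; a smaller jitter
narrows that window around 7 days.)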

For #2, can I ask the reason for running on a specific day?
Is it because traffic to the cluster is known to dip on that day?

On Fri, Jan 13, 2017 at 6:33 PM, spats  wrote:

> A few questions regarding major compaction; it would be great if someone
> could answer them.
>
> 1. If the major compaction schedule is set to, say, 7 days, on which day
> (Sat, Sun, etc.) is major compaction triggered?
> 2. Is there a way to configure major compaction to run on a specific day, e.g. Saturday?
> 3. Does running major compaction manually affect the major compaction
> schedule? E.g., if I run major compaction manually on Wednesday, does the
> scheduled compaction get reset to run the next Wednesday?
>
>
>
> --
> View this message in context: http://apache-hbase.679495.n3.
> nabble.com/HBase-major-compaction-schedule-tp4085632.html
> Sent from the HBase User mailing list archive at Nabble.com.
>


HBase major compaction schedule

2017-01-13 Thread spats
A few questions regarding major compaction; it would be great if someone could
answer them.

1. If the major compaction schedule is set to, say, 7 days, on which day (Sat,
Sun, etc.) is major compaction triggered?
2. Is there a way to configure major compaction to run on a specific day, e.g. Saturday?
3. Does running major compaction manually affect the major compaction
schedule? E.g., if I run major compaction manually on Wednesday, does the
scheduled compaction get reset to run the next Wednesday?



--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/HBase-major-compaction-schedule-tp4085632.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Region compaction failed

2017-01-13 Thread Ted Yu
w.r.t. #2, I did a quick search for bloom related fixes.

I found HBASE-13123 but it was in 1.0.2

Planning to spend more time in the next few days.

On Fri, Jan 13, 2017 at 5:29 PM, Pankaj kr  wrote:

> Thanks Ted for replying.
>
> Actually the issue happened in a production environment and there are many
> HFiles in that store (we can't get the file). As we don't log the name of the
> corrupted file, is there any way to find out which file is corrupted?
>
> Block encoding is "NONE", table schema has bloom filter as "ROW",
> compression type is "Snappy" and durability is SKIP_WAL.
>
>
> Regards,
> Pankaj
>
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Friday, January 13, 2017 10:30 PM
> To: d...@hbase.apache.org
> Cc: user@hbase.apache.org
> Subject: Re: Region compaction failed
>
> In the second case, the error happened when writing hfile. Can you track
> down the path of the new file so that further investigation can be done ?
>
> Does the table use any encoding ?
>
> Thanks
>
> > On Jan 13, 2017, at 2:47 AM, Pankaj kr  wrote:
> >
> > Hi,
> >
> > We met a weird issue in our production environment.
> >
> > Region compaction is always failing with the following errors:
> >
> > 1.
> > 2017-01-10 02:19:10,427 | ERROR | regionserver/RS-HOST/RS-IP:
> PORT-longCompactions-1483858654825 | Compaction failed Request =
> regionName=., storeName=XYZ, fileCount=6, fileSize=100.7 M (3.2 M, 20.8
> M, 15.1 M, 20.9 M, 21.0 M, 19.7 M), priority=-5, time=1747414906352088 |
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$
> CompactionRunner.doCompaction(CompactSplitThread.java:562)
> > java.io.IOException: ScanWildcardColumnTracker.checkColumn ran into a
> column actually smaller than the previous column:  XXX
> >at org.apache.hadoop.hbase.regionserver.
> ScanWildcardColumnTracker.checkVersions(ScanWildcardColumnTracker.
> java:114)
> >at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.
> match(ScanQueryMatcher.java:457)
> >at org.apache.hadoop.hbase.regionserver.StoreScanner.
> next(StoreScanner.java:551)
> >at org.apache.hadoop.hbase.regionserver.compactions.
> Compactor.performCompaction(Compactor.java:328)
> >at org.apache.hadoop.hbase.regionserver.compactions.
> DefaultCompactor.compact(DefaultCompactor.java:104)
> >at org.apache.hadoop.hbase.regionserver.
> DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.
> java:133)
> >at org.apache.hadoop.hbase.regionserver.HStore.compact(
> HStore.java:1243)
> >at org.apache.hadoop.hbase.regionserver.HRegion.compact(
> HRegion.java:1895)
> >at org.apache.hadoop.hbase.regionserver.
> CompactSplitThread$CompactionRunner.doCompaction(
> CompactSplitThread.java:546)
> >at org.apache.hadoop.hbase.regionserver.
> CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
> >at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> >at java.util.concurrent.ThreadPoolExecuto
> >
> > 2.
> > 2017-01-10 02:33:53,009 | ERROR | regionserver/RS-HOST/RS-IP:
> PORT-longCompactions-1483686810953 | Compaction failed Request =
> regionName=YY, storeName=ABC, fileCount=6, fileSize=125.3 M (20.9 M,
> 20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M), priority=-68,
> time=1748294500157323 | org.apache.hadoop.hbase.regionserver.
> CompactSplitThread$CompactionRunner.doCompaction(
> CompactSplitThread.java:562)
> > java.io.IOException: Non-increasing Bloom keys: XX
> after 
> >at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.
> appendGeneralBloomfilter(StoreFile.java:911)
> >at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.
> append(StoreFile.java:947)
> >at org.apache.hadoop.hbase.regionserver.compactions.
> Compactor.performCompaction(Compactor.java:337)
> >at org.apache.hadoop.hbase.regionserver.compactions.
> DefaultCompactor.compact(DefaultCompactor.java:104)
> >at org.apache.hadoop.hbase.regionserver.
> DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.
> java:133)
> >at org.apache.hadoop.hbase.regionserver.HStore.compact(
> HStore.java:1243)
> >at org.apache.hadoop.hbase.regionserver.HRegion.compact(
> HRegion.java:1895)
> >at org.apache.hadoop.hbase.regionserver.
> CompactSplitThread$CompactionRunner.doCompaction(
> CompactSplitThread.java:546)
> >at org.apache.hadoop.hbase.regionserver.
> CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
> >at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> >at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> 

Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread jeff saremi
here you go:

https://issues.apache.org/jira/browse/HBASE-17467



From: Ted Yu 
Sent: Friday, January 13, 2017 4:02 PM
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException

bq. i can create a pull request for them

That would be wonderful.

Please log a JIRA, polish the C# example and attach to the JIRA.

In hbase, we're not at the stage of reviewing / committing pull request yet.

On Fri, Jan 13, 2017 at 3:45 PM, jeff saremi  wrote:

> Sorry, Ted, for wasting your time.
>
> It happened that I was using the wrong port for this.
>
> What a misleading error for such a common mistake! But that's Thrift.
>
>
> On another note, I have now converted the DemoClient from hbase-examples
> to C# and I have the generated files as well. If there's interest I can
> create a pull request for them.
>
>
>
> 
> From: jeff saremi 
> Sent: Friday, January 13, 2017 2:11 PM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
> Thanks Ted.
>
> I looked at this. We didn't know that a multiplexing protocol existed until
> you mentioned it to us.
> We're using a stock thrift server that is shipped with hbase.
> If you could perhaps point us to where we should be checking, I'd appreciate it.
>
>
>
> 
> From: Ted Yu 
> Sent: Friday, January 13, 2017 1:34 PM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
> I haven't touched C# for a decade.
>
> After a brief search, I found:
> http://stackoverflow.com/questions/17843749/apache-thrift-client-run-time-issues-in-c-sharp
>
>
>
> Can you take a look at the answer to see if it is relevant ?
>
> Cheers
>
> On Fri, Jan 13, 2017 at 11:10 AM, jeff saremi 
> wrote:
>
> > The result is the same. OutofMemoryException.
> >
> > I again ran my C++ client to make sure nothing weird is going on server
> > side.
> > I found the thrift compiler here: http://www-us.apache.org/dist/
> > thrift/0.9.3/
> >
> > I regenerated all files and deleted all old ones.
> >
> > Here's a sample of a generated file for you to see that 0.9.3 is used:
> >
> >
> > /**
> >  * Autogenerated by Thrift Compiler (0.9.3)
> >  *
> >  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
> >  *  @generated
> >  */
> > using System;
> > using System.Collections;
> > using System.Collections.Generic;
> > using System.Text;
> > using System.IO;
> > using Thrift;
> > using Thrift.Collections;
> > using System.Runtime.Serialization;
> > using Thrift.Protocol;
> > using Thrift.Transport;
> >
> > public partial class Hbase {
> >   public interface Iface {
> >
> >
> >
> > 
> > From: jeff saremi 
> > Sent: Friday, January 13, 2017 10:39 AM
> > To: user@hbase.apache.org
> > Subject: Re: HBase Thrift Client for C#: OutofMemoryException
> >
> >
> > oh i see. sure i'll do that and report back.
> >
> >
> > 
> > From: Ted Yu 
> > 

Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread Ted Yu
bq. i can create a pull request for them

That would be wonderful.

Please log a JIRA, polish the C# example and attach to the JIRA.

In hbase, we're not at the stage of reviewing / committing pull request yet.

On Fri, Jan 13, 2017 at 3:45 PM, jeff saremi  wrote:

> Sorry, Ted, for wasting your time.
>
> It happened that I was using the wrong port for this.
>
> What a misleading error for such a common mistake! But that's Thrift.
>
>
> On another note, I have now converted the DemoClient from hbase-examples
> to C# and I have the generated files as well. If there's interest I can
> create a pull request for them.
>
>
>
> 
> From: jeff saremi 
> Sent: Friday, January 13, 2017 2:11 PM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
> Thanks Ted.
>
> I looked at this. We didn't know that a multiplexing protocol existed until
> you mentioned it to us.
> We're using a stock thrift server that is shipped with hbase.
> If you could perhaps point us to where we should be checking, I'd appreciate it.
>
>
>
> 
> From: Ted Yu 
> Sent: Friday, January 13, 2017 1:34 PM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
> I haven't touched C# for a decade.
>
> After a brief search, I found:
> http://stackoverflow.com/questions/17843749/apache-
> thrift-client-run-time-issues-in-c-sharp
>
>
>
> Can you take a look at the answer to see if it is relevant ?
>
> Cheers
>
> On Fri, Jan 13, 2017 at 11:10 AM, jeff saremi 
> wrote:
>
> > The result is the same. OutofMemoryException.
> >
> > I again ran my C++ client to make sure nothing weird is going on server
> > side.
> > I found the thrift compiler here: http://www-us.apache.org/dist/
> > thrift/0.9.3/
> >
> > I regenerated all files and deleted all old ones.
> >
> > Here's a sample of a generated file for you to see that 0.9.3 is used:
> >
> >
> > /**
> >  * Autogenerated by Thrift Compiler (0.9.3)
> >  *
> >  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
> >  *  @generated
> >  */
> > using System;
> > using System.Collections;
> > using System.Collections.Generic;
> > using System.Text;
> > using System.IO;
> > using Thrift;
> > using Thrift.Collections;
> > using System.Runtime.Serialization;
> > using Thrift.Protocol;
> > using Thrift.Transport;
> >
> > public partial class Hbase {
> >   public interface Iface {
> >
> >
> >
> > 
> > From: jeff saremi 
> > Sent: Friday, January 13, 2017 10:39 AM
> > To: user@hbase.apache.org
> > Subject: Re: HBase Thrift Client for C#: OutofMemoryException
> >
> >
> > oh i see. sure i'll do that and report back.
> >
> >
> > 
> > From: Ted Yu 
> > Sent: Friday, January 13, 2017 10:32 AM
> > To: user@hbase.apache.org
> > Subject: Re: HBase Thrift Client for C#: OutofMemoryException
> >
> > I am not sure about compatibility between thrift 0.10.0 and 0.9.3
> >
> > Is it possible for you to locate 0.9.3 thrift compiler and try again ?
> >
> > On Fri, Jan 13, 2017 at 10:27 AM, jeff saremi 
> > wrote:
> >
> > > I used the following thrift compiler. I did not see any mentions of
> > > versions.
> > > http://www.apache.org/dyn/closer.cgi?path=/thrift/0.10.

Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread jeff saremi
Sorry, Ted, for wasting your time.

It happened that I was using the wrong port for this.

What a misleading error for such a common mistake! But that's Thrift.


On another note, I have now converted the DemoClient from hbase-examples to
C# and I have the generated files as well. If there's interest I can create a
pull request for them.




From: jeff saremi 
Sent: Friday, January 13, 2017 2:11 PM
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException

Thanks Ted.

I looked at this. We didn't know that a multiplexing protocol existed until you
mentioned it to us.
We're using a stock thrift server that is shipped with hbase.
If you could perhaps point us to where we should be checking, I'd appreciate it.




From: Ted Yu 
Sent: Friday, January 13, 2017 1:34 PM
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException

I haven't touched C# for a decade.

After a brief search, I found:
http://stackoverflow.com/questions/17843749/apache-thrift-client-run-time-issues-in-c-sharp

Can you take a look at the answer to see if it is relevant ?

Cheers

On Fri, Jan 13, 2017 at 11:10 AM, jeff saremi 
wrote:

> The result is the same. OutofMemoryException.
>
> I again ran my C++ client to make sure nothing weird is going on server
> side.
> I found the thrift compiler here: http://www-us.apache.org/dist/
> thrift/0.9.3/
>
> I regenerated all files and deleted all old ones.
>
> Here's a sample of a generated file for you to see that 0.9.3 is used:
>
>
> /**
>  * Autogenerated by Thrift Compiler (0.9.3)
>  *
>  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
>  *  @generated
>  */
> using System;
> using System.Collections;
> using System.Collections.Generic;
> using System.Text;
> using System.IO;
> using Thrift;
> using Thrift.Collections;
> using System.Runtime.Serialization;
> using Thrift.Protocol;
> using Thrift.Transport;
>
> public partial class Hbase {
>   public interface Iface {
>
>
>
> 
> From: jeff saremi 
> Sent: Friday, January 13, 2017 10:39 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
>
> oh i see. sure i'll do that and report back.
>
>
> 
> From: Ted Yu 
> Sent: Friday, January 13, 2017 10:32 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
> I am not sure about compatibility between thrift 0.10.0 and 0.9.3
>
> Is it possible for you to locate 0.9.3 thrift compiler and try again ?
>
> On Fri, Jan 13, 2017 at 10:27 AM, jeff saremi 
> wrote:
>
> > I used the following thrift compiler. I did not see any mentions of
> > versions.
> > http://www.apache.org/dyn/closer.cgi?path=/thrift/0.10.
> 0/thrift-0.10.0.exe
>
>
> >
> >
> > Here's the stack trace. I am running ANYCPU platform:
> >
> >
> 

Re: StochasticLoadBalancer questions

2017-01-13 Thread Timothy Brown
Regarding #1, my main concern is that if we poll the region load at a "bad"
time and get back an abnormally high or low value, the balancer could
overreact. For example, if a region's most recent readRequestsCount is 100 and
you've been seeing 5 for the last 9 times you polled, the "average" that comes
out is 52.5 instead of 14.5. This could just be a temporary spike in requests
to a region, making it seem much worse than it is likely to be going forward,
and could cause a region to move when it is actually unnecessary.
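
To make the two numbers above concrete, here is a small self-contained sketch
(plain Java; the weighting mirrors the description in this thread rather than
quoting the HBase source) contrasting the recency-weighted combination of the
queued samples with a simple arithmetic mean:

public class RegionLoadAverages {
  // Recency-weighted: the newest sample gets weight 1/2, the next 1/4, and so
  // on; the oldest sample keeps the residual weight (same as the one before it).
  static double recencyWeighted(double[] newestFirst) {
    double sum = 0.0, weight = 0.5;
    for (int i = 0; i < newestFirst.length; i++) {
      boolean last = (i == newestFirst.length - 1);
      double w = last ? weight * 2 : weight;  // last sample absorbs the residual
      sum += w * newestFirst[i];
      weight /= 2;
    }
    return sum;
  }

  static double simpleMean(double[] samples) {
    double sum = 0.0;
    for (double s : samples) sum += s;
    return sum / samples.length;
  }

  public static void main(String[] args) {
    // One spike of 100 requests after nine quiet polls of 5 requests each.
    double[] newestFirst = {100, 5, 5, 5, 5, 5, 5, 5, 5, 5};
    System.out.println(recencyWeighted(newestFirst)); // 52.5
    System.out.println(simpleMean(newestFirst));      // 14.5
  }
}

With the recency weighting, half of the final cost comes from the single most
recent poll, which is why one spiky sample can swing the balancer.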

On Fri, Jan 13, 2017 at 2:10 PM, Ted Yu  wrote:

> For #2, you're more than welcome to attach patch on the JIRA.
>
> For #1, last time I tried to trace which JIRA introduced the formula but
> ended up with one Elliott did which just moved that line of code.
> I can spend more time in the future on this.
>
> What downside have you observed for #1 ?
>
> Cheers
>
> On Fri, Jan 13, 2017 at 2:07 PM, Timothy Brown 
> wrote:
>
> > I tried it out on our staging cluster and saw that the total number of
> > requests per region server was a bit more balanced with our current weights
> for
> > the read and write costs. I did not attempt to calculate the exact
> requests
> > per second but rather looked at a relative rate by averaging the increase
> > in reads and writes over the interval that the RegionLoad is currently
> > polled. This should have the same desired effect of balancing the number
> of
> > requests across the cluster. If you don't mind, I would like to take a
> stab
> > at the JIRA you've created.
> >
> > For #1, any idea if this is the desired behavior?
> >
> > Thanks,
> > Tim
> >
> > On Fri, Jan 13, 2017 at 10:27 AM, Ted Yu  wrote:
> >
> > > Logged HBASE-17462 for #2.
> > >
> > > FYI
> > >
> > > On Thu, Jan 12, 2017 at 8:49 AM, Ted Yu  wrote:
> > >
> > > > For #2, I think MemstoreSizeCostFunction belongs to the same category
> > if
> > > > we are to adopt moving average.
> > > >
> > > > Some factors to consider:
> > > >
> > > > The data structure used by StochasticLoadBalancer should be concise.
> > The
> > > > number of regions in a cluster can be expected to approach 1 million.
> > We
> > > > cannot afford to store long history of read / write requests in
> master.
> > > >
> > > > Efficiency of cost calculation should be high - there're many cost
> > > > functions the balancer goes through, it is expected for each cost
> > > function
> > > > to return quickly. Otherwise we would not come up with proper region
> > > > movement plan(s) in time.
> > > >
> > > > Cheers
> > > >
> > > > On Wed, Jan 11, 2017 at 5:51 PM, Ted Yu  wrote:
> > > >
> > > >> For #2, I think it makes sense to try out using request rates for
> cost
> > > >> calculation.
> > > >>
> > > >> If the experiment result turns out to be better, we can consider
> using
> > > >> such measure.
> > > >>
> > > >> Thanks
> > > >>
> > > >> On Wed, Jan 11, 2017 at 5:34 PM, Timothy Brown  >
> > > >> wrote:
> > > >>
> > > >>> Hi,
> > > >>>
> > > >>> I have a couple of questions about the StochasticLoadBalancer.
> > > >>>
> > >>> 1) In CostFromRegionLoadFunction.getRegionLoadCost the cost
> > weights
> > > >>> later samples of the RegionLoad more than previous ones. For
> example,
> > > >>> with
> > > >>> a queue size of 4 it would be (.5 * load1 + .25*load2 + .125*load3
> +
> > > >>> .125*load4). Is this the intended behavior?
> > > >>>
> > > >>> 2) Would it make more sense to calculate the ReadRequestCost and
> > > >>> WriteRequestCost as rates? Right now it looks like the cost is just
> > > based
> > > >>> off the total number of read/write requests a region has gotten
> over
> > > its
> > > >>> lifetime.
> > > >>>
> > > >>> -Tim
> > > >>>
> > > >>
> > > >>
> > > >
> > >
> >
>


Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread jeff saremi
Thanks Ted.

I looked at this. We didn't know that a multiplexing protocol existed until you
mentioned it to us.
We're using a stock thrift server that is shipped with hbase.
If you could perhaps point us to where we should be checking, I'd appreciate it.




From: Ted Yu 
Sent: Friday, January 13, 2017 1:34 PM
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException

I haven't touched C# for a decade.

After a brief search, I found:
http://stackoverflow.com/questions/17843749/apache-thrift-client-run-time-issues-in-c-sharp

Can you take a look at the answer to see if it is relevant ?

Cheers

On Fri, Jan 13, 2017 at 11:10 AM, jeff saremi 
wrote:

> The result is the same. OutofMemoryException.
>
> I again ran my C++ client to make sure nothing weird is going on server
> side.
> I found the thrift compiler here: http://www-us.apache.org/dist/
> thrift/0.9.3/
>
> I regenerated all files and deleted all old ones.
>
> Here's a sample of a generated file for you to see that 0.9.3 is used:
>
>
> /**
>  * Autogenerated by Thrift Compiler (0.9.3)
>  *
>  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
>  *  @generated
>  */
> using System;
> using System.Collections;
> using System.Collections.Generic;
> using System.Text;
> using System.IO;
> using Thrift;
> using Thrift.Collections;
> using System.Runtime.Serialization;
> using Thrift.Protocol;
> using Thrift.Transport;
>
> public partial class Hbase {
>   public interface Iface {
>
>
>
> 
> From: jeff saremi 
> Sent: Friday, January 13, 2017 10:39 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
>
> oh i see. sure i'll do that and report back.
>
>
> 
> From: Ted Yu 
> Sent: Friday, January 13, 2017 10:32 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
> I am not sure about compatibility between thrift 0.10.0 and 0.9.3
>
> Is it possible for you to locate 0.9.3 thrift compiler and try again ?
>
> On Fri, Jan 13, 2017 at 10:27 AM, jeff saremi 
> wrote:
>
> > I used the following thrift compiler. I did not see any mentions of
> > versions.
> > http://www.apache.org/dyn/closer.cgi?path=/thrift/0.10.
> 0/thrift-0.10.0.exe
>
> >
> >
> > Here's the stack trace. I am running ANYCPU platform:
> >
> >
> > private  string ReadStringBody(int size)
> > {
> > byte[] buf = new byte[size];
> >
> >
> > size = 1213486160
> >
> >
> >
> >at Thrift.Protocol.TBinaryProtocol.ReadStringBody(Int32 size) in
> > D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 383
> >at Thrift.Protocol.TBinaryProtocol.ReadMessageBegin() in
> > D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 239
> >at Hbase.Client.recv_getTableNames() in
> D:\Projects\HBaseThrift\Hbase.cs:line
> > 1418
> >at Hbase.Client.getTableNames() in D:\Projects\HBaseThrift\Hbase.
> cs:line
> > 1391
> >at DemoClient.Main(String[] args) in D:\Projects\HBaseThriftClient\
> DemoClient.cs:line
> > 97
> >at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly,
> > String[] args)
> >at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence
> > assemblySecurity, String[] args)
> >at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
> >at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
> >at System.Threading.ExecutionContext.RunInternal(ExecutionContext
> > executionContext, ContextCallback callback, Object state, Boolean
> > preserveSyncCtx)
> >at System.Threading.ExecutionContext.Run(ExecutionContext
> > executionContext, ContextCallback callback, Object state, Boolean
> > preserveSyncCtx)
> >at System.Threading.ExecutionContext.Run(ExecutionContext
> > executionContext, ContextCallback callback, Object state)
> >at System.Threading.ThreadHelper.ThreadStart()
> >
> >
> > 
> > From: Ted Yu 
> > Sent: Friday, January 13, 2017 10:00 AM
> > To: user@hbase.apache.org
> > Subject: Re: HBase Thrift Client for C#: OutofMemoryException
> >
> > Which thrift version did you use to generate c# code ?
> >
> > hbase 

Re: StochasticLoadBalancer questions

2017-01-13 Thread Ted Yu
For #2, you're more than welcome to attach patch on the JIRA.

For #1, last time I tried to trace which JIRA introduced the formula but
ended up with one Elliott did which just moved that line of code.
I can spend more time in the future on this.

What downside have you observed for #1 ?

Cheers

On Fri, Jan 13, 2017 at 2:07 PM, Timothy Brown  wrote:

> I tried it out on our staging cluster and saw that the total number of
> requests per region server was a bit more balanced with our current weights for
> the read and write costs. I did not attempt to calculate the exact requests
> per second but rather looked at a relative rate by averaging the increase
> in reads and writes over the interval that the RegionLoad is currently
> polled. This should have the same desired effect of balancing the number of
> requests across the cluster. If you don't mind, I would like to take a stab
> at the JIRA you've created.
>
> For #1, any idea if this is the desired behavior?
>
> Thanks,
> Tim
>
> On Fri, Jan 13, 2017 at 10:27 AM, Ted Yu  wrote:
>
> > Logged HBASE-17462 for #2.
> >
> > FYI
> >
> > On Thu, Jan 12, 2017 at 8:49 AM, Ted Yu  wrote:
> >
> > > For #2, I think MemstoreSizeCostFunction belongs to the same category
> if
> > > we are to adopt moving average.
> > >
> > > Some factors to consider:
> > >
> > > The data structure used by StochasticLoadBalancer should be concise.
> The
> > > number of regions in a cluster can be expected to approach 1 million.
> We
> > > cannot afford to store long history of read / write requests in master.
> > >
> > > Efficiency of cost calculation should be high - there're many cost
> > > functions the balancer goes through, it is expected for each cost
> > function
> > > to return quickly. Otherwise we would not come up with proper region
> > > movement plan(s) in time.
> > >
> > > Cheers
> > >
> > > On Wed, Jan 11, 2017 at 5:51 PM, Ted Yu  wrote:
> > >
> > >> For #2, I think it makes sense to try out using request rates for cost
> > >> calculation.
> > >>
> > >> If the experiment result turns out to be better, we can consider using
> > >> such measure.
> > >>
> > >> Thanks
> > >>
> > >> On Wed, Jan 11, 2017 at 5:34 PM, Timothy Brown 
> > >> wrote:
> > >>
> > >>> Hi,
> > >>>
> > >>> I have a couple of questions about the StochasticLoadBalancer.
> > >>>
> > >>> 1) In CostFromRegionLoadFunction.getRegionLoadCost the cost
> weights
> > >>> later samples of the RegionLoad more than previous ones. For example,
> > >>> with
> > >>> a queue size of 4 it would be (.5 * load1 + .25*load2 + .125*load3 +
> > >>> .125*load4). Is this the intended behavior?
> > >>>
> > >>> 2) Would it make more sense to calculate the ReadRequestCost and
> > >>> WriteRequestCost as rates? Right now it looks like the cost is just
> > based
> > >>> off the total number of read/write requests a region has gotten over
> > its
> > >>> lifetime.
> > >>>
> > >>> -Tim
> > >>>
> > >>
> > >>
> > >
> >
>


Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread Ted Yu
I haven't touched C# for a decade.

After a brief search, I found:
http://stackoverflow.com/questions/17843749/apache-thrift-client-run-time-issues-in-c-sharp

Can you take a look at the answer to see if it is relevant ?

Cheers

On Fri, Jan 13, 2017 at 11:10 AM, jeff saremi 
wrote:

> The result is the same. OutofMemoryException.
>
> I again ran my C++ client to make sure nothing weird is going on server
> side.
> I found the thrift compiler here: http://www-us.apache.org/dist/
> thrift/0.9.3/
>
> I regenerated all files and deleted all old ones.
>
> Here's a sample of a generated file for you to see that 0.9.3 is used:
>
>
> /**
>  * Autogenerated by Thrift Compiler (0.9.3)
>  *
>  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
>  *  @generated
>  */
> using System;
> using System.Collections;
> using System.Collections.Generic;
> using System.Text;
> using System.IO;
> using Thrift;
> using Thrift.Collections;
> using System.Runtime.Serialization;
> using Thrift.Protocol;
> using Thrift.Transport;
>
> public partial class Hbase {
>   public interface Iface {
>
>
>
> 
> From: jeff saremi 
> Sent: Friday, January 13, 2017 10:39 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
>
> oh i see. sure i'll do that and report back.
>
>
> 
> From: Ted Yu 
> Sent: Friday, January 13, 2017 10:32 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
> I am not sure about compatibility between thrift 0.10.0 and 0.9.3
>
> Is it possible for you to locate 0.9.3 thrift compiler and try again ?
>
> On Fri, Jan 13, 2017 at 10:27 AM, jeff saremi 
> wrote:
>
> > I used the following thrift compiler. I did not see any mentions of
> > versions.
> > http://www.apache.org/dyn/closer.cgi?path=/thrift/0.10.
> 0/thrift-0.10.0.exe
> >
> >
> > Here's the stack trace. I am running ANYCPU platform:
> >
> >
> > private  string ReadStringBody(int size)
> > {
> > byte[] buf = new byte[size];
> >
> >
> > size = 1213486160
> >
> >
> >
> >at Thrift.Protocol.TBinaryProtocol.ReadStringBody(Int32 size) in
> > D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 383
> >at Thrift.Protocol.TBinaryProtocol.ReadMessageBegin() in
> > D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 239
> >at Hbase.Client.recv_getTableNames() in
> D:\Projects\HBaseThrift\Hbase.cs:line
> > 1418
> >at Hbase.Client.getTableNames() in D:\Projects\HBaseThrift\Hbase.
> cs:line
> > 1391
> >at DemoClient.Main(String[] args) in D:\Projects\HBaseThriftClient\
> DemoClient.cs:line
> > 97
> >at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly,
> > String[] args)
> >at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence
> > assemblySecurity, String[] args)
> >at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
> >at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
> >at System.Threading.ExecutionContext.RunInternal(ExecutionContext
> > executionContext, ContextCallback callback, Object state, Boolean
> > preserveSyncCtx)
> >at System.Threading.ExecutionContext.Run(ExecutionContext
> > executionContext, ContextCallback callback, Object state, Boolean
> > preserveSyncCtx)
> >at System.Threading.ExecutionContext.Run(ExecutionContext
> > executionContext, ContextCallback callback, Object state)
> >at System.Threading.ThreadHelper.ThreadStart()
> >
> >
> > 
> > From: Ted Yu 
> > Sent: Friday, January 13, 2017 10:00 AM
> > To: user@hbase.apache.org
> > Subject: Re: HBase Thrift Client for C#: OutofMemoryException
> >
> > Which thrift version did you use to generate c# code ?
> >
> > hbase uses 0.9.3
> >
> > Can you pastebin the whole stack trace for the exception ?
> >
> > I assume you run your code on 64-bit machine.
> >
> > Cheers
> >
> > On Fri, Jan 13, 2017 at 9:53 AM, jeff saremi 
> > wrote:
> >
> > > I have cloned the latest thrift and hbase code. Used thrift generator
> to
> > > generate c# code from hbase-thrift\src\main\
> resources\org\apache\hadoop\
> > hbase\thrift.
> > > Then created a single VS solution with the generated code, the thrift
> lib
> > > for c# (thrift\lib\csharp\src\Thrift.csproj) and i also added a
> > > DemoClient (hbase-examples) converted from c++ to c#. When I run that I
> > > keep getting OutofMemoryException with not a lot of other useful
> > > information. I have done the same process for C++ and the democlient
> code
> > > from hbase-examples runs with no issues at all.
> > >
> > >
> > 

Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread Ted Yu
I am not sure about compatibility between thrift 0.10.0 and 0.9.3

Is it possible for you to locate 0.9.3 thrift compiler and try again ?

On Fri, Jan 13, 2017 at 10:27 AM, jeff saremi 
wrote:

> I used the following thrift compiler. I did not see any mentions of
> versions.
> http://www.apache.org/dyn/closer.cgi?path=/thrift/0.10.0/thrift-0.10.0.exe
>
>
> Here's the stack trace. I am running ANYCPU platform:
>
>
> private  string ReadStringBody(int size)
> {
> byte[] buf = new byte[size];
>
>
> size = 1213486160
>
>
>
>at Thrift.Protocol.TBinaryProtocol.ReadStringBody(Int32 size) in
> D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 383
>at Thrift.Protocol.TBinaryProtocol.ReadMessageBegin() in
> D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 239
>at Hbase.Client.recv_getTableNames() in 
> D:\Projects\HBaseThrift\Hbase.cs:line
> 1418
>at Hbase.Client.getTableNames() in D:\Projects\HBaseThrift\Hbase.cs:line
> 1391
>at DemoClient.Main(String[] args) in 
> D:\Projects\HBaseThriftClient\DemoClient.cs:line
> 97
>at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly,
> String[] args)
>at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence
> assemblySecurity, String[] args)
>at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
>at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
>at System.Threading.ExecutionContext.RunInternal(ExecutionContext
> executionContext, ContextCallback callback, Object state, Boolean
> preserveSyncCtx)
>at System.Threading.ExecutionContext.Run(ExecutionContext
> executionContext, ContextCallback callback, Object state, Boolean
> preserveSyncCtx)
>at System.Threading.ExecutionContext.Run(ExecutionContext
> executionContext, ContextCallback callback, Object state)
>at System.Threading.ThreadHelper.ThreadStart()
>
>
> 
> From: Ted Yu 
> Sent: Friday, January 13, 2017 10:00 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
> Which thrift version did you use to generate c# code ?
>
> hbase uses 0.9.3
>
> Can you pastebin the whole stack trace for the exception ?
>
> I assume you run your code on 64-bit machine.
>
> Cheers
>
> On Fri, Jan 13, 2017 at 9:53 AM, jeff saremi 
> wrote:
>
> > I have cloned the latest thrift and hbase code. Used thrift generator to
> > generate c# code from hbase-thrift\src\main\resources\org\apache\hadoop\
> hbase\thrift.
> > Then created a single VS solution with the generated code, the thrift lib
> > for c# (thrift\lib\csharp\src\Thrift.csproj) and i also added a
> > DemoClient (hbase-examples) converted from c++ to c#. When I run that I
> > keep getting OutofMemoryException with not a lot of other useful
> > information. I have done the same process for C++ and the democlient code
> > from hbase-examples runs with no issues at all.
> >
> >
> > here's the client code:
> >
> > TTransport socket = new TSocket(args[0], Convert.ToInt32(args[1]));
> > TTransport transport = new TBufferedTransport((TStreamTransport)socket);
> > TProtocol protocol = new TBinaryProtocol(transport);
> > Hbase.Client client = new Hbase.Client(protocol);
> > List tables = client.getTableNames();
> >
> > The last line is where the exception is thrown. thanks
> >
>


Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread jeff saremi
I did an x64 compilation.
I get a

{"Cannot read, Remote side has closed"}   Thrift.TException 
{Thrift.Transport.TTransportException}
with no further details




From: jeff saremi 
Sent: Friday, January 13, 2017 10:27 AM
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException


I used the following thrift compiler. I did not see any mentions of versions.
http://www.apache.org/dyn/closer.cgi?path=/thrift/0.10.0/thrift-0.10.0.exe


Here's the stack trace. I am running ANYCPU platform:


private  string ReadStringBody(int size)
{
byte[] buf = new byte[size];


size = 1213486160



   at Thrift.Protocol.TBinaryProtocol.ReadStringBody(Int32 size) in 
D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 383
   at Thrift.Protocol.TBinaryProtocol.ReadMessageBegin() in 
D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 239
   at Hbase.Client.recv_getTableNames() in 
D:\Projects\HBaseThrift\Hbase.cs:line 1418
   at Hbase.Client.getTableNames() in D:\Projects\HBaseThrift\Hbase.cs:line 1391
   at DemoClient.Main(String[] args) in 
D:\Projects\HBaseThriftClient\DemoClient.cs:line 97
   at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] 
args)
   at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence 
assemblySecurity, String[] args)
   at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
   at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext 
executionContext, ContextCallback callback, Object state, Boolean 
preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, 
ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, 
ContextCallback callback, Object state)
   at System.Threading.ThreadHelper.ThreadStart()



From: Ted Yu 
Sent: Friday, January 13, 2017 10:00 AM
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException

Which thrift version did you use to generate c# code ?

hbase uses 0.9.3

Can you pastebin the whole stack trace for the exception ?

I assume you run your code on 64-bit machine.

Cheers

On Fri, Jan 13, 2017 at 9:53 AM, jeff saremi  wrote:

> I have cloned the latest thrift and hbase code. Used thrift generator to
> generate c# code from 
> hbase-thrift\src\main\resources\org\apache\hadoop\hbase\thrift.
> Then created a single VS solution with the generated code, the thrift lib
> for c# (thrift\lib\csharp\src\Thrift.csproj) and i also added a
> DemoClient (hbase-examples) converted from c++ to c#. When I run that I
> keep getting OutofMemoryException with not a lot of other useful
> information. I have done the same process for C++ and the democlient code
> from hbase-examples runs with no issues at all.
>
>
> here's the client code:
>
> TTransport socket = new TSocket(args[0], Convert.ToInt32(args[1]));
> TTransport transport = new TBufferedTransport((TStreamTransport)socket);
> TProtocol protocol = new TBinaryProtocol(transport);
> Hbase.Client client = new Hbase.Client(protocol);
> List tables = client.getTableNames();
>
> The last line is where the exception is thrown. thanks
>
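
A side note on the "misleading" failure, given the wrong-port resolution
elsewhere in this thread: the frame length in the stack trace above,
size = 1213486160, is itself the clue. Read as four big-endian ASCII bytes it
spells "HTTP", which is what a TBinaryProtocol client sees when it is pointed at
a port that speaks HTTP (for example a web UI or REST port) rather than the
Thrift RPC port. A quick way to decode such a value, sketched in plain Java and
purely illustrative:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FrameSizeDecoder {
  public static void main(String[] args) {
    int size = 1213486160;                       // value from the stack trace
    byte[] bytes = ByteBuffer.allocate(4).putInt(size).array();
    // Prints "HTTP": the "string length" was really the start of an HTTP response.
    System.out.println(new String(bytes, StandardCharsets.US_ASCII));
  }
}

So an absurd length from ReadMessageBegin usually means the other end is not
speaking Thrift at all.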


Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread jeff saremi
I used the following thrift compiler. I did not see any mentions of versions.
http://www.apache.org/dyn/closer.cgi?path=/thrift/0.10.0/thrift-0.10.0.exe


Here's the stack trace. I am running ANYCPU platform:


private  string ReadStringBody(int size)
{
byte[] buf = new byte[size];


size = 1213486160



   at Thrift.Protocol.TBinaryProtocol.ReadStringBody(Int32 size) in 
D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 383
   at Thrift.Protocol.TBinaryProtocol.ReadMessageBegin() in 
D:\repos\thrift\lib\csharp\src\Protocol\TBinaryProtocol.cs:line 239
   at Hbase.Client.recv_getTableNames() in 
D:\Projects\HBaseThrift\Hbase.cs:line 1418
   at Hbase.Client.getTableNames() in D:\Projects\HBaseThrift\Hbase.cs:line 1391
   at DemoClient.Main(String[] args) in 
D:\Projects\HBaseThriftClient\DemoClient.cs:line 97
   at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] 
args)
   at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence 
assemblySecurity, String[] args)
   at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
   at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext 
executionContext, ContextCallback callback, Object state, Boolean 
preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, 
ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, 
ContextCallback callback, Object state)
   at System.Threading.ThreadHelper.ThreadStart()



From: Ted Yu 
Sent: Friday, January 13, 2017 10:00 AM
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException

Which thrift version did you use to generate c# code ?

hbase uses 0.9.3

Can you pastebin the whole stack trace for the exception ?

I assume you run your code on 64-bit machine.

Cheers

On Fri, Jan 13, 2017 at 9:53 AM, jeff saremi  wrote:

> I have cloned the latest thrift and hbase code. Used thrift generator to
> generate c# code from 
> hbase-thrift\src\main\resources\org\apache\hadoop\hbase\thrift.
> Then created a single VS solution with the generated code, the thrift lib
> for c# (thrift\lib\csharp\src\Thrift.csproj) and i also added a
> DemoClient (hbase-examples) converted from c++ to c#. When I run that I
> keep getting OutofMemoryException with not a lot of other useful
> information. I have done the same process for C++ and the democlient code
> from hbase-examples runs with no issues at all.
>
>
> here's the client code:
>
> TTransport socket = new TSocket(args[0], Convert.ToInt32(args[1]));
> TTransport transport = new TBufferedTransport((TStreamTransport)socket);
> TProtocol protocol = new TBinaryProtocol(transport);
> Hbase.Client client = new Hbase.Client(protocol);
> List tables = client.getTableNames();
>
> The last line is where the exception is thrown. thanks
>


Re: StochasticLoadBalancer questions

2017-01-13 Thread Ted Yu
Logged HBASE-17462 for #2.

FYI

On Thu, Jan 12, 2017 at 8:49 AM, Ted Yu  wrote:

> For #2, I think MemstoreSizeCostFunction belongs to the same category if
> we are to adopt moving average.
>
> Some factors to consider:
>
> The data structure used by StochasticLoadBalancer should be concise. The
> number of regions in a cluster can be expected to approach 1 million. We
> cannot afford to store long history of read / write requests in master.
>
> Efficiency of cost calculation should be high - there're many cost
> functions the balancer goes through, it is expected for each cost function
> to return quickly. Otherwise we would not come up with proper region
> movement plan(s) in time.
>
> Cheers
>
> On Wed, Jan 11, 2017 at 5:51 PM, Ted Yu  wrote:
>
>> For #2, I think it makes sense to try out using request rates for cost
>> calculation.
>>
>> If the experiment result turns out to be better, we can consider using
>> such measure.
>>
>> Thanks
>>
>> On Wed, Jan 11, 2017 at 5:34 PM, Timothy Brown 
>> wrote:
>>
>>> Hi,
>>>
>>> I have a couple of questions about the StochasticLoadBalancer.
>>>
>>> 1) In CostFromRegionLoadFunction.getRegionLoadCost the cost weights
>>> later samples of the RegionLoad more than previous ones. For example,
>>> with
>>> a queue size of 4 it would be (.5 * load1 + .25*load2 + .125*load3 +
>>> .125*load4). Is this the intended behavior?
>>>
>>> 2) Would it make more sense to calculate the ReadRequestCost and
>>> WriteRequestCost as rates? Right now it looks like the cost is just based
>>> off the total number of read/write requests a region has gotten over its
>>> lifetime.
>>>
>>> -Tim
>>>
>>
>>
>


Re: HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread Ted Yu
Which thrift version did you use to generate c# code ?

hbase uses 0.9.3

Can you pastebin the whole stack trace for the exception ?

I assume you run your code on 64-bit machine.

Cheers

On Fri, Jan 13, 2017 at 9:53 AM, jeff saremi  wrote:

> I have cloned the latest thrift and hbase code. Used thrift generator to
> generate c# code from 
> hbase-thrift\src\main\resources\org\apache\hadoop\hbase\thrift.
> Then created a single VS solution with the generated code, the thrift lib
> for c# (thrift\lib\csharp\src\Thrift.csproj) and i also added a
> DemoClient (hbase-examples) converted from c++ to c#. When I run that I
> keep getting OutofMemoryException with not a lot of other useful
> information. I have done the same process for C++ and the democlient code
> from hbase-examples runs with no issues at all.
>
>
> here's the client code:
>
> TTransport socket = new TSocket(args[0], Convert.ToInt32(args[1]));
> TTransport transport = new TBufferedTransport((TStreamTransport)socket);
> TProtocol protocol = new TBinaryProtocol(transport);
> Hbase.Client client = new Hbase.Client(protocol);
> List tables = client.getTableNames();
>
> The last line is where the exception is thrown. thanks
>


Re: FilterList passing RowFilter and QualifierFilter

2017-01-13 Thread Ted Yu
According to your description, MUST_PASS_ONE should not be used.

Please use MUST_PASS_ALL.

Cheers

On Fri, Jan 13, 2017 at 10:02 AM, Prahalad kothwal 
wrote:

> Yes through FilterList
>
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> for(String rowid : rowIds){
>
>   rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
> BinaryComparator(Bytes.toBytes(rowid)));
> filterList.addFilter(rowFilter);
> filter = new QualifierFilter(CompareFilter.CompareOp.EQUAL, new
> BinaryComparator(Bytes.toBytes(qualifier)) );
> filterList.addFilter(filter);
> scan.setFilter(filterList);
>
> ResultScanner results = table.getScanner(scan);
> for(Result result : results){
> List<Cell> cells = result.listCells();
>
> From what I can see it's not limiting to the rowids I have passed and is
> returning columns that were not passed in the QualifierFilter.
>
> Thanks,
> Prahalad
>
>
>
> On Fri, Jan 13, 2017 at 8:33 PM, Ted Yu  wrote:
>
> > Can you illustrate how the two filters were combined (I assume through
> > FilterList) ?
> >
> > I think the order of applying the filters should be RowFilter followed by
> > QualifierFilter.
> >
> > Cheers
> >
> > On Fri, Jan 13, 2017 at 6:55 AM, Prahalad kothwal 
> > wrote:
> >
> > > Hi ,
> > >
> > > Can I pass both RowFilter and QualifierFilter and expect HBase to
> filter
> > on
> > > both rowkey and Qualifier ?
> > >
> > > I tried applying both filters but I am not getting accurate results.
> > >
> > >
> > > Any help is appreciated .
> > >
> > > Thanks,
> > > Prahalad
> > >
> >
>


Re: FilterList passing RowFilter and QualifierFilter

2017-01-13 Thread Prahalad kothwal
Yes through FilterList

FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
for(String rowid : rowIds){

  rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
BinaryComparator(Bytes.toBytes(rowid)));
filterList.addFilter(rowFilter);
filter = new QualifierFilter(CompareFilter.CompareOp.EQUAL, new
BinaryComparator(Bytes.toBytes(qualifier)) );
filterList.addFilter(filter);
scan.setFilter(filterList);

ResultScanner results = table.getScanner(scan);
for(Result result : results){
List<Cell> cells = result.listCells();

From what I can see it's not limiting to the rowids I have passed and is
returning columns that were not passed in the QualifierFilter.

Thanks,
Prahalad



On Fri, Jan 13, 2017 at 8:33 PM, Ted Yu  wrote:

> Can you illustrate how the two filters were combined (I assume through
> FilterList) ?
>
> I think the order of applying the filters should be RowFilter followed by
> QualifierFilter.
>
> Cheers
>
> On Fri, Jan 13, 2017 at 6:55 AM, Prahalad kothwal 
> wrote:
>
> > Hi ,
> >
> > Can I pass both RowFilter and QualifierFilter and expect HBase to filter
> on
> > both rowkey and Qualifier ?
> >
> > I tried applying both filters but I am not getting accurate results.
> >
> >
> > Any help is appreciated .
> >
> > Thanks,
> > Prahalad
> >
>


HBase Thrift Client for C#: OutofMemoryException

2017-01-13 Thread jeff saremi
I have cloned the latest thrift and hbase code. Used thrift generator to 
generate c# code from 
hbase-thrift\src\main\resources\org\apache\hadoop\hbase\thrift. Then created a 
single VS solution with the generated code, the thrift lib for c# 
(thrift\lib\csharp\src\Thrift.csproj) and i also added a DemoClient 
(hbase-examples) converted from c++ to c#. When I run that I keep getting 
OutofMemoryException with not a lot of other useful information. I have done 
the same process for C++ and the democlient code from hbase-examples runs with 
no issues at all.


here's the client code:

TTransport socket = new TSocket(args[0], Convert.ToInt32(args[1]));
TTransport transport = new TBufferedTransport((TStreamTransport)socket);
TProtocol protocol = new TBinaryProtocol(transport);
Hbase.Client client = new Hbase.Client(protocol);
List tables = client.getTableNames();

The last line is where the exception is thrown. thanks


Re: SocketTimeoutException on regionservers

2017-01-13 Thread Tulasi Paradarami
Any comments?

On Thu, Jan 12, 2017 at 10:18 AM, Tulasi Paradarami <
tulasi.krishn...@gmail.com> wrote:

> Hi,
>
> I noticed that RegionServers are intermittently raising the following
> exceptions, which manifest as request timeouts on the client side. HDFS is in
> a healthy state and there are no corrupted blocks (from "hdfs fsck" results).
> Datanodes were not out of service when these errors occurred, and GC on the
> datanodes is usually around 0.3 sec.
>
> Also, when these exceptions occur, HDFS metric "Send Data Packet Blocked
> On Network Average Time" tends to go up.
>
> Here are the configured values for some of the relevant parameters:
> dfs.client.socket-timeout: 10s
> dfs.datanode.socket.write.timeout: 10s
> dfs.namenode.avoid.read.stale.datanode: true
> dfs.namenode.avoid.write.stale.datanode: true
> dfs.datanode.max.xcievers: 8192
>
> Any pointers towards what could be causing these exceptions are
> appreciated. Thanks.
>
> CDH 5.7.2
> HBase 1.2.0
>
> ---> Regionserver logs
>
> 2017-01-11 19:19:04,940 WARN  [PriorityRpcServer.handler=3,queue=1,port=60020]
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.SocketTimeoutException: 1 millis timeout while waiting for
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
> local=/datanode3:27094 remote=/datanode2:50010]
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(
> SocketIOWithTimeout.java:164)
> ...
>
> 2017-01-11 19:19:04,995 WARN  
> [PriorityRpcServer.handler=11,queue=1,port=60020]
> hdfs.DFSClient: Connection failure: Failed to connect to /datanode2:50010
> for file /hbase/data/default//ec9ca
> java.net.SocketTimeoutException: 1 millis timeout while waiting for
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
> local=/datanode3:27107 remote=/datanode2:50010]
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(
> SocketIOWithTimeout.java:164)
> at org.apache.hadoop.net.SocketInputStream.read(
> SocketInputStream.java:161)
> at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.
> readChannelFully(PacketReceiver.java:258)
> at org.apache.hadoop.hdfs.protocol.datatransfer.
> PacketReceiver.doReadFully(PacketReceiver.java:209)
> at org.apache.hadoop.hdfs.protocol.datatransfer.
> PacketReceiver.doRead(PacketReceiver.java:171)
> at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.
> receiveNextPacket(PacketReceiver.java:102)
> at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(
> RemoteBlockReader2.java:207)
> at org.apache.hadoop.hdfs.RemoteBlockReader2.read(
> RemoteBlockReader2.java:156)
> at org.apache.hadoop.hdfs.BlockReaderUtil.readAll(
> BlockReaderUtil.java:32)
> at org.apache.hadoop.hdfs.RemoteBlockReader2.readAll(
> RemoteBlockReader2.java:386)
> at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(
> DFSInputStream.java:1193)
> at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(
> DFSInputStream.java:1112)
> at org.apache.hadoop.hdfs.DFSInputStream.pread(
> DFSInputStream.java:1473)
> at org.apache.hadoop.hdfs.DFSInputStream.read(
> DFSInputStream.java:1432)
> at org.apache.hadoop.fs.FSDataInputStream.read(
> FSDataInputStream.java:89)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.
> positionalReadWithExtra(HFileBlock.java:752)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$
> AbstractFSReader.readAtOffset(HFileBlock.java:1448)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.
> readBlockDataInternal(HFileBlock.java:1648)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.
> readBlockData(HFileBlock.java:1532)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(
> HFileReaderV2.java:445)
> at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$
> BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:261)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$
> AbstractScannerV2.seekTo(HFileReaderV2.java:642)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$
> AbstractScannerV2.reseekTo(HFileReaderV2.java:622)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.
> reseekAtOrAfter(StoreFileScanner.java:314)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.
> reseek(StoreFileScanner.java:226)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.
> enforceSeek(StoreFileScanner.java:437)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.
> pollRealKV(KeyValueHeap.java:340)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.
> generalizedSeek(KeyValueHeap.java:296)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.
> requestSeek(KeyValueHeap.java:261)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.
> reseek(StoreScanner.java:806)
> at 
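
For reference, the two timeout values listed above are expressed in milliseconds
in hdfs-site.xml, so (assuming "10s" here means 10000 ms) the entries would look
roughly like the sketch below; the values are taken from this thread, not
verified against the cluster. Note that 10000 ms is noticeably lower than the
stock defaults (60000 ms for dfs.client.socket-timeout, 480000 ms for
dfs.datanode.socket.write.timeout).

<!-- hdfs-site.xml sketch; values assumed from the thread -->
<property>
  <name>dfs.client.socket-timeout</name>
  <value>10000</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>10000</value>
</property>
<property>
  <name>dfs.namenode.avoid.read.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.avoid.write.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <!-- legacy name; Hadoop 2.x prefers dfs.datanode.max.transfer.threads -->
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>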

Re: FilterList passing RowFilter and QualifierFilter

2017-01-13 Thread Ted Yu
Can you illustrate how the two filters were combined (I assume through
FilterList) ?

I think the order of applying the filters should be RowFilter followed by
QualifierFilter.

Cheers

On Fri, Jan 13, 2017 at 6:55 AM, Prahalad kothwal 
wrote:

> Hi ,
>
> Can I pass both RowFilter and QualifierFilter and expect HBase to filter on
> both rowkey and Qualifier ?
>
> I tried applying both filters but I am not getting accurate results.
>
>
> Any help is appreciated .
>
> Thanks,
> Prahalad
>


FilterList passing RowFilter and QualifierFilter

2017-01-13 Thread Prahalad kothwal
Hi ,

Can I pass both RowFilter and QualifierFilter and expect HBase to filter on
both the row key and the qualifier?

I tried applying both filters but I am not getting accurate results.


Any help is appreciated.

Thanks,
Prahalad


Re: Region compaction failed

2017-01-13 Thread Ted Yu
In the second case, the error happened while writing the new hfile. Can you 
track down the path of that file so that further investigation can be done?

Does the table use any encoding?

Thanks

> On Jan 13, 2017, at 2:47 AM, Pankaj kr  wrote:
> 
> Hi,
> 
> We ran into a weird issue in our production environment.
> 
> Region compaction is always failing with the following errors:
> 
> 1.
> 2017-01-10 02:19:10,427 | ERROR | 
> regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483858654825 | Compaction 
> failed Request = regionName=., storeName=XYZ, fileCount=6, fileSize=100.7 
> M (3.2 M, 20.8 M, 15.1 M, 20.9 M, 21.0 M, 19.7 M), priority=-5, 
> time=1747414906352088 | 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
> java.io.IOException: ScanWildcardColumnTracker.checkColumn ran into a column 
> actually smaller than the previous column:  XXX
>at 
> org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkVersions(ScanWildcardColumnTracker.java:114)
>at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:457)
>at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:551)
>at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:328)
>at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
>at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
>at 
> org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
>at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
>at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
>at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>at java.util.concurrent.ThreadPoolExecuto
> 
> 2.
> 2017-01-10 02:33:53,009 | ERROR | 
> regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483686810953 | Compaction 
> failed Request = regionName=YY, storeName=ABC, fileCount=6, 
> fileSize=125.3 M (20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M), 
> priority=-68, time=1748294500157323 | 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
> java.io.IOException: Non-increasing Bloom keys: XX after 
> 
>at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Writer.appendGeneralBloomfilter(StoreFile.java:911)
>at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:947)
>at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:337)
>at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
>at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
>at 
> org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
>at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
>at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
>at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>at java.lang.Thread.run(Thread.java:745)
> 
> HBase version : 1.0.2
> 
> We have verified all the HFiles in the store using HFilePrettyPrinter with the 
> "-k" (checkrow) option; all reports are normal. A full scan is also successful.
> We don't have access to the actual data, and the customer may not agree to 
> share it.
> 
> Has anyone faced this issue? Any pointers will be much appreciated.
> 
> Thanks & Regards,
> Pankaj


Region compaction failed

2017-01-13 Thread Pankaj kr
Hi,

We ran into a weird issue in our production environment.

Region compaction is always failing with the following errors:

1.
2017-01-10 02:19:10,427 | ERROR | 
regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483858654825 | Compaction 
failed Request = regionName=., storeName=XYZ, fileCount=6, fileSize=100.7 M 
(3.2 M, 20.8 M, 15.1 M, 20.9 M, 21.0 M, 19.7 M), priority=-5, 
time=1747414906352088 | 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
java.io.IOException: ScanWildcardColumnTracker.checkColumn ran into a column 
actually smaller than the previous column:  XXX
at 
org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkVersions(ScanWildcardColumnTracker.java:114)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:457)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:551)
at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:328)
at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
at 
org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecuto

2.
2017-01-10 02:33:53,009 | ERROR | 
regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483686810953 | Compaction 
failed Request = regionName=YY, storeName=ABC, fileCount=6, fileSize=125.3 
M (20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M), priority=-68, 
time=1748294500157323 | 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
java.io.IOException: Non-increasing Bloom keys: XX after 

at 
org.apache.hadoop.hbase.regionserver.StoreFile$Writer.appendGeneralBloomfilter(StoreFile.java:911)
at 
org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:947)
at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:337)
at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
at 
org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

HBase version : 1.0.2

We have verified all the HFiles in the store using HFilePrettyPrinter with the 
"-k" (checkrow) option; all reports are normal. A full scan is also successful.
We don't have access to the actual data, and the customer may not agree to 
share it.

Has anyone faced this issue? Any pointers will be much appreciated.

Thanks & Regards,
Pankaj
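
For anyone wanting to reproduce the verification described above: the row-order
check is HFilePrettyPrinter's -k (checkrow) option, invoked here via the bundled
bin/hbase "hfile" command, and -m additionally prints the file's metadata,
including the Bloom filter details relevant to the "Non-increasing Bloom keys"
failure. The path below is a placeholder under the usual /hbase/data layout, not
a path from this thread:

hbase hfile -k -f /hbase/data/<namespace>/<table>/<region>/<family>/<hfile>   # row-order check
hbase hfile -m -f /hbase/data/<namespace>/<table>/<region>/<family>/<hfile>   # print file meta, incl. Bloom filter info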