Re: HBase 2.0.1 with Hadoop 2.8.4 causes NoSuchMethodException

2018-07-02 Thread Duo Zhang
OK, it is HDFS-12574, which has also been backported to 2.8.4. Let's
revive HBASE-20244.


Re: HBase 2.0.1 with Hadoop 2.8.4 causes NoSuchMethodException

2018-07-02 Thread Duo Zhang
I think it is fine to just use the original Hadoop jars shipped with HBase
2.0.1 to communicate with HDFS 2.8.4 or above.

The async WAL hooks into the internals of DFSClient, so it is easily broken
when HDFS is upgraded.

I can take a look at the 2.8.4 problem, but for 3.x there is no
production-ready release yet, so there is no plan to fix it for now.


Re: HBase 2.0.1 with Hadoop 2.8.4 causes NoSuchMethodException

2018-07-02 Thread Sean Busbey
That's just a warning. Checking on HDFS-11644, it's only present in
Hadoop 2.9+, so its absence with HDFS 2.8.4 is expected.
(Presuming you are deploying on top of HDFS and not, e.g.,
LocalFileSystem.)

Are there any ERROR messages in the regionserver or master logs?
Could you post them somewhere and provide a link here?
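
For reference, the check that the warning says is being skipped is essentially
a capability probe on the WAL output stream. Below is a rough sketch of the
idea in plain Java; it is a hypothetical helper, not HBase's actual
implementation, and it assumes a Hadoop 2.9+ client where
org.apache.hadoop.fs.StreamCapabilities from HDFS-11644 exists at runtime.

import java.lang.reflect.Method;

import org.apache.hadoop.fs.FSDataOutputStream;

// Hypothetical helper sketching the hflush/hsync capability check the
// CommonFSUtils warning refers to. On Hadoop 2.8.x the StreamCapabilities
// class is absent, so the probe cannot run and HBase only logs the warning.
final class HflushHsyncProbe {
  static boolean supportsHflushAndHsync(FSDataOutputStream out) {
    try {
      Class<?> caps = Class.forName("org.apache.hadoop.fs.StreamCapabilities");
      if (!caps.isInstance(out)) {
        return false;
      }
      Method hasCapability = caps.getMethod("hasCapability", String.class);
      return (Boolean) hasCapability.invoke(out, "hflush")
          && (Boolean) hasCapability.invoke(out, "hsync");
    } catch (ClassNotFoundException e) {
      // Pre-HDFS-11644 Hadoop: there is nothing to check against.
      return false;
    } catch (ReflectiveOperationException e) {
      return false;
    }
  }
}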


Re: HBase 2.0.1 with Hadoop 2.8.4 causes NoSuchMethodException

2018-07-02 Thread Andrey Elenskiy
It's now stuck at Master Initializing and the regionservers are complaining
with:

18/07/02 21:12:20 WARN util.CommonFSUtils: Your Hadoop installation does
not include the StreamCapabilities class from HDFS-11644, so we will skip
checking if any FSDataOutputStreams actually support hflush/hsync. If you
are running on top of HDFS this probably just means you have an older
version and this can be ignored. If you are running on top of an alternate
FileSystem implementation you should manually verify that hflush and hsync
are implemented; otherwise you risk data loss and hard to diagnose errors
when our assumptions are violated.

I'm guessing HBase 2.0.1 on top of Hadoop 2.8.4 hasn't been completely ironed
out yet (at least not with stock Hadoop jars), unless I'm missing something.


Re: HBase 2.0.1 with Hadoop 2.8.4 causes NoSuchMethodException

2018-07-02 Thread Mich Talebzadeh
You are lucky that HBase 2.0.1 worked with Hadoop 2.8.

I tried HBase 2.0.1 with Hadoop 3.1 and there were endless problems with the
region server crashing because of a WAL file system issue.

thread - Hbase hbase-2.0.1, region server does not start on Hadoop 3.1

Decided to roll back to HBase 1.2.6, which works with Hadoop 3.1.

HTH

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.






Re: HBase 2.0.1 with Hadoop 2.8.4 causes NoSuchMethodException

2018-07-02 Thread Andrey Elenskiy

<property>
  <name>hbase.wal.provider</name>
  <value>filesystem</value>
</property>

Setting this seems to fix it, but it would be nice to actually be able to try
the fanout WAL with Hadoop 2.8.4.
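
For anyone else landing on this thread: the property above goes inside the
<configuration> element of hbase-site.xml on the HBase servers, and the
"filesystem" value selects the older FSHLog-based WAL implementation instead
of the default "asyncfs" one (see the WAL providers section of the reference
guide linked further down the thread). A minimal sketch of the file, with the
surrounding layout only as an example:

<configuration>
  <!-- Fall back from the default 'asyncfs' WAL provider to the FSHLog-based one. -->
  <property>
    <name>hbase.wal.provider</name>
    <value>filesystem</value>
  </property>
</configuration>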



HBase 2.0.1 with Hadoop 2.8.4 causes NoSuchMethodException

2018-07-02 Thread Andrey Elenskiy
Hello, we are running HBase 2.0.1 with the official Hadoop 2.8.4 jars and the
Hadoop 2.8.4 client (
http://central.maven.org/maven2/org/apache/hadoop/hadoop-client/2.8.4/).
We got the following exception on a regionserver, which brings it down:

18/07/02 18:51:06 WARN concurrent.DefaultPromise: An exception was
thrown by 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete()
java.lang.Error: Couldn't properly initialize access to HDFS
internals. Please update your WAL Provider to not make use of the
'asyncfs' provider. See HBASE-16110 for more information.
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:268)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
 at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
 at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
 at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
 at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
 at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
 at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
 at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:638)
 at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:676)
 at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:552)
 at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
 at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
 at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
 at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: java.lang.NoSuchMethodException:
org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
 at java.lang.Class.getDeclaredMethod(Class.java:2130)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
 ... 18 more

 FYI, we don't have encryption enabled. Let me know if you need more info
about our setup.
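
For context on where the NoSuchMethodException comes from: the "asyncfs" WAL
provider looks up DFSClient internals reflectively while its helper class
initializes, which is why the failure appears even with encryption disabled
and why it surfaces as a java.lang.Error wrapping the NoSuchMethodException.
The stripped-down sketch below is hypothetical code, not the actual HBase
implementation, and it assumes the Hadoop HDFS client jars are on the
classpath:

import java.lang.reflect.Method;

// Hypothetical sketch of the reflective probe pattern that breaks here: the
// private DFSClient method no longer exists with this signature in the Hadoop
// 2.8.4 jars (per HDFS-12574, identified in the reply at the top of this
// thread), so the lookup throws NoSuchMethodException and the caller wraps it
// in a java.lang.Error.
public final class DfsClientProbe {
  public static void main(String[] args) {
    try {
      Class<?> dfsClient = Class.forName("org.apache.hadoop.hdfs.DFSClient");
      Class<?> feInfo = Class.forName("org.apache.hadoop.fs.FileEncryptionInfo");
      Method m =
          dfsClient.getDeclaredMethod("decryptEncryptedDataEncryptionKey", feInfo);
      System.out.println("found " + m);
    } catch (ClassNotFoundException | NoSuchMethodException e) {
      // This is the outcome against Hadoop 2.8.4, regardless of whether
      // transparent encryption is enabled on the cluster.
      throw new Error("Couldn't properly initialize access to HDFS internals", e);
    }
  }
}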


Re: Hbase hbase-2.0.1, region server does not start on Hadoop 3.1

2018-07-02 Thread Mich Talebzadeh
Hi Sean,

Many thanks for the clarification. I read some notes on GitHub and in the
JIRAs on HBase and Hadoop 3 integration.

So my decision was to revert to an earlier stable version of HBase, as I did
not have the bandwidth to try to make HBase work with Hadoop 3+.

In fairness to Ted, he has always been very knowledgeable and helpful on the
forum, and being an engineer myself, I would not say his suggestion was far
off.

Kind Regards,

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.





Re: Hbase hbase-2.0.1, region server does not start on Hadoop 3.1

2018-07-02 Thread Sean Busbey
Hi Mich,

Please check out the section of our reference guide on Hadoop versions:

http://hbase.apache.org/book.html#hadoop

The short version is that there is not yet a Hadoop 3 version that the
HBase community considers appropriate for running HBase. If you'd like
to get into the details and workarounds, please join the dev@hbase
mailing list and bring it up there.

Ted, please stop suggesting that folks on the user list use anything other
than PMC-sanctioned releases of HBase.

On Sun, Jul 1, 2018 at 1:09 AM, Mich Talebzadeh
 wrote:
> Hi,
>
> What is the ETA with version of Hbase that will work with Hadoop 3.1 and
> may not require HA setup for HDFS?
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sun, 1 Jul 2018 at 00:26, Mich Talebzadeh 
> wrote:
>
>> Thanks Ted.
>>
>> Went back to hbase-1.2.6 that works OK with Hadoop 3.1
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
>> loss, damage or destruction of data or any other property which may arise
>> from relying on this email's technical content is explicitly disclaimed.
>> The author will in no case be liable for any monetary damages arising from
>> such loss, damage or destruction.
>>
>>
>>
>>
>> On Sun, 1 Jul 2018 at 00:15, Ted Yu  wrote:
>>
>>> Have you tried setting the value for the config to filesystem ?
>>>
>>> Cheers
>>>
>>> On Sat, Jun 30, 2018 at 4:07 PM, Mich Talebzadeh <
>>> mich.talebza...@gmail.com>
>>> wrote:
>>>
>>> > One way would be to set WAL outside of Hadoop environment. Will that
>>> work?
>>> >
>>> > The following did not work
>>> >
>>> > <property>
>>> >   <name>hbase.wal.provider</name>
>>> >   <value>multiwal</value>
>>> > </property>
>>> >
>>> >
>>> > Dr Mich Talebzadeh
>>> >
>>> >
>>> >
>>> > LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> >
>>> >
>>> >
>>> > http://talebzadehmich.wordpress.com
>>> >
>>> >
>>> > *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any
>>> > loss, damage or destruction of data or any other property which may
>>> arise
>>> > from relying on this email's technical content is explicitly disclaimed.
>>> > The author will in no case be liable for any monetary damages arising
>>> from
>>> > such loss, damage or destruction.
>>> >
>>> >
>>> >
>>> >
>>> > On Sat, 30 Jun 2018 at 23:36, Ted Yu  wrote:
>>> >
>>> > > Please read :
>>> > >
>>> > > http://hbase.apache.org/book.html#wal.providers
>>> > >
>>> > > On Sat, Jun 30, 2018 at 3:31 PM, Mich Talebzadeh <
>>> > > mich.talebza...@gmail.com>
>>> > > wrote:
>>> > >
>>> > > > Thanks
>>> > > >
>>> > > > In your point below
>>> > > >
>>> > > > …. or you can change default WAL to FSHLog.
>>> > > >
>>> > > > is there any configuration parameter to allow me to do so in
>>> > > > hbase-site.xml?
>>> > > >
>>> > > > Dr Mich Talebzadeh
>>> > > >
>>> > > >
>>> > > >
>>> > > > LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> > > >
>>> > > >
>>> > > >
>>> > > > http://talebzadehmich.wordpress.com
>>> > > >
>>> > > >
>>> > > > *Disclaimer:* Use it at your own risk. Any and all responsibility
>>> for
>>> > any
>>> > > > loss, damage or destruction of data or any other property which may
>>> > arise
>>> > > > from relying on this email's technical content is explicitly
>>> > disclaimed.
>>> > > > The author will in no case be liable for any monetary damages
>>> arising
>>> > > from
>>> > > > such loss, damage or destruction.
>>> > > >
>>> > > >
>>> > > >
>>> > > >
>>> > > > On Sat, 30 Jun 2018 at 23:25, Ted Yu  wrote:
>>> > > >
>>> > > > > Do you plan to deploy onto hadoop 3.1.x ?
>>> > > > >
>>> > > > > If so, you'd better build against hadoop 3.1.x yourself.
>>> > > > > You can either patch in HBASE-20244 and use asyncfswal.
>>> > > > > Or you can change default WAL to FSHLog.
>>> > > > >
>>> > > > > If you don't have to deploy onto hadoop 3.1.x, you can use hbase
>>> > 2.0.1
>>> > > > >
>>> > > > > FYI
>>> > > > >
>>

Re: HBase scan with setBatch for more than 1 column family

2018-07-02 Thread Ted Yu
Please see the following two constants defined in TableInputFormat:

  /** Column Family to Scan */
  public static final String SCAN_COLUMN_FAMILY =
      "hbase.mapreduce.scan.column.family";

  /** Space delimited list of columns and column families to scan. */
  public static final String SCAN_COLUMNS = "hbase.mapreduce.scan.columns";

CellCounter accepts these parameters. You can play with CellCounter to see
how they work.


FYI
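
To connect this to the Spark side of the question: those keys, plus the
hbase.mapreduce.scan.batchsize key mentioned in the original question, can be
set on the Configuration that is handed to newAPIHadoopRDD. The sketch below
is a rough Java example under assumed names (the table "my_table" and the
families "cf1"/"cf2" are made up), not a tested recipe:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public final class BatchedHBaseScan {
  public static void main(String[] args) {
    JavaSparkContext jsc =
        new JavaSparkContext(new SparkConf().setAppName("batched-hbase-scan"));

    Configuration conf = HBaseConfiguration.create();
    conf.set(TableInputFormat.INPUT_TABLE, "my_table");       // hypothetical table
    conf.set(TableInputFormat.SCAN_COLUMN_FAMILY, "cf1");     // scan a single family...
    // ...or pick specific columns across families (space delimited):
    // conf.set(TableInputFormat.SCAN_COLUMNS, "cf1:a cf2:b");
    conf.set(TableInputFormat.SCAN_BATCHSIZE, "100");         // same as Scan.setBatch(100)

    JavaPairRDD<ImmutableBytesWritable, Result> rdd = jsc.newAPIHadoopRDD(
        conf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);

    // With batching enabled a wide row can arrive as several partial Results that
    // share the same row key, so this counts Results rather than distinct rows.
    System.out.println("results: " + rdd.count());
    jsc.stop();
  }
}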



HBase scan with setBatch for more than 1 column family

2018-07-02 Thread revolutionisme
Hi,

I am using HBase with Spark, and as I have wide columns (> 1) I wanted to
use the setBatch(num) option so that I do not read all the columns of a row
at once but in batches.

I can create a scan and set the batch size I want with
TableInputFormat.SCAN_BATCHSIZE, but I am a bit confused about how this would
work with more than one column family.

Any help is appreciated.

PS: Any documentation or pointers on newAPIHadoopRDD would also be really
appreciated.

Thanks & Regards,
Biplob
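
For reference, on the multi-family part of the question: Scan.setBatch(n) caps
the number of cells returned in each Result, counted across all selected
column families, so a wide row simply comes back as several consecutive
partial Results that share the same row key. A small sketch using the plain
client API (the table, families and batch size are made-up example values):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class TwoFamilyBatchedScan {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("my_table"))) {
      Scan scan = new Scan()
          .addFamily(Bytes.toBytes("cf1"))
          .addFamily(Bytes.toBytes("cf2"));
      scan.setBatch(100);  // at most 100 cells per Result, across both families
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result partial : scanner) {
          // A single wide row shows up as a run of Results with the same row
          // key; stitch them back together if whole rows are needed.
          System.out.println(Bytes.toStringBinary(partial.getRow())
              + " -> " + partial.rawCells().length + " cells");
        }
      }
    }
  }
}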



--
Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-f4020416.html