I think it is for HBase itself, but I'll have to wait for more details, as
they haven't shared the source code with us. I imagine they want to do a
bunch more testing and other process work.
Saad
On Wed, Feb 28, 2018 at 9:45 PM Ted Yu wrote:
Did the vendor say whether the patch is for HBase or some other component?
Thanks
On Wed, Feb 28, 2018 at 6:33 PM, Saad Mufti wrote:
> Thanks for the feedback, so you guys are right the bucket cache is getting
> disabled due to too many I/O errors from the underlying
Thanks, see my other reply. We have a patch from the vendor, but until it
gets promoted to open source we still don't know the real underlying cause.
You're right, though: the cache got disabled due to too many I/O errors in
a short timespan.
Cheers.
Saad
On Mon, Feb 26, 2018 at 12:24 AM,
Thanks for the feedback. You guys are right: the bucket cache is getting
disabled due to too many I/O errors from the underlying files making up the
bucket cache. We still do not know the exact underlying cause, but we are
working with our vendor to test a patch they provided that seems to have
resolved the issue.
From the logs, it seems there was an issue with the file used by the bucket
cache. Probably the volume where the file was mounted had some issues.
If you can confirm that, then this issue should be pretty straightforward.
If not, let us know; we can help.
Regards
Ram
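A quick way to confirm the kind of volume problem described above is to look
for block-layer I/O errors in the kernel log on the affected instance. This
is only a sketch: the sample log line and the device name xvdb are
hypothetical, and on a live node you would pipe the output of "dmesg -T"
into the grep instead of the sample variable.

```shell
# Count kernel-log lines reporting block I/O errors. On a live node:
#   dmesg -T | grep -ci 'i/o error'
# Demo on a captured sample line (device name xvdb is hypothetical):
sample='blk_update_request: I/O error, dev xvdb, sector 204800'
printf '%s\n' "$sample" | grep -ci 'i/o error'
```

A nonzero count would point at the volume rather than at HBase itself.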
On Sun, Feb 25,
Here is the related code that disables the bucket cache:

    if (this.ioErrorStartTime > 0) {
      if (cacheEnabled && (now - ioErrorStartTime) > this.ioErrorsTolerationDuration) {
        LOG.error("IO errors duration time has exceeded " + ioErrorsTolerationDuration +
            "ms, disabling cache, please check your IOEngine");
        disableCache();
      }
    } else {
      this.ioErrorStartTime = now;
    }
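The toleration window used in that check is configurable in hbase-site.xml.
A minimal sketch, assuming the property name used by the BucketCache source
(60000 ms is the default; raising it is only an option if brief I/O hiccups
on the cache volumes are expected):

    <property>
      <name>hbase.bucketcache.ioengine.errors.tolerated.duration</name>
      <value>60000</value>
    </property>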
Hi,
I am running an HBase 1.3.1 cluster on AWS EMR. The bucket cache is
configured to use two attached 50 GB EBS volumes, and I provisioned the
bucket cache at a bit less than their combined capacity, 98 GB per
instance, to be on the safe side. My tables have column families set to