Re: HBase not replaying WAL file

2018-05-22 Thread Zach York
Nandakishore,

Are the edits persisted to the WAL? The RS will only be able to recover
edits that were persisted to the WAL.
Therefore, if writes happened that had not been synced/flushed to the WAL
when the node was terminated, it is expected that there could be a loss of
that data.
However, from the client side, I don't believe the write would have
succeeded (someone can correct me on this as I don't have concrete
evidence) if the data hadn't been fsynced/flushed to the WAL.
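For what it's worth, a minimal sketch of pinning durability on the client side (names are illustrative) so that an acknowledged write implies the edit reached the WAL:

{code}
Put put = new Put(Bytes.toBytes("row1"));
put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
// SYNC_WAL syncs the edit to the WAL before acking the client;
// SKIP_WAL trades that durability away and risks exactly this kind of loss.
put.setDurability(Durability.SYNC_WAL);
table.put(put);
{code}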

Thanks,
Zach

On Sun, May 20, 2018 at 11:27 PM, Nandakishore Arvapaly <
nandak.arvap...@bigcommerce.com> wrote:

> Hi Team,
>
> I have an issue with my HBase cluster.
>
> I have hosted an HBase cluster with Phoenix on EMR emr-5.8.0. I have 1
> master and 5 slave 4xlarge nodes. I’m losing data when querying a
> table after a region server dies. Here are the steps I followed.
>
> 1. Launched the cluster with the HDFS replication factor set to 3.
> 2. Created the tables and loaded the data using Phoenix.
> 3. Cross-checked the data I loaded into the tables, and I can see the data.
> 4. Intentionally terminated an EC2 machine that is part of the cluster, i.e.,
> killed a region server.
> 5. I could see EMR resizing and bringing up the new node.
> 6. When I query the table after the whole cluster is stable again, which
> usually takes 5-10 minutes, I see that some of the data that was on the dead
> RS is missing.
>
> I believe HBase replays the WAL once the new node is brought up, and I can
> also see the WAL file in the new RS’s directory on HDFS. But somehow I don’t
> see the complete data in the table.
>
> Could you please let me know what could possibly be going wrong? Also, please
> let me know if I have to set any properties.
>
> I would be happy to provide more details if you need.
>
> Thanks,
> Nandakishore.
>


[jira] [Resolved] (HBASE-20608) Remove build option of error prone profile for branch-1 after HBASE-12350

2018-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-20608.

   Resolution: Fixed
 Assignee: Andrew Purtell  (was: Mike Drob)
Fix Version/s: 1.5.0

Committed my hack. We can open another issue for a more nuanced fix.

> Remove build option of error prone profile for branch-1 after HBASE-12350
> -
>
> Key: HBASE-20608
> URL: https://issues.apache.org/jira/browse/HBASE-20608
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.4.4, 1.4.5
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
>
After HBASE-12350, the error prone profile was introduced/backported to branch-1 
and branch-2. However, branch-1 still builds with JDK 7, which is incompatible 
with the error prone profile, so `mvn test-compile` has failed ever since. 
Opening this issue to track the removal of `-PerrorProne` from the build command 
(in Jenkins).
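For context, the invocation that fails would look something like this (illustrative):
{noformat}
mvn clean test-compile -PerrorProne
{noformat}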



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20621) Unclear error when deleting from a nonexistent table.

2018-05-22 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created HBASE-20621:
---

 Summary: Unclear error when deleting from a nonexistent table. 
 Key: HBASE-20621
 URL: https://issues.apache.org/jira/browse/HBASE-20621
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0
Reporter: Sergey Soldatov


When I try to delete a row from a nonexistent table, the error is quite 
confusing. Instead of getting a TableNotFoundException, I got:
{noformat}
ERROR [main] client.AsyncRequestFutureImpl: Cannot get replica 0 location for 
{"totalColumns":1,"row":"r1","families":{"c1":[{"qualifier":"","vlen":0,"tag":[],"timestamp":9223372036854775807}]},"ts":9223372036854775807}

ERROR: Failed 1 action: t1: 1 time, servers with issues: null
{noformat}
That happens because delete uses AsyncRequestFuture, which wraps all region 
location errors into the 'Cannot get replica' error. I expect that other actions 
like batch, mutateRow, and checkAndDelete behave the same way. 
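A minimal sketch reproducing the confusing path (table 't1', family 'c1', and row 'r1' follow the log above; conf is an assumed Configuration):
{code}
try (Connection conn = ConnectionFactory.createConnection(conf);
     Table table = conn.getTable(TableName.valueOf("t1"))) {
  Delete delete = new Delete(Bytes.toBytes("r1"));
  delete.addColumn(Bytes.toBytes("c1"), Bytes.toBytes(""));
  // Surfaces "Cannot get replica 0 location ..." instead of a
  // TableNotFoundException.
  table.delete(delete);
}
{code}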




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20486) Change default throughput controller to PressureAwareThroughputController in branch-1

2018-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-20486.

  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed to branch-1. Thanks for the patch [~xucang]

> Change default throughput controller to PressureAwareThroughputController in 
> branch-1
> -
>
> Key: HBASE-20486
> URL: https://issues.apache.org/jira/browse/HBASE-20486
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20486.branch-1.001.patch
>
>
> Switch the default throughput controller from NoLimitThroughputController to 
> PressureAwareThroughputController in branch-1.
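For reference, a hedged hbase-site.xml sketch of selecting a pressure-aware controller; the property key comes from the compaction throttle factory, and the fully-qualified class name varies by branch, so verify both against your version before relying on this:
{code}
<property>
  <name>hbase.regionserver.throughput.controller</name>
  <value>org.apache.hadoop.hbase.regionserver.compactions.PressureAwareCompactionThroughputController</value>
</property>
{code}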



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20620) HBASE-20564 Tighter ByteBufferKeyValue Cell Comparator part 2

2018-05-22 Thread stack (JIRA)
stack created HBASE-20620:
-

 Summary: HBASE-20564 Tighter ByteBufferKeyValue Cell Comparator 
part 2
 Key: HBASE-20620
 URL: https://issues.apache.org/jira/browse/HBASE-20620
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Affects Versions: 2.0.0
Reporter: stack
Assignee: stack
 Fix For: 2.0.1


This is a follow-on from HBASE-20564 "Tighter ByteBufferKeyValue Cell 
Comparator". In this issue, we make a stripped-down comparator that we deploy 
in one location only, as the memstore comparator. HBASE-20564 operated on 
CellComparator/Impl and got us 5-10k more throughput on top of a baseline of 40k 
or so. A purpose-built, stripped-down ByteBufferKeyValue comparator that fosters 
better inlining gets us from 45-50k up to about 75k (1.4 does 110-115k no-WAL 
PE writes). Data coming. Log of profiling kept here: 
https://docs.google.com/document/d/1vZ_k6_pNR1eQxID5u1xFihuPC7FkPaJQW8c4M5eA2AQ/edit#
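To illustrate the principle at work (an illustrative sketch only, not the HBASE-20620 patch): a comparison routine specialized to one concrete representation keeps its call sites monomorphic, which lets the JIT inline it, whereas dispatch through the general Cell interface across several implementations inlines poorly.

{code}
import java.util.Comparator;

// Illustrative only: a comparator locked to a single concrete representation
// (raw byte[] keys here) so the JIT sees one receiver type and can inline
// the comparison into the hot loop.
final class MonomorphicKeyComparator implements Comparator<byte[]> {
  @Override
  public int compare(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int diff = (a[i] & 0xff) - (b[i] & 0xff);
      if (diff != 0) {
        return diff;
      }
    }
    return a.length - b.length;
  }
}
{code}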



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20619) TestWeakObjectPool occasionally times out

2018-05-22 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-20619:
--

 Summary: TestWeakObjectPool occasionally times out
 Key: HBASE-20619
 URL: https://issues.apache.org/jira/browse/HBASE-20619
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 1.4.4, 1.5.0
Reporter: Andrew Purtell


TestWeakObjectPool occasionally times out. The failure is rare and the executor 
is an EC2 instance, so I think it's just a question of the timeout being too small.

[ERROR] testCongestion(org.apache.hadoop.hbase.util.TestWeakObjectPool)  Time 
elapsed: 1.049 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 1000 
milliseconds
at 
org.apache.hadoop.hbase.util.TestWeakObjectPool.testCongestion(TestWeakObjectPool.java:102)
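A hedged sketch of the kind of fix this suggests (not the committed patch): widen the JUnit timeout so a slow EC2 executor does not trip it.

{code}
// The new timeout value is an assumption; the failure above shows the
// current limit of 1000 ms.
@Test(timeout = 10000)
public void testCongestion() throws Exception {
  // unchanged test body exercising WeakObjectPool under contention
}
{code}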





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20618) Skip large rows instead of throwing an exception to client

2018-05-22 Thread Swapna (JIRA)
Swapna created HBASE-20618:
--

 Summary: Skip large rows instead of throwing an exception to client
 Key: HBASE-20618
 URL: https://issues.apache.org/jira/browse/HBASE-20618
 Project: HBase
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Swapna


Currently HBase throws RowTooBigException when a row's data in one column family 
exceeds the configured maximum:
https://issues.apache.org/jira/browse/HBASE-10925?attachmentOrder=desc
We have some bad rows that grow very large, and we need a way to skip these rows 
in most of our jobs.

Some of the options we considered:
Option 1:
The HBase client handles the exception and restarts the scanner past the bad row 
by capturing the row key where it failed. This could be done by adding the row 
key to the exception, which seems brittle. The client would have to ignore the 
setting if it is upgraded before the server.

Option 2:
Skip big rows on the server. Go with a server-level config similar to 
"hbase.table.max.rowsize", or make it request-based by changing the scan request 
API. If allowed per request, based on the scan request config, the client will 
have to ignore the setting if it is upgraded before the server.
{code}
try {
  populateResult(results, this.storeHeap, scannerContext, current);
} catch (RowTooBigException e) {
  // Log and skip the oversized row rather than failing the whole scan.
  LOG.info("Row exceeded the limit in storeheap. Skipping row with key: "
      + Bytes.toString(current.getRowArray(), current.getRowOffset(),
          current.getRowLength()));
  this.storeHeap.reseek(PrivateCellUtil.createLastOnRow(current));
  results.clear();
  scannerContext.clearProgress();
  continue;
}
{code}


We prefer option 2 with the server-level config (see the sketch below). Please share your inputs.
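For illustration, the server-level switch might look like this in hbase-site.xml; the property name below is hypothetical, not an existing setting:
{code}
<!-- Hypothetical property, shown only to make Option 2 concrete -->
<property>
  <name>hbase.table.max.rowsize.skip</name>
  <value>true</value>
</property>
{code}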



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20617) Upgrade/remove jetty-jsp

2018-05-22 Thread Sakthi (JIRA)
Sakthi created HBASE-20617:
--

 Summary: Upgrade/remove jetty-jsp
 Key: HBASE-20617
 URL: https://issues.apache.org/jira/browse/HBASE-20617
 Project: HBase
  Issue Type: Improvement
Reporter: Sakthi


jetty-jsp was removed after the jetty-9.2.x line, and we use the 9.2 version. 
Research so far suggests that apache-jsp in jetty-9.4.x might be of interest to 
us (JettyJspServlet.class lives in apache-jsp). Still to figure out the 
jetty-9.3.x situation.

Filing to track this along.
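If we do end up moving to 9.4.x, the swap might look like this in the pom (a hedged sketch; org.eclipse.jetty:apache-jsp is the published artifact, the version property is illustrative):
{code}
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>apache-jsp</artifactId>
  <!-- illustrative; would follow whatever jetty version we settle on -->
  <version>${jetty.version}</version>
</dependency>
{code}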



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20616) TruncateTableProcedure is stuck in retry loop in TRUNCATE_TABLE_CREATE_FS_LAYOUT state

2018-05-22 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created HBASE-20616:


 Summary: TruncateTableProcedure is stuck in retry loop in 
TRUNCATE_TABLE_CREATE_FS_LAYOUT state
 Key: HBASE-20616
 URL: https://issues.apache.org/jira/browse/HBASE-20616
 Project: HBase
  Issue Type: Bug
  Components: amv2
 Environment: HDP-2.5.3
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


At first, TruncateTableProcedure failed to write some files to HDFS in 
TRUNCATE_TABLE_CREATE_FS_LAYOUT state for some reason.
{code:java}
2018-05-15 08:00:25,346 WARN  [ProcedureExecutorThread-8] 
procedure.TruncateTableProcedure: Retriable error trying to truncate 
table=: state=TRUNCATE_TABLE_CREATE_FS_LAYOUT
java.io.IOException: java.util.concurrent.ExecutionException: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/apps/hbase/data/.tmp/data.regioninfo could only 
be replicated to 0 nodes instead of minReplication (=1).  There are  datanode(s) running and no node(s) are excluded in this operation.
...
{code}
Even at this point, it appears that writing some of the files to HDFS was successful.

TruncateTableProcedure then got stuck in a retry loop in the 
TRUNCATE_TABLE_CREATE_FS_LAYOUT state. At this point, the following log 
messages were shown repeatedly in the master log:
{code:java}
2018-05-15 08:00:25,463 WARN  [ProcedureExecutorThread-8] 
procedure.TruncateTableProcedure: Retriable error trying to truncate 
table=: state=TRUNCATE_TABLE_CREATE_FS_LAYOUT
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: The specified region already exists on disk: 
hdfs:///apps/hbase/data/.tmp/data///
...
{code}
It seems this is because TruncateTableProcedure tried to rewrite the files 
that had already been written successfully on the first attempt.

I think we need to delete all the files and directories that were written 
successfully in the previous attempt before retrying the 
TRUNCATE_TABLE_CREATE_FS_LAYOUT state, as sketched below.

Actually, this issue was observed in HDP-2.5.3, but I think the upstream has 
the same issue. Also, it looks to me that CreateTableProcedure has a similar 
issue.
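A hedged sketch of the proposed cleanup (not a committed patch; tempdir and the env accessors are assumptions about the procedure's surroundings):
{code:java}
// Before re-running TRUNCATE_TABLE_CREATE_FS_LAYOUT, remove any partial
// layout left by the previous attempt so the retry starts clean.
Path tmpTableDir = FSUtils.getTableDir(tempdir, getTableName());
FileSystem fs = tmpTableDir.getFileSystem(env.getMasterConfiguration());
if (fs.exists(tmpTableDir) && !fs.delete(tmpTableDir, true)) {
  throw new IOException("Couldn't delete partial table layout at " + tmpTableDir);
}
{code}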



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HBASE-6555) Avoid ssh to localhost in startup scripts

2018-05-22 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-6555:

  Assignee: Sean Busbey

I just ran into this again when trying to test HBASE-20334 locally. Going to 
give fixing it a shot.

> Avoid ssh to localhost in startup scripts
> -
>
> Key: HBASE-6555
> URL: https://issues.apache.org/jira/browse/HBASE-6555
> Project: HBase
>  Issue Type: Improvement
>  Components: scripts
> Environment: Mac OSX Mountain Lion, HBase 89-fb
>Reporter: Ramkumar Vadali
>Assignee: Sean Busbey
>Priority: Trivial
>
> The use of ssh in scripts like zookeepers.sh and regionservers.sh for a 
> single node setup is not necessary. We can execute the command directly.
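A minimal sketch of the idea (variable names are assumed placeholders, not the eventual fix): branch on the target host and execute locally instead of going through ssh.
{noformat}
# hostname/cmd are illustrative placeholders
if [ "$hostname" = "localhost" ] || [ "$hostname" = "$(hostname)" ]; then
  eval "$cmd"
else
  ssh "$hostname" "$cmd"
fi
{noformat}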



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20615) include shaded artifacts in binary install

2018-05-22 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20615:
---

 Summary: include shaded artifacts in binary install
 Key: HBASE-20615
 URL: https://issues.apache.org/jira/browse/HBASE-20615
 Project: HBase
  Issue Type: Sub-task
  Components: build, Client, Usability
Affects Versions: 2.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 3.0.0, 2.1.0


Working through setting up an IT for our shaded artifacts in HBASE-20334 makes 
our lack of packaging them seem like an oversight. While I could work around it 
by pulling the shaded clients out of whatever build process built the 
convenience binary that we're trying to test, it seems very awkward.

After reflecting on it more, it makes more sense to me for there to be a common 
place in the install that folks running jobs against the cluster can rely on. 
If they need to run without a full hbase install, that should still work fine 
via e.g. grabbing from the maven repo.
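For example (a hedged sketch, version illustrative), a job could depend on the shaded client from Maven instead of a full install:
{code}
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-client</artifactId>
  <version>2.1.0</version>
</dependency>
{code}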



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20614) REST scan API with incorrect filter text file throws HTTP 503 Service Unavailable error

2018-05-22 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20614:
--

 Summary: REST scan API with incorrect filter text file throws HTTP 
503 Service Unavailable error
 Key: HBASE-20614
 URL: https://issues.apache.org/jira/browse/HBASE-20614
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


The HBase REST server returns a {{503 Service Unavailable}} when generating a 
scanner object through the REST interface fails.

The error code returned by the HBase REST server is incorrect and may mislead 
the user.
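A hedged illustration of the failing call shape (host, table, and filter body are made up): a malformed filter in the scanner definition currently surfaces as a 503 rather than a 4xx client error.
{noformat}
curl -vi -X PUT -H "Content-Type: text/xml" \
  -d '<Scanner batch="10"><filter>not-a-valid-filter</filter></Scanner>' \
  "http://resthost:8080/t1/scanner"
{noformat}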



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20613) Limiting region count results in large size regions

2018-05-22 Thread Biju Nair (JIRA)
Biju Nair created HBASE-20613:
-

 Summary: Limiting region count results in large size regions
 Key: HBASE-20613
 URL: https://issues.apache.org/jira/browse/HBASE-20613
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Biju Nair


Restricting the total number of regions in a {{namespace}} results in large 
regions. Not sure whether this is by design, i.e., restricting the number of 
regions even if the region sizes grow large. Steps to reproduce:
 - Create a namespace with restriction on the number of regions

{noformat}
create_namespace 'ns2', {'hbase.namespace.quota.maxtables'=>'2', 'hbase.namespace.quota.maxregions'=>'5'}
{noformat}

 - Create a table and run {{PE}} to load data. The table can be created with a 
small hbase.hregion.max.filesize (see the illustration below).
 - After 5 regions, the region size grows beyond hbase.hregion.max.filesize. 
Since the region count restriction prevents splits, the regions keep growing.
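For illustration, the repro could look like this (syntax and values approximate):
{noformat}
# HBase shell: table under ns2 with a deliberately small max filesize
create 'ns2:t1', 'f', {MAX_FILESIZE => '134217728'}

# Command line: load data with PerformanceEvaluation
hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=ns2:t1 randomWrite 10
{noformat}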

Would it be better to gracefully error out when regions can't be split? 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: help using the Cell API

2018-05-22 Thread Sean Busbey
On Tue, May 22, 2018 at 9:13 AM, Chia-Ping Tsai  wrote:
>> I don't see why we'd treat them differently from any other members.
>
> The rule "If a class is not annotated with one of these, it is assumed to be 
> an InterfaceAudience.Private class." is in the hbase docs. Perhaps we should 
> revise this description?
>
>

Oh I see. Yeah, we should clarify that nested classes take on the IA of
their enclosing class.
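For instance (an illustrative sketch of the clarified rule, not the actual Cell source):

{code}
@InterfaceAudience.Public
public interface Cell {
  // Takes on IA.Public from the enclosing interface rather than falling
  // back to the "unannotated means IA.Private" default.
  enum Type { Put, Delete, DeleteFamilyVersion, DeleteColumn, DeleteFamily }
}
{code}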


Re: help using the Cell API

2018-05-22 Thread Chia-Ping Tsai
> I don't see why we'd treat them differently from any other members.

The rule "If a class is not annotated with one of these, it is assumed to be a 
InterfaceAudience.Private class." is in hbase docs. Perhaps we should revise 
this description?


On 2018/05/22 14:03:04, Sean Busbey  wrote: 
> On Tue, May 22, 2018 at 8:34 AM, Chia-Ping Tsai  wrote:
> >> Ah, so maybe this is just YETUS-628 "Class level InterfaceAudience is
> >> not applied to internal classes/enums" ?
> >
> > I didn't trace YETUS-628. But, yeah, the concept of YETUS-628 is the same to 
> > me. Nevertheless, it seems our docs haven't mentioned such a definition for 
> > nested classes. If we can reach consensus on the new rule, it is worth 
> > putting it in the hbase docs.
> >
> 
> 
> I don't see why we'd treat them differently from any other members.
> 


Re: help using the Cell API

2018-05-22 Thread Sean Busbey
On Tue, May 22, 2018 at 8:34 AM, Chia-Ping Tsai  wrote:
>> Ah, so maybe this is just YETUS-628 "Class level InterfaceAudience is
>> not applied to internal classes/enums" ?
>
> I didn't trace YETUS-628. But, yeah, the concept of YETUS-628 is the same to 
> me. Nevertheless, it seems our docs haven't mentioned such a definition for 
> nested classes. If we can reach consensus on the new rule, it is worth putting 
> it in the hbase docs.
>


I don't see why we'd treat them differently from any other members.


Re: help using the Cell API

2018-05-22 Thread Chia-Ping Tsai
> Ah, so maybe this is just YETUS-628 "Class level InterfaceAudience is
> not applied to internal classes/enums" ?

I didn't trace YETUS-628. But, yeah, the concept of YETUS-628 is the same to me. 
Nevertheless, it seems our docs haven't mentioned such a definition for nested 
classes. If we can reach consensus on the new rule, it is worth putting it in 
the hbase docs.

> is it worth reopening HBASE-18702 and making it a top level issue, or
> should I just open a new issue?

Open a new issue. Let the old issue go.

> How about a "getCellBuilder" or "getCellBuilderFactory" method for
> Mutation implementations that gives you a CellBuilder instance that
> already has relevant parts set? Like for a Put instance it should be
> able to already have the Type and Row set.

Good point! +1
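For discussion, a minimal sketch of what that could look like on Put (the method and its wiring are assumptions, not existing API):

{code}
// Hypothetical convenience method on Put
public CellBuilder getCellBuilder(CellBuilderType type) {
  // Pre-populate what the Put already knows, so callers only supply
  // family, qualifier, and value.
  return CellBuilderFactory.create(type)
      .setRow(this.row)
      .setType(Cell.Type.Put);
}
{code}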

> At a minimum the examples in the reference guide need to use
> "addColumn" method instead of referring to an "add" method that no
> longer exists. It would also be nice if there was at least one example
> that used the Cell API instead of a bunch of byte arrays.
> 
> Probably it'd be good to also update the javadocs for the methods that
> take a Cell to give a pointer to something about how to make Cells.

LGTM. +1

Thanks for making Cell APIs more graceful and useful!!!

--
chia-ping

On 2018/05/22 13:16:45, Sean Busbey  wrote: 
> On Tue, May 22, 2018 at 7:44 AM, Chia-Ping Tsai  wrote:
> >>the Cell.Type enum is IA.Private
> >
> > Cell.Type should be IA.Public. I planned to add IA.Public to Cell.Type 
> > in HBASE-20121, but I assumed the nested class would have the same IA as 
> > the root class... Perhaps we can file a jira to fix it.
> >
> 
> Ah, so maybe this is just YETUS-628 "Class level InterfaceAudience is
> not applied to internal classes/enums" ?
> 
> 
> >> Before I go to update the docs,
> >
> > There was a related issue (HBASE-18702), although I made it stale.
> >
> 
> is it worth reopening HBASE-18702 and making it a top level issue, or
> should I just open a new issue?
> 
> 
> >> If Cell is fine to use, could we update this to be less repetitive?
> >
> > The CellBuilder is designed to help folks use Put#add(Cell), 
> > Delete#add(Cell), Increment#add(Cell), and Append#add(Cell), since HBase 
> > doesn't supply an IA.Public implementation of Cell.
> >
> 
> How about a "getCellBuilder" or "getCellBuilderFactory" method for
> Mutation implementations that gives you a CellBuilder instance that
> already has relevant parts set? Like for a Put instance it should be
> able to already have the Type and Row set.
> 
> > And pardon me, what exactly do you plan to update?
> 
> At a minimum the examples in the reference guide need to use
> "addColumn" method instead of referring to an "add" method that no
> longer exists. It would also be nice if there was at least one example
> that used the Cell API instead of a bunch of byte arrays.
> 
> Probably it'd be good to also update the javadocs for the methods that
> take a Cell to give a pointer to something about how to make Cells.
> 


Re: help using the Cell API

2018-05-22 Thread Sean Busbey
On Tue, May 22, 2018 at 7:44 AM, Chia-Ping Tsai  wrote:
>>the Cell.Type enum is IA.Private
>
> Cell.Type should be IA.Public. I planned to add IA.Public to Cell.Type in 
> HBASE-20121, but I assumed the nested class would have the same IA as the 
> root class... Perhaps we can file a jira to fix it.
>

Ah, so maybe this is just YETUS-628 "Class level InterfaceAudience is
not applied to internal classes/enums" ?


>> Before I go to update the docs,
>
> There was a related issue (HBASE-18702), although I made it stale.
>

is it worth reopening HBASE-18702 and making it a top level issue, or
should I just open a new issue?


>> If Cell is fine to use, could we update this to be less repetitive?
>
> The CellBuilder is designed to help folks use Put#add(Cell), 
> Delete#add(Cell), Increment#add(Cell), and Append#add(Cell), since HBase 
> doesn't supply an IA.Public implementation of Cell.
>

How about a "getCellBuilder" or "getCellBuilderFactory" method for
Mutation implementations that gives you a CellBuilder instance that
already has relevant parts set? Like for a Put instance it should be
able to already have the Type and Row set.

> And pardon me, what exactly do you plan to update?

At a minimum the examples in the reference guide need to use
"addColumn" method instead of referring to an "add" method that no
longer exists. It would also be nice if there was at least one example
that used the Cell API instead of a bunch of byte arrays.

Probably it'd be good to also update the javadocs for the methods that
take a Cell to give a pointer to something about how to make Cells.


Re: help using the Cell API

2018-05-22 Thread Chia-Ping Tsai
>the Cell.Type enum is IA.Private

Cell.Type should be IA.Public. I planned to add IA.Public to Cell.Type in 
HBASE-20121, but I assumed the nested class would have the same IA as the root 
class... Perhaps we can file a jira to fix it.

> Before I go to update the docs,

There was a related issue (HBASE-18702), although I made it stale.

> If Cell is fine to use, could we update this to be less repetitive?

The CellBuilder is designed to help folks use Put#add(Cell), 
Delete#add(Cell), Increment#add(Cell), and Append#add(Cell), since HBase doesn't 
supply an IA.Public implementation of Cell. And pardon me, what exactly do you 
plan to update?
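For reference, here is the loop from your mail below with the missing piece added; per this thread, setting Cell.Type.Put is what makes it pass:

{code}
final CellBuilder builder =
    CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY);
for (int i = 0; i < SOME_NUMBER; i++) {
  builder.clear();
  final byte[] row = Bytes.toBytes(i);
  final Put put = new Put(row);
  builder.setRow(row);
  builder.setFamily(FAMILY_BYTES);
  // Without a type, build() throws the IllegalArgumentException you quoted.
  builder.setType(Cell.Type.Put);
  put.add(builder.build());
  table.put(put);
}
{code}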

--
Chia-Ping

On 2018/05/21 22:08:00, Sean Busbey  wrote: 
> Over in HBASE-20334 I'm trying to include a simple example program
> that does a series of Puts to a test table.
> 
> I figured I'd try to use the Cell API to do this, but I've gotten confused.
> 
> Let's say I just want to insert SOME_NUMBER rows, with as little
> specified as possible. Using just public API:
> 
> final CellBuilder builder =
> CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY);
> for (int i = 0; i < SOME_NUMBER; i++) {
>   builder.clear();
>   final byte[] row = Bytes.toBytes(i);
>   final Put put = new Put(row);
>   builder.setRow(row);
>   builder.setFamily(FAMILY_BYTES);
>   put.add(builder.build());
>   table.put(put);
> }
> 
> the above will fail with an IllegalArgumentException:
> 
> Exception in thread "main" java.lang.IllegalArgumentException: The
> type can't be NULL
> at 
> org.apache.hadoop.hbase.ExtendedCellBuilderImpl.checkBeforeBuild(ExtendedCellBuilderImpl.java:143)
> at 
> org.apache.hadoop.hbase.ExtendedCellBuilderImpl.build(ExtendedCellBuilderImpl.java:151)
> at 
> org.apache.hadoop.hbase.ExtendedCellBuilderImpl.build(ExtendedCellBuilderImpl.java:25)
> 
> Looking at the public API javadocs, I don't know what I'm supposed to
> put in the call for setType. The version that takes a byte gives no
> explanation, and the Cell.Type enum is IA.Private, so it's not in the
> javadocs.
> 
> I thought maybe the ref guide would have an example I could use, but
> all the Put examples there use the method that takes a bunch of byte
> arrays instead of a cell.[1]
> 
> If I update the example to use Cell.Type.Put then it works.
> 
> Before I go to update the docs, can someone give me some pointers? How
> am I supposed to get a Cell.Type through public APIs?
> 
> Should we have folks avoid using the Put#add(Cell) method if they
> don't already have a Cell instance?
> 
> If Cell is fine to use, could we update this to be less repetitive?
> 
> -busbey
> 
> [1]: Additionally, with the exception of the spark integration chapter
> the Put examples also appear to be wrong, since they refer to the
> method as "add(byte[],byte[],byte[])" when it's now
> "addColumn(byte[],byte[],byte[])".
> 


[jira] [Created] (HBASE-20612) TestReplicationKillSlaveRSWithSeparateOldWALs sometimes fails because it uses a cluster conn that has been closed.

2018-05-22 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-20612:


 Summary: TestReplicationKillSlaveRSWithSeparateOldWALs sometimes 
fails because it uses a cluster conn that has been closed.
 Key: HBASE-20612
 URL: https://issues.apache.org/jira/browse/HBASE-20612
 Project: HBase
  Issue Type: Bug
Reporter: Zheng Hu
Assignee: Zheng Hu
 Attachments: 
org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRSWithSeparateOldWALs-output.txt

{code}
2018-05-22 06:40:00,614 INFO  [Thread-961] regionserver.HRegionServer(2144): 
* STOPPING region server 'asf911.gq1.ygridcore.net,42867,1526971178277' 
*
2018-05-22 06:40:00,614 INFO  [Thread-961] regionserver.HRegionServer(2158): 
STOPPED: Stopping as part of the test
 
2018-05-22 06:41:01,018 DEBUG [Time-limited test] 
client.ResultBoundedCompletionService(226): Replica 0 returns 
java.net.SocketTimeoutException: callTimeout=6, callDuration=60515: Call to 
asf911.gq1.ygridcore.net/67.195.81.155:42867 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=34, waitTime=59332, 
rpcTimeout=59322 row 'eee' on table 'test' at 
region=test,eee,1526971188643.5aab2dd2e1d02b4e40be6d00422acd21., 
hostname=asf911.gq1.ygridcore.net,42867,1526971178277, seqNum=2
java.net.SocketTimeoutException: callTimeout=6, callDuration=60515: Call to 
asf911.gq1.ygridcore.net/67.195.81.155:42867 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=34, waitTime=59332, 
rpcTimeout=59322 row 'eee' on table 'test' at 
region=test,eee,1526971188643.5aab2dd2e1d02b4e40be6d00422acd21., 
hostname=asf911.gq1.ygridcore.net,42867,1526971178277, seqNum=2
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:159)
at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Call to 
asf911.gq1.ygridcore.net/67.195.81.155:42867 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=34, waitTime=59332, 
rpcTimeout=59322
at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:390)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:96)
at 
org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:199)
at 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:663)
at 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:738)
at 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:466)
... 1 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=34, 
waitTime=59332, rpcTimeout=59322
at 
org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200)
... 4 more
{code}
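A hedged sketch of the direction the summary suggests (not the committed fix; util is assumed to be the test's HBaseTestingUtility): take a fresh Connection for the verification read instead of reusing the cached cluster connection, which may already be closed once the region server is stopped.
{code}
try (Connection conn = ConnectionFactory.createConnection(util.getConfiguration());
     Table table = conn.getTable(TableName.valueOf("test"))) {
  // Read back the row the test wrote ('eee' per the log above) on a
  // connection whose lifecycle the test controls.
  Result result = table.get(new Get(Bytes.toBytes("eee")));
  assertFalse("row should have replicated", result.isEmpty());
}
{code}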



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)