Re: [VOTE] KIP-325: Extend Consumer Group Command to Show Beginning Offsets

2018-08-16 Thread Ted Yu
+1

On Thu, Aug 16, 2018 at 12:05 PM Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> I would like to start a vote on KIP-325 which aims at adding a beginning
> offset column to consumer group command describe output.
>
> The KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-325%3A+Extend+Consumer+Group+Command+to+Show+Beginning+Offsets
> Discussion thread:
> https://www.mail-archive.com/dev@kafka.apache.org/msg89203.html
>
> Thanks!
> --Vahid
>
>
>


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582932#comment-16582932
 ] 

Ted Yu commented on HBASE-19008:


Failure in TestZKSecretWatcher was not related to the patch.


> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-11.patch, HBASE-19008-12.patch, HBASE-19008-13.patch, 
> HBASE-19008-14.patch, HBASE-19008-2.patch, HBASE-19008-3.patch, 
> HBASE-19008-4.patch, HBASE-19008-5.patch, HBASE-19008-6.patch, 
> HBASE-19008-7.patch, HBASE-19008-8.patch, HBASE-19008-9.patch, 
> HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.
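For illustration, a minimal sketch of the kind of methods being requested, assuming a filter whose comparable state is a single boolean field named lenAsVal (the class and field here are only an example; each filter must compare its own state):

{code}
@Override
public boolean equals(Object obj) {
  if (obj == this) {
    return true;
  }
  if (!(obj instanceof KeyOnlyFilter)) {
    return false;
  }
  // Compare the assumed single piece of state; a real filter must compare
  // every field that influences its filtering behavior.
  KeyOnlyFilter other = (KeyOnlyFilter) obj;
  return this.lenAsVal == other.lenAsVal;
}

@Override
public int hashCode() {
  // Must stay consistent with equals: derive the hash from the same state.
  return Boolean.hashCode(this.lenAsVal);
}
{code}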



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582360#comment-16582360
 ] 

Ted Yu commented on HBASE-19008:


[~reidchan]:
Can you take another look ?

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-11.patch, HBASE-19008-12.patch, HBASE-19008-13.patch, 
> HBASE-19008-2.patch, HBASE-19008-3.patch, HBASE-19008-4.patch, 
> HBASE-19008-5.patch, HBASE-19008-6.patch, HBASE-19008-7.patch, 
> HBASE-19008-8.patch, HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-8037) Missing cast in integer arithmetic in TransactionalIdsGenerator#generateIdsToAbort

2018-08-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427664#comment-16427664
 ] 

Ted Yu edited comment on FLINK-8037 at 8/16/18 7:35 AM:


Please rebase PR.


was (Author: yuzhih...@gmail.com):
Please rebase PR .

> Missing cast in integer arithmetic in 
> TransactionalIdsGenerator#generateIdsToAbort
> --
>
> Key: FLINK-8037
> URL: https://issues.apache.org/jira/browse/FLINK-8037
> Project: Flink
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Greg Hogan
>Priority: Minor
>  Labels: kafka, kafka-connect
>
> {code}
>   public Set<Long> generateIdsToAbort() {
> Set<Long> idsToAbort = new HashSet<>();
> for (int i = 0; i < safeScaleDownFactor; i++) {
>   idsToAbort.addAll(generateIdsToUse(i * poolSize * 
> totalNumberOfSubtasks));
> {code}
> The operands are all ints while generateIdsToUse() expects a long parameter, 
> so the multiplication is performed in 32-bit arithmetic and can overflow 
> before it is widened.
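A sketch of the usual fix, widening one operand so the whole product is computed in 64-bit arithmetic (names taken from the snippet above, not from the actual patch):

{code}
// Before: i, poolSize and totalNumberOfSubtasks are all ints, so the product
// is computed in 32-bit arithmetic and may overflow before being widened.
idsToAbort.addAll(generateIdsToUse(i * poolSize * totalNumberOfSubtasks));

// After: the (long) cast on the first operand forces long multiplication.
idsToAbort.addAll(generateIdsToUse((long) i * poolSize * totalNumberOfSubtasks));
{code}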



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-8554) Upgrade AWS SDK

2018-08-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16507192#comment-16507192
 ] 

Ted Yu edited comment on FLINK-8554 at 8/16/18 7:35 AM:


Or use this JIRA for the next upgrade of AWS SDK.


was (Author: yuzhih...@gmail.com):
Or use this JIRA for the next upgrade of AWS SDK .

> Upgrade AWS SDK
> ---
>
> Key: FLINK-8554
> URL: https://issues.apache.org/jira/browse/FLINK-8554
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>    Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> AWS SDK 1.11.271 fixes a lot of bugs, one of which would exhibit the 
> following:
> {code}
> Caused by: java.lang.NullPointerException
>   at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
>   at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
>   at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-7642) Upgrade maven surefire plugin to 2.21.0

2018-08-15 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433258#comment-16433258
 ] 

Ted Yu edited comment on FLINK-7642 at 8/15/18 10:16 PM:
-

SUREFIRE-1439 is in 2.21.0 which is needed for compiling with Java 10.


was (Author: yuzhih...@gmail.com):
SUREFIRE-1439 is in 2.21.0 which is needed for compiling with Java 10 .

> Upgrade maven surefire plugin to 2.21.0
> ---
>
> Key: FLINK-7642
> URL: https://issues.apache.org/jira/browse/FLINK-7642
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>    Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>
> The Surefire 2.19 release introduced more useful test filters which let us 
> run a subset of the tests.
> This issue is for upgrading the maven surefire plugin to 2.21.0, which 
> contains SUREFIRE-1422.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21062) WALFactory has misleading notion of "default"

2018-08-15 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16581681#comment-16581681
 ] 

Ted Yu commented on HBASE-21062:


lgtm
{code}
+  // we can't us it, we want to fall back to FSHLog which we know works on 
all versions.
{code}
'us it' -> "use it"

> WALFactory has misleading notion of "default"
> -
>
> Key: HBASE-21062
> URL: https://issues.apache.org/jira/browse/HBASE-21062
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-21062.001.branch-2.0.patch, 
> HBASE-21062.002.branch-2.0.patch
>
>
> In WALFactory, there is an enum {{Providers}} which has a list of supported 
> WALProvider implementations. In addition to this list, there is also a 
> {{defaultProvider}} (which the Configuration defaults to) that is meant to 
> be our "advertised" default WALProvider.
> However, the implementation of {{getProviderClass}} in WALFactory doesn't 
> actually adhere to the value of this enum, instead *always* returning 
> AsyncFSWal if it can be loaded.
> Having the default value in the enum but then overriding it in the 
> implementation of {{getProviderClass}} is silly and misleading.
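As a sketch of the more direct resolution the report argues for (the configuration key and field names are assumed from the description, not taken from the eventual patch):

{code}
// Resolve the provider from the configured enum value, falling back to the
// advertised default, instead of unconditionally preferring AsyncFSWAL.
String name = conf.get("hbase.wal.provider", Providers.defaultProvider.name());
return Providers.valueOf(name).clazz;
{code}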



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21033) Separate the StoreHeap from StoreFileHeap

2018-08-15 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16581661#comment-16581661
 ] 

Ted Yu commented on HBASE-21033:


I looked at 21033-branch-1-minimal.txt and think it makes sense.

Probably a master patch should be attached (and get a QA run).

> Separate the StoreHeap from StoreFileHeap
> -
>
> Key: HBASE-21033
> URL: https://issues.apache.org/jira/browse/HBASE-21033
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Attachments: 21033-branch-1-minimal.txt, 21033-branch-1-v0.txt
>
>
> Currently KeyValueHeap is used for both heaps of StoreScanners at the Region 
> level and heaps of StoreFileScanners (and a MemstoreScanner) at the 
> Store level.
> This causes various problems:
>  # Some incorrect method usage can only be detected at runtime via a runtime 
> exception.
>  # In profiling sessions it's hard to distinguish the two.
>  # It's just not clean :)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-15 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16581561#comment-16581561
 ] 

Ted Yu commented on HBASE-19008:


Please refer to 
https://builds.apache.org/job/PreCommit-HBASE-Build/14049/artifact/patchprocess/diff-checkstyle-hbase-spark.txt
 and fix the warnings there.

Thanks for your patience, Liubang.

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-11.patch, HBASE-19008-12.patch, HBASE-19008-2.patch, 
> HBASE-19008-3.patch, HBASE-19008-4.patch, HBASE-19008-5.patch, 
> HBASE-19008-6.patch, HBASE-19008-7.patch, HBASE-19008-8.patch, 
> HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20676) Give .hbase-snapshot proper ownership upon directory creation

2018-08-15 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20676:
---
Labels: security snapshot  (was: snapshot)

> Give .hbase-snapshot proper ownership upon directory creation
> -
>
> Key: HBASE-20676
> URL: https://issues.apache.org/jira/browse/HBASE-20676
> Project: HBase
>  Issue Type: Task
>    Reporter: Ted Yu
>Priority: Major
>  Labels: security, snapshot
>
> This is a continuation of the discussion over HBASE-20668.
> The .hbase-snapshot directory is not created at cluster startup. Normally it 
> is created when a snapshot operation is initiated.
> However, if before any snapshot operation is performed, some non-super user 
> from another cluster conducts ExportSnapshot to this cluster, the 
> .hbase-snapshot directory would be created as that user.
> (This is just one scenario that can lead to wrong ownership)
> This JIRA is to seek proper way(s) to ensure that the .hbase-snapshot directory 
> always carries proper ownership and permissions upon creation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: difference - filterlist with rowfilters vs multiget

2018-08-15 Thread Ted Yu
For the first code snippet, it seems a start row / stop row can be specified
to narrow the range of the scan.

For your last question:

bq. all the filters in the filterList are RowFilters ...

The filter list currently does not perform such an optimization, even if all
filters in the list are RowFilters.

Cheers
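For reference, a minimal sketch of narrowing the range on the first snippet
(withStartRow/withStopRow are the HBase 2.x names, older clients use
setStartRow/setStopRow; the row keys are made up, and the same imports as the
snippets below are assumed):

  // Restrict the scan to the key range covering the requested rows, so the
  // RowFilter list only has to evaluate rows inside that range.
  Scan scan = new Scan();
  scan.withStartRow(Bytes.toBytes("row-0001"));      // inclusive start
  scan.withStopRow(Bytes.toBytes("row-0999"), true); // inclusive stop
  scan.setFilter(filterList);
  ResultScanner resultScanner = table.getScanner(scan);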

On Wed, Aug 15, 2018 at 2:07 AM Biplob Biswas 
wrote:

> Hi,
>
> During our implementation for fetching multiple records from an HBase
> table, we came across a discussion regarding the best way to get records
> out.
>
> The first implementation is something like:
>
>   FilterList filterList = new FilterList(Operator.MUST_PASS_ONE);
> >   for (String rowKey : rowKeys) {
> > filterList.addFilter(new RowFilter(CompareOp.EQUAL, new
> > BinaryComparator(Bytes.toBytes(rowKey))));
> >   }
> >
> >   Scan scan = new Scan();
> >   scan.setFilter(filterList);
> >   ResultScanner resultScanner = table.getScanner(scan);
>
>
> and the second implementation is something like this:
>
>  List<Get> listGet = rowKeys.stream()
> >   .map(entry -> {
> > Get get = new Get(Bytes.toBytes(entry));
> > return get;
> >   })
> >   .collect(Collectors.toList());
> >   Result[] results = table.get(listGet);
>
>
> The only difference I see directly is that the filterList would do a full table
> scan whereas the multiget wouldn't.
>
> But what other benefits one has over the other? Also, when HBase finds out
> that all the filters in the filterList are RowFilters, would it perform
> some kind of optimization and perform multiget rather than doing a full
> table scan?
>
>
> Thanks & Regards
> Biplob Biswas
>


[jira] [Updated] (HBASE-20892) [UI] Start / End keys are empty on table.jsp

2018-08-15 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20892:
---
Labels: web-ui  (was: )

> [UI] Start / End keys are empty on table.jsp
> 
>
> Key: HBASE-20892
> URL: https://issues.apache.org/jira/browse/HBASE-20892
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.1
>    Reporter: Ted Yu
>Priority: Major
>  Labels: web-ui
>
> When viewing table.jsp?name=TestTable, I found that the Start / End keys for 
> all the regions were simply dashes without real values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21040) Replace call to printStackTrace() with proper logger call

2018-08-15 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21040:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Subrat.

I modified the subject of the patch to match the JIRA title.

> Replace call to printStackTrace() with proper logger call
> -
>
> Key: HBASE-21040
> URL: https://issues.apache.org/jira/browse/HBASE-21040
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Subrat Mishra
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21040.master.001.patch, 
> HBASE-21040.master.002.patch
>
>
> Here is related code:
> {code}
> hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/RestoreDriver.java: 
>  e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientAsyncPrefetchScanner.java:
>   first.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java:
> t.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java:   
>e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java:   
>e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java:   
>e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java:   
>e.printStackTrace();
> hbase-examples/src/main/java/org/apache/hadoop/hbase/mapreduce/SampleUploader.java:
> e.printStackTrace();
> hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/DemoClient.java:  
>   e.printStackTrace();
> hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java:
>   e.printStackTrace();
> hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
>   ioe.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
> ie.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCounter.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:  
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:  
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:  
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterMapper.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterTextMapper.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
>   e.printStackTrace();
> hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALPrettyPrinter.java:
>   e.printStackTrace();
> hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java:
> e.printStackTrace();
> hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java:
> e.printStackTrace();
> hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java:
>   e.printStackTrace();
> hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: 
>  e.printStackTrace();
> hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java:  
>   ex.printStackTrace();
> hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java:
>   e.printStackTrace();
> {code}
> The correct way of logging a stack trace is to use the Logger instance.
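As a sketch of the replacement pattern (the class and message are illustrative; passing the exception as the last argument is what preserves the stack trace in slf4j):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Example {
  private static final Logger LOG = LoggerFactory.getLogger(Example.class);

  void run() {
    try {
      Files.readAllBytes(Paths.get("/some/path")); // any work that may throw
    } catch (IOException e) {
      // Instead of e.printStackTrace(): the stack trace is still recorded,
      // but now honors the configured appenders and log levels.
      LOG.error("Operation failed", e);
    }
  }
}
{code}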



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-14 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580243#comment-16580243
 ] 

Ted Yu commented on HBASE-19008:


Please address checkstyle warnings in hbase-spark module:
{code}
./hbase-spark/src/main/java/org/apache/hadoop/hbase/spark/SparkSQLPushDownFilter.java:31:
import java.util.*;: Using the '.*' form of import should be avoided - java.util.*. [AvoidStarImport]
{code}
There are some long lines to wrap.
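Concretely, the flagged star import would be replaced with explicit imports of just the classes the file uses (the exact list depends on SparkSQLPushDownFilter; the ones below are examples):

{code}
// before (flagged by checkstyle's AvoidStarImport rule):
import java.util.*;

// after (import only what is actually used, e.g.):
import java.util.HashMap;
import java.util.List;
import java.util.Map;
{code}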

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-11.patch, HBASE-19008-2.patch, HBASE-19008-3.patch, 
> HBASE-19008-4.patch, HBASE-19008-5.patch, HBASE-19008-6.patch, 
> HBASE-19008-7.patch, HBASE-19008-8.patch, HBASE-19008-9.patch, 
> HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-353: Allow Users to Configure Multi-Streams Timestamp Synchronization Behavior

2018-08-14 Thread Ted Yu
+1
 Original message From: Bill Bejeck  Date: 
8/14/18  11:09 AM  (GMT-08:00) To: dev@kafka.apache.org Subject: Re: [VOTE] 
KIP-353: Allow Users to Configure Multi-Streams Timestamp Synchronization 
Behavior 
+1

Thanks,
Bill

On Thu, Aug 9, 2018 at 4:20 PM John Roesler  wrote:

> +1 non-binding
>
> On Thu, Aug 9, 2018 at 3:14 PM Matthias J. Sax 
> wrote:
>
> > +1 (binding)
> >
> > On 8/9/18 11:57 AM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > I would like to start the voting processing on the following KIP, to
> > allow
> > > users control when a task can be processed based on its buffered
> records,
> > > and how the stream time of a task be advanced.
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 353%3A+Improve+Kafka+Streams+Timestamp+Synchronization
> > >
> > >
> > >
> > > Thanks,
> > > -- Guozhang
> > >
> >
> >
>


[jira] [Commented] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-14 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580173#comment-16580173
 ] 

Ted Yu commented on HBASE-21042:


[~tdsilva]:
Is the patch good by you ?

[~chia7712]:
Please also take a look.

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed the finally block of HRegion#processRowsWithLocks.
> This is to fix that remaining call.
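The shape of the fix, sketched from the description above (the attached patch is authoritative; getRowsToLock() is assumed to return a possibly-empty Collection<byte[]>):

{code}
// Guard the access in the finally block: getRowsToLock() may be empty, and
// iterator().next() on an empty collection throws NoSuchElementException.
Collection<byte[]> rowsToLock = processor.getRowsToLock();
if (!rowsToLock.isEmpty()) {
  byte[] row = rowsToLock.iterator().next();
  // ... existing handling that assumed at least one locked row ...
}
{code}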



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-356: Add withCachingDisabled() to StoreBuilder

2018-08-14 Thread Ted Yu
+1

On Tue, Aug 14, 2018 at 10:42 AM Guozhang Wang  wrote:

> Hello folks,
>
> I'd like to start a voting thread on the following KIP:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-356%3A+Add+withCachingDisabled%28%29+to+StoreBuilder
>
> It is a pretty straightforward one, adding a missing API to StoreBuilder
> which should actually be added at the very beginning but somehow was lost.
> Hence I skipped the DISCUSS process of it. But if you have any feedbacks
> please feel free to share as well.
>
>
>
> -- Guozhang
>


[jira] [Commented] (HBASE-21040) Replace call to printStackTrace() with proper logger call

2018-08-14 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579873#comment-16579873
 ] 

Ted Yu commented on HBASE-21040:


For ClientAsyncPrefetchScanner.java and RpcRetryingCallerWithReadReplicas.java, 
the logging can be removed since the exception is thrown back.

When InterruptedException is caught, please call 
Thread.currentThread().interrupt() to restore interrupt status.

Please also address the new checkstyle warnings.
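A sketch of the InterruptedException handling being asked for (the blocking call and LOG are illustrative, not from the patch):

{code}
try {
  latch.await(); // any interruptible blocking call
} catch (InterruptedException ie) {
  // Restore the interrupt status so callers further up the stack can see it,
  // and log through the Logger instead of calling ie.printStackTrace().
  Thread.currentThread().interrupt();
  LOG.warn("Interrupted while waiting", ie);
}
{code}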

> Replace call to printStackTrace() with proper logger call
> -
>
> Key: HBASE-21040
> URL: https://issues.apache.org/jira/browse/HBASE-21040
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Subrat Mishra
>Priority: Minor
> Attachments: HBASE-21040.master.001.patch
>
>
> Here is related code:
> {code}
> hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/RestoreDriver.java: 
>  e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientAsyncPrefetchScanner.java:
>   first.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java:
> t.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java:   
>e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java:   
>e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java:   
>e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java:   
>e.printStackTrace();
> hbase-examples/src/main/java/org/apache/hadoop/hbase/mapreduce/SampleUploader.java:
> e.printStackTrace();
> hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/DemoClient.java:  
>   e.printStackTrace();
> hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java:
>   e.printStackTrace();
> hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
>   ioe.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
> ie.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCounter.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:  
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:  
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:  
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterMapper.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterTextMapper.java:
>   e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java:
> e.printStackTrace();
> hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
>   e.printStackTrace();
> hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALPrettyPrinter.java:
>   e.printStackTrace();
> hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java:
> e.printStackTrace();
> hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java:
> e.printStackTrace();
> hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java:
>   e.printStackTrace();
> hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: 
>  e.printStackTrace();
> hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java:  
>   ex.printStackTrace();
> hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java:
>   e.printStackTrace();
> {code}
> The correct way of logging a stack trace is to use the Logger instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: HBase 2.1.0 - RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure

2018-08-14 Thread Ted Yu
jcl:
I can see that DEBUG log wasn't turned on.

Can you set log4j to DEBUG level and see if there is more information ?

Cheers

On Tue, Aug 14, 2018 at 6:56 AM Allan Yang  wrote:

> Those logs are not enough to locate the problem.
> Best Regards
> Allan Yang
>
>
> jcl <515951...@163.com> wrote on Tuesday, August 14, 2018 at 9:18 PM:
>
> > After a power-off and restart (Hadoop and HBase), the Master is stuck
> > initializing; HBase ServerManager reports crash processing already in progress.
> >
> > The jps command shows that HMaster and HRegionServer are live.
> >
> >
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-115,16020,1534248994825 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-116,16020,1534248590107 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-115,16020,1534249077856 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-116,16020,1534248994045 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-115,16020,1534248708149 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-116,16020,1534248707381 but crash processing already in progress
> >
> >
> >
> > LOG:
> >
> > core file size (blocks, -c) 0
> > data seg size (kbytes, -d) unlimited
> > scheduling priority (-e) 0
> > file size (blocks, -f) unlimited
> > pending signals (-i) 64091
> > max locked memory (kbytes, -l) 64
> > max memory size (kbytes, -m) unlimited
> > open files (-n) 1024
> > pipe size (512 bytes, -p) 8
> > POSIX message queues (bytes, -q) 819200
> > real-time priority (-r) 0
> > stack size (kbytes, -s) 8192
> > cpu time (seconds, -t) unlimited
> > max user processes (-u) 64091
> > virtual memory (kbytes, -v) unlimited
> > file locks (-x) unlimited
> > 2018-08-14 17:25:00,173 INFO [main] master.HMaster: STARTING service
> > HMaster
> > 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: HBase 2.1.0
> > 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: Source code
> > repository revision=4531d1c947a25b28a9a994b60c791a112c12a2b4
> > 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: Compiled by hbase
> on
> > Wed Aug 1 11:25:59 2018
> > 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: From source with
> > checksum fc32566f7e030ff71458fbf6dc77bce9
> > 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine:
> hbase.tmp.dir:
> > /tmp/hbase-root
> > 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine:
> hbase.rootdir:
> > hdfs://192.168.101.114:9000/hbase
> > 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
> > 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine:
> > hbase.zookeeper.quorum: 192.168.101.114:2181
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> >
> env:PATH=/opt/apache-phoenix-5.0.0-HBase-2.0-bin/bin:/opt/hbase-2.1.0/bin:/opt/hadoop-2.8.4/bin:/opt/jdk1.8.0_172/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HADOOP_CONF_DIR=/opt/hadoop-2.8.4/etc/hadoop
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HISTCONTROL=ignoredups
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> >
> env:JAVA_LIBRARY_PATH=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HBASE_REGIONSERVER_OPTS= -Xdebug -Xnoagent -Djava.compiler=NONE
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HBASE_CONF_DIR=/opt/hbase-2.1.0/conf
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HDFS_DATANODE_SECURE_USER=root
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:MAIL=/var/spool/mail/root
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:PHOENIX_HOME=/opt/apache-phoenix-5.0.0-HBase-2.0-bin
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> >
> env:LD_LIBRARY_PATH=:/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:LOGNAME=root
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_REST_OPTS=
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:PWD=/opt/hbase-2.1.0/bin
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HADOOP_PREFIX=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HADOOP_INSTALL=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_ROOT_LOGGER=INFO,RFA
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > 

[jira] [Commented] (HBASE-21041) Memstore's heap size will be decreased to minus zero after flush

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579247#comment-16579247
 ] 

Ted Yu commented on HBASE-21041:


Looks good overall.
{code}
+// Record the MutableSegment' heap overhead when initialing
{code}
'initialing' -> 'initializing' (typo)



> Memstore's heap size will be decreased to minus zero after flush
> 
>
> Key: HBASE-21041
> URL: https://issues.apache.org/jira/browse/HBASE-21041
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21041.branch-2.0.001.patch, 
> HBASE-21041.branch-2.0.002.patch
>
>
> When creating an active mutable segment (MutableSegment) in memstore, the 
> MutableSegment's deep overhead (208 bytes) was added to its own heap size, but 
> not to the region's memstore's heap size. The same applies to the immutable 
> segment (CSLMImmutableSegment) which the mutable segment is turned into later 
> (an additional 8 bytes). So after one flush, the memstore's heap size will 
> be decreased to -216 bytes, and the negative number will accumulate with every 
> flush. 
> CompactingMemstore has this problem too.
> We need to record the overhead for CSLMImmutableSegment and MutableSegment in 
> the corresponding region's memstore size.
> For CellArrayImmutableSegment, CellChunkImmutableSegment and 
> CompositeImmutableSegment, it is not necessary to do so, because inside 
> CompactingMemstore the overheads are already taken care of when transferring 
> a CSLMImmutableSegment into them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579191#comment-16579191
 ] 

Ted Yu commented on HBASE-19008:


[~liuml07]:
Can you see if your comments have been addressed ?

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-2.patch, HBASE-19008-3.patch, HBASE-19008-4.patch, 
> HBASE-19008-5.patch, HBASE-19008-6.patch, HBASE-19008-7.patch, 
> HBASE-19008-8.patch, HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579178#comment-16579178
 ] 

Ted Yu commented on HBASE-19008:


[~reidchan]:
Can you see if your comments have been addressed ?

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-2.patch, HBASE-19008-3.patch, HBASE-19008-4.patch, 
> HBASE-19008-5.patch, HBASE-19008-6.patch, HBASE-19008-7.patch, 
> HBASE-19008-8.patch, HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20968) list_procedures_test fails due to no matching regex

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579173#comment-16579173
 ] 

Ted Yu commented on HBASE-20968:


Verified locally that the fix allows list_procedures_test to pass.

> list_procedures_test fails due to no matching regex
> ---
>
> Key: HBASE-20968
> URL: https://issues.apache.org/jira/browse/HBASE-20968
> Project: HBase
>  Issue Type: Test
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Attachments: 20968.v2.txt, 
> org.apache.hadoop.hbase.client.TestShell-output.txt
>
>
> From test output against hadoop3:
> {code}
> 2018-07-28 12:04:24,838 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(948): Stored pid=12, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.  
> ShellTestProcedure
> 2018-07-28 12:04:24,864 INFO  [RS-EventLoopGroup-1-3] 
> ipc.ServerRpcConnection(556): Connection from 172.18.128.12:46918, 
> version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth: SIMPLE), 
> service=MasterService
> 2018-07-28 12:04:24,900 DEBUG [Thread-114] master.MasterRpcServices(1157): 
> Checking to see if procedure is done pid=11
> ^[[38;5;196mF^[[0m
> ===
> Failure: 
> ^[[48;5;124;38;5;231;1mtest_list_procedures(Hbase::ListProceduresTest)^[[0m
> src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
> test_list_procedures'
>  62: end
>  63:   end
>  64:
> ^[[48;5;124;38;5;231;1m  => 65:   assert_equal(1, matching_lines)^[[0m
>  66: end
>  67:   end
>  68: end
> <^[[48;5;34;38;5;231;1m1^[[0m> expected but was
> <^[[48;5;124;38;5;231;1m0^[[0m>
> ===
> ...
> 2018-07-28 12:04:25,374 INFO  [PEWorker-9] 
> procedure2.ProcedureExecutor(1316): Finished pid=12, state=SUCCESS, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.   
> ShellTestProcedure in 336msec
> {code}
> The completion of the ShellTestProcedure was after the assertion was raised.
> {code}
> def create_procedure_regexp(table_name)
>   regexp_string = '[0-9]+ .*ShellTestProcedure SUCCESS.*' \
> {code}
> The regex used by the test isn't found in test output either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Attachment: 21042.branch-1.txt

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed the finally block of HRegion#processRowsWithLocks.
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Attachment: (was: 21042.branch-1.txt)

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed the finally block of HRegion#processRowsWithLocks.
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Attachment: 21042.branch-1.txt

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed the finally block of HRegion#processRowsWithLocks.
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Fix Version/s: 1.4.7

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed the finally block of HRegion#processRowsWithLocks.
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Status: Patch Available  (was: Open)

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed the finally block of HRegion#processRowsWithLocks.
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579123#comment-16579123
 ] 

Ted Yu commented on HBASE-18998:


Logged HBASE-21042 with patch coming soon.

> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.0, 1.3.2, 1.2.7, 1.1.13, 2.0.0-alpha-4, 2.0.0
>
> Attachments: 18998.v1.txt
>
>
> During testing, we observed the following exception:
> {code}
> 2017-10-12 02:52:26,683|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|1/1  DROP TABLE 
> testTable;
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|17/10/12 02:52:30 WARN 
> ipc.CoprocessorRpcChannel: Call failed on IOException
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|org.apache.hadoop.hbase.DoNotRetryIOException:
>  org.apache.hadoop.hbase.DoNotRetryIOException: TESTTABLE: null
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:93)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1671)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14347)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7849)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|Caused by: 
> java.util.NoSuchElementException
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> java.util.Collections$EmptyIterator.next(Collections.java:4189)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:7137)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:6980)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateRowsWithLocks(MetaDataEndpointImpl.java:1966)
> 2017-10-12 02:52:30,323|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEn

[jira] [Created] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)
Ted Yu created HBASE-21042:
--

 Summary: processor.getRowsToLock() always assumes there is some 
row being locked in HRegion#processRowsWithLocks
 Key: HBASE-21042
 URL: https://issues.apache.org/jira/browse/HBASE-21042
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


[~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
missed the finally block of HRegion#processRowsWithLocks.

This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3095) Use ArrayDeque instead of LinkedList for queue implementation

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KYLIN-3095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KYLIN-3095:
--
Description: 
Use ArrayDeque instead of LinkedList for queue implementation where thread 
safety is not needed.

From https://docs.oracle.com/javase/7/docs/api/java/util/ArrayDeque.html

{quote}
Resizable-array implementation of the Deque interface. Array deques have no 
capacity restrictions; they grow as necessary to support usage. They are not 
thread-safe; in the absence of external synchronization, they do not support 
concurrent access by multiple threads. Null elements are prohibited. This class 
is likely to be faster than Stack when used as a stack, and *faster than 
LinkedList when used as a queue.*
{quote}

  was:
Use ArrayDeque instead of LinkedList for queue implementation where thread 
safety is not needed.

From https://docs.oracle.com/javase/7/docs/api/java/util/ArrayDeque.html
{quote}
Resizable-array implementation of the Deque interface. Array deques have no 
capacity restrictions; they grow as necessary to support usage. They are not 
thread-safe; in the absence of external synchronization, they do not support 
concurrent access by multiple threads. Null elements are prohibited. This class 
is likely to be faster than Stack when used as a stack, and *faster than 
LinkedList when used as a queue.*
{quote}


> Use ArrayDeque instead of LinkedList for queue implementation
> -
>
> Key: KYLIN-3095
> URL: https://issues.apache.org/jira/browse/KYLIN-3095
> Project: Kylin
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee:  Kaige Liu
>Priority: Minor
>  Labels: parallel
> Fix For: v2.5.0
>
>
> Use ArrayDeque instead of LinkedList for queue implementation where thread 
> safety is not needed.
> From https://docs.oracle.com/javase/7/docs/api/java/util/ArrayDeque.html
> {quote}
> Resizable-array implementation of the Deque interface. Array deques have no 
> capacity restrictions; they grow as necessary to support usage. They are not 
> thread-safe; in the absence of external synchronization, they do not support 
> concurrent access by multiple threads. Null elements are prohibited. This 
> class is likely to be faster than Stack when used as a stack, and *faster 
> than LinkedList when used as a queue.*
> {quote}
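A minimal sketch of the swap for single-threaded queue usage:

{code}
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueExample {
  public static void main(String[] args) {
    // Before: Queue<String> queue = new LinkedList<>();
    // After: same Queue interface, but backed by a resizable array, which
    // avoids per-element node allocations and is typically faster.
    Queue<String> queue = new ArrayDeque<>();
    queue.offer("a");                  // enqueue at the tail
    queue.offer("b");
    System.out.println(queue.poll());  // dequeue from the head -> prints "a"
  }
}
{code}

Note that ArrayDeque prohibits null elements (per the javadoc quoted above), so the swap is only safe where nulls are never enqueued.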



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3046) Consider introducing log4j-extras

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KYLIN-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KYLIN-3046:
--
Description: 
log4j-extras allows log rotation as well as compression.


https://logging.apache.org/log4j/extras/download.html

We should consider using log4j-extras.

  was:
log4j-extras allows log rotation as well as compression.

https://logging.apache.org/log4j/extras/download.html

We should consider using log4j-extras.


> Consider introducing log4j-extras 
> --
>
> Key: KYLIN-3046
> URL: https://issues.apache.org/jira/browse/KYLIN-3046
> Project: Kylin
>  Issue Type: Improvement
>    Reporter: Ted Yu
>Priority: Major
>  Labels: log
> Fix For: v2.5.0
>
>
> log4j-extras allows log rotation as well as compression.
> https://logging.apache.org/log4j/extras/download.html
> We should consider using log4j-extras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: HDP3.0: failed to map segment from shared object: Operation not permitted

2018-08-13 Thread Ted Yu
Since you are using a vendor's distro, can you post on their user group ?
Cheers
 Original message From: Lian Jiang  
Date: 8/13/18  4:03 PM  (GMT-08:00) To: user@hbase.apache.org Subject: HDP3.0: 
failed to map segment from shared object: Operation not permitted 
Hi,

I installed hadoop cluster using HDP3.0 but hbase does not work due to this
error:

Caused by: java.lang.UnsatisfiedLinkError: failed to load the required
native library

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.Epoll.ensureAvailability(Epoll.java:81)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.<init>(EpollEventLoop.java:55)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:134)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:35)

    at
org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)

    at
org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)

    at
org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47)

    at
org.apache.hbase.thirdparty.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:104)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:91)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:68)

    at
org.apache.hadoop.hbase.util.NettyEventLoopGroupConfig.<init>(NettyEventLoopGroupConfig.java:61)

    at
org.apache.hadoop.hbase.regionserver.HRegionServer.setupNetty(HRegionServer.java:673)

    at
org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:532)

    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:472)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)

    at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

    at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)

    at
org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2923)

    ... 5 more

Caused by: java.lang.UnsatisfiedLinkError:
/tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_644257058602762792223.so:
/tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_644257058602762792223.so:
failed to map segment from shared object: Operation not permitted

    at java.lang.ClassLoader$NativeLibrary.load(Native Method)

    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)

    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)

    at java.lang.Runtime.load0(Runtime.java:809)

    at java.lang.System.load(System.java:1086)

    at
org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:36)

    at
org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:243)

    at
org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:187)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:207)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.<clinit>(Native.java:65)

    at
org.apache.hbase.thirdparty.io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:33)

    ... 24 more


Looks like it is caused by the noexec mount option on /tmp. To make hbase not
use /tmp, I added

{
  "hbase-site": {
    "properties": {
  "hbase.tmp.dir": "/u01/tmp"
    }
  }
    },

But this does not work and hbase still uses /tmp. Any idea? Appreciate any
help.
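One avenue to check: hbase.tmp.dir does not control where the shaded netty
extracts its native library; netty reads its own workdir system property for
that. Assuming the shaded netty honors the relocated property name (the
unshaded one is io.netty.native.workdir, and the package names in the stack
trace suggest the relocated form below), pointing that workdir at a directory
that is not mounted noexec may help, e.g. in hbase-env.sh:

export HBASE_OPTS="$HBASE_OPTS -Dorg.apache.hbase.thirdparty.io.netty.native.workdir=/u01/tmp"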


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578912#comment-16578912
 ] 

Ted Yu commented on HBASE-19008:


Please fix 8 checkstyle warnings in hbase-server module, 3 in hbase-spark 
module and 1 javadoc warning in hbase-client module.

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-2.patch, 
> HBASE-19008-3.patch, HBASE-19008-4.patch, HBASE-19008-5.patch, 
> HBASE-19008-6.patch, HBASE-19008-7.patch, HBASE-19008-8.patch, 
> HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.
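For anyone picking this up, a minimal sketch of the kind of methods needed,
taking KeyOnlyFilter as the example and assuming its only state is the
lenAsVal flag:
{code}
@Override
public boolean equals(Object obj) {
  if (obj == this) {
    return true;
  }
  if (!(obj instanceof KeyOnlyFilter)) {
    return false;
  }
  // lenAsVal is the only piece of state this filter carries
  KeyOnlyFilter other = (KeyOnlyFilter) obj;
  return this.lenAsVal == other.lenAsVal;
}

@Override
public int hashCode() {
  return Boolean.hashCode(this.lenAsVal);
}
{code}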



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20257:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Artem.

Thanks for the review, Sean.

> hbase-spark should not depend on com.google.code.findbugs.jsr305
> 
>
> Key: HBASE-20257
> URL: https://issues.apache.org/jira/browse/HBASE-20257
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Artem Ervits
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, 
> HBASE-20257.v03.patch, HBASE-20257.v04.patch, HBASE-20257.v05.patch, 
> HBASE-20257.v06.patch
>
>
> The following can be observed in the build output of master branch:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
> Here is related snippet from hbase-spark/pom.xml:
> {code}
> <dependency>
>   <groupId>com.google.code.findbugs</groupId>
>   <artifactId>jsr305</artifactId>
> </dependency>
> {code}
> Dependency on jsr305 should be dropped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21040) Replace call to printStackTrace() with proper logger call

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21040:
---
Description: 
Here is related code:
{code}
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/RestoreDriver.java:   
   e.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientAsyncPrefetchScanner.java:
  first.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java:
t.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java: 
 e.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java: 
 e.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java: 
 e.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java: 
 e.printStackTrace();
hbase-examples/src/main/java/org/apache/hadoop/hbase/mapreduce/SampleUploader.java:
e.printStackTrace();
hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/DemoClient.java:
e.printStackTrace();
hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java:
  e.printStackTrace();
hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
  ioe.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
ie.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCounter.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java: 
 e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java: 
 e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java: 
 e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterMapper.java:
  e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterTextMapper.java:
  e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java: 
   e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java: 
   e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
  e.printStackTrace();
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALPrettyPrinter.java:
  e.printStackTrace();
hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java:  
  e.printStackTrace();
hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java:  
  e.printStackTrace();
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java:
  e.printStackTrace();
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java:   
   e.printStackTrace();
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java:
ex.printStackTrace();
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java:
  e.printStackTrace();
{code}
The correct way of logging stack trace is to use the Logger instance.

  was:
Here is related code:
{code}
} catch (Exception e) {
  e.printStackTrace();
{code}
The correct way of logging stack trace is to use the Logger instance.


> Replace call to printStackTrace() with proper logger call
> -
>
> Key: HBASE-21040
> URL: https://issues.apache.org/jira/browse/HBASE-21040
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Priority: Minor
>
> Here is related code:
> {code}
> hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/RestoreDriver.java: 
>  e.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientAsyncPrefetchScanner.java:
>   first.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java:
> t.printStackTrace();
> hbase-client/src/main/java/org/apache/hadoop/hbase/filt

[jira] [Updated] (HBASE-21040) Replace call to printStackTrace() with proper logger call

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21040:
---
Summary: Replace call to printStackTrace() with proper logger call  (was: 
printStackTrace() is used in RestoreDriver in case Exception is caught)

> Replace call to printStackTrace() with proper logger call
> -
>
> Key: HBASE-21040
> URL: https://issues.apache.org/jira/browse/HBASE-21040
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Priority: Minor
>
> Here is related code:
> {code}
> } catch (Exception e) {
>   e.printStackTrace();
> {code}
> The correct way of logging stack trace is to use the Logger instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21040) printStackTrace() is used in RestoreDriver in case Exception is caught

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577906#comment-16577906
 ] 

Ted Yu edited comment on HBASE-21040 at 8/13/18 4:32 PM:
-

Should be fine.

Can you look at other similar calls in non-test code ?
{code}
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/RestoreDriver.java:   
   e.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientAsyncPrefetchScanner.java:
  first.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java:
t.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java: 
 e.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java: 
 e.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java: 
 e.printStackTrace();
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java: 
 e.printStackTrace();
hbase-examples/src/main/java/org/apache/hadoop/hbase/mapreduce/SampleUploader.java:
e.printStackTrace();
hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/DemoClient.java:
e.printStackTrace();
hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java:
  e.printStackTrace();
hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
  ioe.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:
ie.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCounter.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java: 
 e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java: 
 e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java:
e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java: 
 e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterMapper.java:
  e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TsvImporterTextMapper.java:
  e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java: 
   e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java: 
   e.printStackTrace();
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
  e.printStackTrace();
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALPrettyPrinter.java:
  e.printStackTrace();
hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java:  
  e.printStackTrace();
hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java:  
  e.printStackTrace();
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java:
  e.printStackTrace();
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java:   
   e.printStackTrace();
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java:
ex.printStackTrace();
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java:
  e.printStackTrace();
{code}


was (Author: yuzhih...@gmail.com):
Should be fine.

Can you look at similar calls in non-test code ?

> printStackTrace() is used in RestoreDriver in case Exception is caught
> --
>
> Key: HBASE-21040
> URL: https://issues.apache.org/jira/browse/HBASE-21040
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Priority: Minor
>
> Here is related code:
> {code}
> } catch (Exception e) {
>   e.printStackTrace();
> {code}
> The correct way of logging stack trace is to use the Logger instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578406#comment-16578406
 ] 

Ted Yu commented on HBASE-20734:


Agree with implementing existence check per region.

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>    Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than the hbase rootdir, because the recovered edits directory currently lives 
> under rootdir.
> Such a setup may not result in fast recovery when there is a region server 
> failover.
> This issue is to find a proper (hopefully backward compatible) way of 
> colocating the recovered edits directory with hbase.wal.dir.
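For context, a sketch of the kind of split layout being discussed
(hbase.wal.dir and hbase.rootdir are the real keys; the paths are
hypothetical):
{code}
<property>
  <name>hbase.wal.dir</name>
  <!-- WALs (and, with this change, recovered edits) on fast media -->
  <value>hdfs://nn/hbase-wal-ssd</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://nn/hbase</value>
</property>
{code}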



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21040) printStackTrace() is used in RestoreDriver in case Exception is caught

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577906#comment-16577906
 ] 

Ted Yu commented on HBASE-21040:


Should be fine.

Can you look at similar calls in non-test code ?

> printStackTrace() is used in RestoreDriver in case Exception is caught
> --
>
> Key: HBASE-21040
> URL: https://issues.apache.org/jira/browse/HBASE-21040
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Priority: Minor
>
> Here is related code:
> {code}
> } catch (Exception e) {
>   e.printStackTrace();
> {code}
> The correct way of logging stack trace is to use the Logger instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21040) printStackTrace() is used in RestoreDriver in case Exception is caught

2018-08-12 Thread Ted Yu (JIRA)
Ted Yu created HBASE-21040:
--

 Summary: printStackTrace() is used in RestoreDriver in case 
Exception is caught
 Key: HBASE-21040
 URL: https://issues.apache.org/jira/browse/HBASE-21040
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


Here is related code:
{code}
} catch (Exception e) {
  e.printStackTrace();
{code}
The correct way of logging stack trace is to use the Logger instance.
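A sketch of the shape of the fix, assuming the class keeps a LOG field as
RestoreDriver already does:
{code}
} catch (Exception e) {
  // route the stack trace through the configured logging framework
  // instead of writing it straight to stderr
  LOG.error("Error while running restore", e);
}
{code}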



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21038) SAXParseException when hbase.spark.use.hbasecontext=false

2018-08-12 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21038:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Ajith

> SAXParseException when hbase.spark.use.hbasecontext=false
> -
>
> Key: HBASE-21038
> URL: https://issues.apache.org/jira/browse/HBASE-21038
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-21038.01.patch, HBASE-21038.02.patch
>
>
> {{org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  {{ java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2728)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2574)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2477)}}
>  \{{ at org.apache.hadoop.conf.Configuration.get(Configuration.java:1047)}}
>  \{{ at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)}}
>  \{{ at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:449)}}
>  \{{ at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:184)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:71)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:136)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:79)}}
> This is because when hbase.spark.use.hbasecontext=false, 
> org.apache.hadoop.hbase.spark.HBaseRelation will try to create a HbaseContext 
> with org.apache.hadoop.hbase.spark.HBaseRelation#configResources. But if 
> org.apache.hadoop.hbase.spark.datasources.HBaseSparkConf#HBASE_CONFIG_LOCATION
>  is not set, the default is "" (empty string). 
> org.apache.hadoop.conf.Configuration#getResource(String) will return current 
> folder as URL due to classLoader.getResource(""), which will fail to be 
> parsed as a file
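A standalone sketch of the underlying classloader behavior (illustrative
only):
{code}
import org.apache.hadoop.conf.Configuration;

public class EmptyResourceRepro {
  public static void main(String[] args) {
    // an empty resource name resolves to the classpath root directory;
    // Configuration later tries to parse that URL as XML and fails
    System.out.println(Configuration.class.getClassLoader().getResource(""));
  }
}
{code}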



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21038) SAXParseException when hbase.spark.use.hbasecontext=false

2018-08-12 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577648#comment-16577648
 ] 

Ted Yu commented on HBASE-21038:


From https://builds.apache.org/job/PreCommit-HBASE-Build/14014/console :

15:33:14 +1 overall
15:33:14 
15:33:14 | Vote |  Subsystem |  Runtime   | Comment
15:33:14 

15:33:14 |   0  |reexec  |   0m 13s   | Docker mode activated. 
15:33:14 |  ||| Prechecks 
15:33:14 |  +1  |   @author  |   0m  0s   | The patch does not contain any 
@author 
15:33:14 |  ||| tags.
15:33:14 |  +1  |test4tests  |   0m  0s   | The patch appears to include 1 
new or 
15:33:14 |  ||| modified test files.
15:33:14 |  ||| master Compile Tests 
15:33:14 |  +1  |mvninstall  |   6m  3s   | master passed 
15:33:14 |  +1  |   compile  |   1m 24s   | master passed 
15:33:14 |  +1  |  scaladoc  |   0m 42s   | master passed 
15:33:14 |  ||| Patch Compile Tests 
15:33:14 |  +1  |   compile  |   1m 26s   | the patch passed 
15:33:14 |  +1  |scalac  |   1m 26s   | the patch passed 
15:33:14 |  +1  |whitespace  |   0m  0s   | The patch has no whitespace 
issues. 
15:33:14 |  +1  |  scaladoc  |   0m 39s   | the patch passed 
15:33:14 |  ||| Other Tests 
15:33:14 |  +1  |  unit  |   4m 57s   | hbase-spark in the patch 
passed. 
15:33:14 |  +1  |asflicense  |   0m  9s   | The patch does not generate ASF 
License 
15:33:14 |  ||| warnings.
15:33:14 |  ||  15m 49s   | 

> SAXParseException when hbase.spark.use.hbasecontext=false
> -
>
> Key: HBASE-21038
> URL: https://issues.apache.org/jira/browse/HBASE-21038
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Major
> Attachments: HBASE-21038.01.patch, HBASE-21038.02.patch
>
>
> {{org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  {{ java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2728)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2574)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2477)}}
>  \{{ at org.apache.hadoop.conf.Configuration.get(Configuration.java:1047)}}
>  \{{ at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)}}
>  \{{ at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:449)}}
>  \{{ at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:184)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:71)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:136)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:79)}}
> This is because when hbase.spark.use.hbasecontext=false, 
> org.apache.hadoop.hbase.spark.HBaseRelation will try to create a HbaseContext 
> with org.apache.hadoop.hbase.spark.HBaseRelation#configResources. But if 
> org.apache.hadoop.hbase.spark.datasources.HBaseSparkConf#HBASE_CONFIG_LOCATION
>  is not set, the default is "" (empty string). 
> org.apache.hadoop.conf.Configuration#getResource(String) will return current 
> folder as URL due to classLoader.getResource(""), which will fail to be 
> parsed as a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21038) SAXParseException when hbase.spark.use.hbasecontext=false

2018-08-12 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-21038:
--

Assignee: Ajith S

> SAXParseException when hbase.spark.use.hbasecontext=false
> -
>
> Key: HBASE-21038
> URL: https://issues.apache.org/jira/browse/HBASE-21038
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Major
> Attachments: HBASE-21038.01.patch, HBASE-21038.02.patch
>
>
> {{org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  {{ java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2728)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2574)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2477)}}
>  \{{ at org.apache.hadoop.conf.Configuration.get(Configuration.java:1047)}}
>  \{{ at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)}}
>  \{{ at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:449)}}
>  \{{ at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:184)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:71)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:136)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:79)}}
> This is because when hbase.spark.use.hbasecontext=false, 
> org.apache.hadoop.hbase.spark.HBaseRelation will try to create a HbaseContext 
> with org.apache.hadoop.hbase.spark.HBaseRelation#configResources. But if 
> org.apache.hadoop.hbase.spark.datasources.HBaseSparkConf#HBASE_CONFIG_LOCATION
>  is not set, the default is "" (empty string). 
> org.apache.hadoop.conf.Configuration#getResource(String) will return current 
> folder as URL due to classLoader.getResource(""), which will fail to be 
> parsed as a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21038) SAXParseException when hbase.spark.use.hbasecontext=false

2018-08-12 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21038:
---
Status: Patch Available  (was: Open)

> SAXParseException when hbase.spark.use.hbasecontext=false
> -
>
> Key: HBASE-21038
> URL: https://issues.apache.org/jira/browse/HBASE-21038
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajith S
>Priority: Major
> Attachments: HBASE-21038.01.patch, HBASE-21038.02.patch
>
>
> {{org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  {{ java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2728)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2574)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2477)}}
>  \{{ at org.apache.hadoop.conf.Configuration.get(Configuration.java:1047)}}
>  \{{ at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)}}
>  \{{ at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:449)}}
>  \{{ at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:184)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:71)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:136)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:79)}}
> This is because when hbase.spark.use.hbasecontext=false, 
> org.apache.hadoop.hbase.spark.HBaseRelation will try to create a HbaseContext 
> with org.apache.hadoop.hbase.spark.HBaseRelation#configResources. But if 
> org.apache.hadoop.hbase.spark.datasources.HBaseSparkConf#HBASE_CONFIG_LOCATION
>  is not set, the default is "" (empty string). 
> org.apache.hadoop.conf.Configuration#getResource(String) will return current 
> folder as URL due to classLoader.getResource(""), which will fail to be 
> parsed as a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-12 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577592#comment-16577592
 ] 

Ted Yu commented on HBASE-21031:


Looks good overall. 

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21031.branch-2.0.001.patch, 
> HBASE-21031.branch-2.0.002.patch, HBASE-21031.branch-2.0.003.patch, 
> HBASE-21031.branch-2.0.004.patch, memoryleak.png
>
>
> Due to HBASE-21029, when replaying edits with a lot of identical cells, the 
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore is not rolled back, and since MSLAB is 
> used, none of the allocated chunks will ever be released. That memory is 
> leaked forever...
> We need to roll back the memory if opening the region fails (for now, only 
> the global memstore size is decreased after a failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting 
> to record how much memory is used during replay, and decrease the global 
> memstore size if the replay fails. This is not right: during replay we may 
> also flush the memstore, so the size in the replayEditsPerRegion map is not 
> accurate at all! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21038) SAXParseException when hbase.spark.use.hbasecontext=false

2018-08-12 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577515#comment-16577515
 ] 

Ted Yu commented on HBASE-21038:


Is it possible to add a test showing the problem without the fix ?

> SAXParseException when hbase.spark.use.hbasecontext=false
> -
>
> Key: HBASE-21038
> URL: https://issues.apache.org/jira/browse/HBASE-21038
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajith S
>Priority: Major
> Attachments: HBASE-21038.01.patch
>
>
> {{org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  {{ java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: 
> [file:/home/qwerty/hbase/hbase/hbase-spark/target/test-classes/|file:///home/root1/hbase/hbase/hbase-spark/target/test-classes/];
>  lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2728)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2574)}}
>  \{{ at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2477)}}
>  \{{ at org.apache.hadoop.conf.Configuration.get(Configuration.java:1047)}}
>  \{{ at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)}}
>  \{{ at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:449)}}
>  \{{ at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:184)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:71)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:136)}}
>  \{{ at 
> org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:79)}}
> This is because when hbase.spark.use.hbasecontext=false, 
> org.apache.hadoop.hbase.spark.HBaseRelation will try to create a HbaseContext 
> with org.apache.hadoop.hbase.spark.HBaseRelation#configResources. But if 
> org.apache.hadoop.hbase.spark.datasources.HBaseSparkConf#HBASE_CONFIG_LOCATION
>  is not set, the default is "" (empty string). 
> org.apache.hadoop.conf.Configuration#getResource(String) will return current 
> folder as URL due to classLoader.getResource(""), which will fail to be 
> parsed as a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KYLIN-3317) Replace UUID.randomUUID with deterministic PRNG

2018-08-11 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KYLIN-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KYLIN-3317:
--
Description: 
Currently UUID.randomUUID is called in various places in the code base.
* It is non-deterministic.
* It uses a single secure random for UUID generation. This uses a single JVM 
wide lock, and this can lead to lock contention and other performance problems.

We should move to something that is deterministic by using seeded PRNGs
{code}
new UUID(ThreadLocalRandom.current().nextLong(), 
ThreadLocalRandom.current().nextLong())
{code}

  was:
Currently UUID.randomUUID is called in various places in the code base.

* It is non-deterministic.
* It uses a single secure random for UUID generation. This uses a single JVM 
wide lock, and this can lead to lock contention and other performance problems.

We should move to something that is deterministic by using seeded PRNGs
{code}
new UUID(ThreadLocalRandom.current().nextLong(), 
ThreadLocalRandom.current().nextLong())
{code}


> Replace UUID.randomUUID with deterministic PRNG
> ---
>
> Key: KYLIN-3317
> URL: https://issues.apache.org/jira/browse/KYLIN-3317
> Project: Kylin
>  Issue Type: Task
>  Components: Security
>    Reporter: Ted Yu
>Assignee: jiatao.tao
>Priority: Major
> Fix For: v2.5.0
>
>
> Currently UUID.randomUUID is called in various places in the code base.
> * It is non-deterministic.
> * It uses a single secure random for UUID generation. This uses a single JVM 
> wide lock, and this can lead to lock contention and other performance 
> problems.
> We should move to something that is deterministic by using seeded PRNGs
> {code}
> new UUID(ThreadLocalRandom.current().nextLong(), 
> ThreadLocalRandom.current().nextLong())
> {code}
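A tiny helper along these lines (class and method names are hypothetical)
keeps call sites tidy:
{code}
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

public final class FastUuid {
  private FastUuid() {
  }

  // Non-cryptographic UUID that avoids the JVM-wide SecureRandom lock.
  // Note: unlike UUID.randomUUID(), this does not set the RFC 4122
  // version/variant bits.
  public static UUID next() {
    ThreadLocalRandom r = ThreadLocalRandom.current();
    return new UUID(r.nextLong(), r.nextLong());
  }
}
{code}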



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19257) Document tool to dump information from MasterProcWALs file

2018-08-11 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19257:
---
Labels: document  (was: )

> Document tool to dump information from MasterProcWALs file
> --
>
> Key: HBASE-19257
> URL: https://issues.apache.org/jira/browse/HBASE-19257
> Project: HBase
>  Issue Type: Improvement
>    Reporter: Ted Yu
>Priority: Major
>  Labels: document
>
> I was troubleshooting a customer case where a high number of files piled up 
> under the MasterProcWALs directory.
> Gaining insight into a (sample) file from the MasterProcWALs dir would help 
> find the root cause.
> This JIRA is to document ProcedureWALPrettyPrinter which reads proc wal file 
> and prints (selected) information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-347: Enable batching in FindCoordinatorRequest

2018-08-10 Thread Ted Yu
bq. this is the foundation of some later possible optimizations (enable
batching in describeConsumerGroups ...)

Can you say more about why this change lays the foundation for those future
optimizations ?

You mentioned FIND_COORDINATOR_REQUEST_V3 in the wiki but I don't see it
in the PR.
I assume you would add that later.

Please read your wiki and fix grammatical errors such as the following:

bq. that need to be make

Thanks

On Wed, Aug 8, 2018 at 3:55 PM Yishun Guan  wrote:

> Hi all,
>
> I would like to start a discussion on:
>
> KIP-347: Enable batching in FindCoordinatorRequest
> https://cwiki.apache.org/confluence/x/CgZPBQ
>
> Thanks @Guozhang Wang  for his help and patience!
>
> Thanks,
> Yishun
>


[jira] [Created] (FLINK-10125) Unclosed ByteArrayDataOutputView in RocksDBMapState

2018-08-10 Thread Ted Yu (JIRA)
Ted Yu created FLINK-10125:
--

 Summary: Unclosed ByteArrayDataOutputView in RocksDBMapState
 Key: FLINK-10125
 URL: https://issues.apache.org/jira/browse/FLINK-10125
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


{code}
  ByteArrayDataOutputView dov = new ByteArrayDataOutputView(1);
{code}
dov is used in a try block but it is not closed in case of an Exception.
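A sketch of the shape of the fix, assuming ByteArrayDataOutputView is
Closeable (it ultimately extends a DataOutputStream-based view):
{code}
try (ByteArrayDataOutputView dov = new ByteArrayDataOutputView(1)) {
  // serialize into dov as before; the view is now closed
  // even when serialization throws
}
{code}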



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20917) MetaTableMetrics#stop references uninitialized requestsMap for non-meta region

2018-08-10 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16576643#comment-16576643
 ] 

Ted Yu commented on HBASE-20917:


[~xucang]:
Can you take a look at the addendum ?

thanks

> MetaTableMetrics#stop references uninitialized requestsMap for non-meta region
> --
>
> Key: HBASE-20917
> URL: https://issues.apache.org/jira/browse/HBASE-20917
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Fix For: 1.5.0, 1.4.6, 2.2.0
>
> Attachments: 20917.addendum, 20917.v1.txt, 20917.v2.txt
>
>
> I noticed the following in test output:
> {code}
> 2018-07-21 15:54:43,181 ERROR [RS_CLOSE_REGION-regionserver/172.17.5.4:0-1] 
> executor.EventHandler(186): Caught throwable while processing event 
> M_RS_CLOSE_REGION
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.coprocessor.MetaTableMetrics.stop(MetaTableMetrics.java:329)
>   at 
> org.apache.hadoop.hbase.coprocessor.BaseEnvironment.shutdown(BaseEnvironment.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionEnvironment.shutdown(RegionCoprocessorHost.java:165)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.shutdown(CoprocessorHost.java:290)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$4.postEnvCall(RegionCoprocessorHost.java:559)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:622)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postClose(RegionCoprocessorHost.java:551)
>   at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1678)
>   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1484)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> {code}
> {{requestsMap}} is only initialized for the meta region.
> However, the check for the meta region is absent in the stop method:
> {code}
>   public void stop(CoprocessorEnvironment e) throws IOException {
> // since meta region can move around, clear stale metrics when stop.
> for (String meterName : requestsMap.keySet()) {
> {code}
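For anyone following along, the fix needs a guard along these lines (a
sketch, not the exact addendum):
{code}
public void stop(CoprocessorEnvironment e) throws IOException {
  // requestsMap is only initialized for the meta region,
  // so skip the metric cleanup for every other region
  if (requestsMap == null) {
    return;
  }
  for (String meterName : requestsMap.keySet()) {
    // clear stale metrics as before
  }
}
{code}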



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20917) MetaTableMetrics#stop references uninitialized requestsMap for non-meta region

2018-08-10 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20917:
---
Attachment: 20917.addendum

> MetaTableMetrics#stop references uninitialized requestsMap for non-meta region
> --
>
> Key: HBASE-20917
> URL: https://issues.apache.org/jira/browse/HBASE-20917
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Fix For: 1.5.0, 1.4.6, 2.2.0
>
> Attachments: 20917.addendum, 20917.v1.txt, 20917.v2.txt
>
>
> I noticed the following in test output:
> {code}
> 2018-07-21 15:54:43,181 ERROR [RS_CLOSE_REGION-regionserver/172.17.5.4:0-1] 
> executor.EventHandler(186): Caught throwable while processing event 
> M_RS_CLOSE_REGION
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.coprocessor.MetaTableMetrics.stop(MetaTableMetrics.java:329)
>   at 
> org.apache.hadoop.hbase.coprocessor.BaseEnvironment.shutdown(BaseEnvironment.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionEnvironment.shutdown(RegionCoprocessorHost.java:165)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.shutdown(CoprocessorHost.java:290)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$4.postEnvCall(RegionCoprocessorHost.java:559)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:622)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postClose(RegionCoprocessorHost.java:551)
>   at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1678)
>   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1484)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> {code}
> {{requestsMap}} is only initialized for the meta region.
> However, the check for the meta region is absent in the stop method:
> {code}
>   public void stop(CoprocessorEnvironment e) throws IOException {
> // since meta region can move around, clear stale metrics when stop.
> for (String meterName : requestsMap.keySet()) {
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7276) Consider using re2j to speed up regex operations

2018-08-10 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-7276:
-

 Summary: Consider using re2j to speed up regex operations
 Key: KAFKA-7276
 URL: https://issues.apache.org/jira/browse/KAFKA-7276
 Project: Kafka
  Issue Type: Task
Reporter: Ted Yu


https://github.com/google/re2j

re2j claims to do linear time regular expression matching in Java.

Its benefit is most obvious for deeply nested regex (such as a | b | c | d).

We should consider using re2j to speed up regex operations.
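For illustration, re2j is designed as a near drop-in for java.util.regex, so
adoption is mostly an import swap (a sketch; note re2j deliberately drops
backtracking-only features such as backreferences):
{code}
import com.google.re2j.Matcher;
import com.google.re2j.Pattern;

public class Re2jDemo {
  public static void main(String[] args) {
    // same API shape as java.util.regex, but linear-time matching
    Pattern pattern = Pattern.compile("a|b|c|d");
    Matcher matcher = pattern.matcher("c");
    System.out.println(matcher.matches()); // true
  }
}
{code}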



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-10 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16576494#comment-16576494
 ] 

Ted Yu commented on HBASE-21031:


Can you come up with a test which fails if {{dropMemStoreContents}} only rolls 
back a single region (the region which encounters the Throwable) ?

Thanks

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21031.branch-2.0.001.patch, 
> HBASE-21031.branch-2.0.002.patch, HBASE-21031.branch-2.0.003.patch, 
> memoryleak.png
>
>
> Due to HBASE-21029, when replaying edits with a lot of identical cells, the 
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore is not rolled back, and since MSLAB is 
> used, none of the allocated chunks will ever be released. That memory is 
> leaked forever...
> We need to roll back the memory if opening the region fails (for now, only 
> the global memstore size is decreased after a failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting 
> to record how much memory is used during replay, and decrease the global 
> memstore size if the replay fails. This is not right: during replay we may 
> also flush the memstore, so the size in the replayEditsPerRegion map is not 
> accurate at all! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21029) Miscount of memstore's heap/offheap size if same cell was put

2018-08-10 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16576479#comment-16576479
 ] 

Ted Yu edited comment on HBASE-21029 at 8/10/18 3:59 PM:
-

I applied the test change only in master branch and ran:

-Dtest=TestDefaultMemStore#testPutSameCell

The test passed.
Can you come up with a test which fails without the change to Segment.java ?

Thanks


was (Author: yuzhih...@gmail.com):
I applied the test change only and ran:

-Dtest=TestDefaultMemStore#testPutSameCell

The test passed.
Can you come up with test which fails without change to Segment.java ?

Thanks

> Miscount of memstore's heap/offheap size if same cell was put
> -
>
> Key: HBASE-21029
> URL: https://issues.apache.org/jira/browse/HBASE-21029
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21029.branch-2.0.001.patch, 
> HBASE-21029.branch-2.0.002.patch
>
>
> We are now using memstore.heapSize() + memstore.offheapSize() to decide 
> whether a flush is needed. But, if the same cell is put again into the 
> memstore, only the memstore's dataSize will be increased; the heap/offheap 
> size won't. We encountered a problem where a user was putting the same kv 
> again and again, but the memstore wouldn't flush since the heap size was not 
> counted properly. In the end, the RS was killed by the system for running 
> out of memory.
> Actually, if MSLAB is used, the heap/offheap size will increase whether or 
> not the cell is added. IIRC, the memstore's heap/offheap size should always 
> be bigger than the data size. We introduced the heap/offheap size besides 
> the data size in order to reflect the memory footprint more precisely. 
> {code}
> // If there's already a same cell in the CellSet and we are using MSLAB, 
> we must count in the
> // MSLAB allocation size as well, or else there will be memory leak 
> (occupied heap size larger
> // than the counted number)
> if (succ || mslabUsed) {
>   cellSize = getCellLength(cellToAdd);
> }
> // heap/offheap size is changed only if the cell is truly added in the 
> cellSet
> long heapSize = heapSizeChange(cellToAdd, succ);
> long offHeapSize = offHeapSizeChange(cellToAdd, succ);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21029) Miscount of memstore's heap/offheap size if same cell was put

2018-08-10 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16576479#comment-16576479
 ] 

Ted Yu commented on HBASE-21029:


I applied the test change only and ran:

-Dtest=TestDefaultMemStore#testPutSameCell

The test passed.
Can you come up with a test which fails without the change to Segment.java ?

Thanks

> Miscount of memstore's heap/offheap size if same cell was put
> -
>
> Key: HBASE-21029
> URL: https://issues.apache.org/jira/browse/HBASE-21029
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21029.branch-2.0.001.patch, 
> HBASE-21029.branch-2.0.002.patch
>
>
> We are now using memstore.heapSize() + memstore.offheapSize() to decide 
> whether a flush is needed. But, if the same cell is put again into the 
> memstore, only the memstore's dataSize will be increased; the heap/offheap 
> size won't. We encountered a problem where a user was putting the same kv 
> again and again, but the memstore wouldn't flush since the heap size was not 
> counted properly. In the end, the RS was killed by the system for running 
> out of memory.
> Actually, if MSLAB is used, the heap/offheap size will increase whether or 
> not the cell is added. IIRC, the memstore's heap/offheap size should always 
> be bigger than the data size. We introduced the heap/offheap size besides 
> the data size in order to reflect the memory footprint more precisely. 
> {code}
> // If there's already a same cell in the CellSet and we are using MSLAB, 
> we must count in the
> // MSLAB allocation size as well, or else there will be memory leak 
> (occupied heap size larger
> // than the counted number)
> if (succ || mslabUsed) {
>   cellSize = getCellLength(cellToAdd);
> }
> // heap/offheap size is changed only if the cell is truly added in the 
> cellSet
> long heapSize = heapSizeChange(cellToAdd, succ);
> long offHeapSize = offHeapSizeChange(cellToAdd, succ);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-10 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16576466#comment-16576466
 ] 

Ted Yu commented on HBASE-21031:


I modified the dropMemStoreContents() method by passing it the region which 
encounters Throwable.
{code}
  public MemStoreSize dropMemStoreContents(HRegion r) throws IOException {
...
    for (HStore s : stores.values()) {
      if (!s.getHRegion().equals(r)) continue;
{code}
TestRecoveredEditsReplayAndAbort passes with the above change.

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21031.branch-2.0.001.patch, 
> HBASE-21031.branch-2.0.002.patch, HBASE-21031.branch-2.0.003.patch, 
> memoryleak.png
>
>
> Due to HBASE-21029, when replaying edits with a lot of identical cells, the 
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore is not rolled back, and since MSLAB is 
> used, none of the chunks allocated will ever be released. That memory is 
> leaked forever...
> We need to roll back the memory if opening the region fails (for now, only 
> the global memstore size is decreased after a failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting 
> to record how much memory is used during replay, and decrease the global 
> memstore size if replay fails. This is not right: during replay we may also 
> flush the memstore, so the size in the replayEditsPerRegion map is not 
> accurate at all! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21030) Correct javadoc for append operation

2018-08-10 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-21030:
--

Assignee: Subrat Mishra

> Correct javadoc for append operation
> 
>
> Key: HBASE-21030
> URL: https://issues.apache.org/jira/browse/HBASE-21030
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Subrat Mishra
>Priority: Minor
>  Labels: beginner, beginners
>
> The doc for the {{append}} operation is incorrect (see {{@param append}} in the 
> code snippet below or 
> [Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
> {code:java}
>   /**
>* Appends values to one or more columns within a single row.
>* 
>* This operation guaranteed atomicity to readers. Appends are done
>* under a single row lock, so write operations to a row are synchronized, 
> and
>* readers are guaranteed to see this operation fully completed.
>*
>* @param append object that specifies the columns and amounts to be used
>*  for the increment operations
>* @throws IOException e
>* @return values of columns after the append operation (maybe null)
>*/
> {code}
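
A corrected version might read roughly as follows (a sketch; only the
@param wording needs to change from "increment" to "append"):
{code}
 * @param append object that specifies the columns and values to be used
 *  for the append operations
{code}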



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21011) Provide CLI option to run oldwals and hfiles cleaner separately when cleaner chore is disabled

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575733#comment-16575733
 ] 

Ted Yu commented on HBASE-21011:


I may not have time to fully digest the proposal here.

The scenario seems to target a corner case (as mentioned in the description).
I hope Reid can take another look at the latest patch (since he seems to have 
strong opinions about it).

Thanks

> Provide CLI option to run oldwals and hfiles cleaner separately when cleaner 
> chore is disabled
> --
>
> Key: HBASE-21011
> URL: https://issues.apache.org/jira/browse/HBASE-21011
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, Client
>Affects Versions: 3.0.0, 1.4.6, 2.1.1
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
> Attachments: HBASE-21011.master.001.patch, 
> HBASE-21011.master.002.patch, HBASE-21011.master.003.patch, 
> HBASE-21011.master.004.patch
>
>
> There is a corner case when the cleaner chore for HFiles and oldwals is 
> disabled: the admin/user needs to manually execute the admin command 
> {{cleaner_chore_run}} to clean the old HFiles and oldwals. The existing logic 
> of {{cleaner_chore_run}} is to [first trigger the HFiles cleaner and then the 
> oldwals 
> cleaner|https://github.com/taklwu/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java#L1414-L1420],
>  and it only returns success if both complete. 
> But when running this {{cleaner_chore_run}} command, there is a potential use 
> case where the admin would like to trigger the cleaner for only oldwals or 
> hfiles while still keeping the automatic cleaner chore disabled. So, this 
> change aims to support this corner case and give users who keep the cleaner 
> chore disabled by default the flexibility to run the oldwals and HFiles 
> cleaning procedures individually via the admin CLI.
> NOTE that {{cleaner_chore_run}} was introduced in HBASE-17280; this patch 
> adds the options 'hfiles' and 'oldwals' to it. It also fixes the default 
> behavior: {{cleaner_chore_run}} only runs when the cleaner chore is set to 
> disabled. The proposed admin CLI options are
> {noformat}
> hbase> cleaner_chore_run   # introduced in HBASE-17280; now only runs 
> when the cleaner chore is set to disabled
> hbase> cleaner_chore_run 'hfiles'  # added; runs when the cleaner chore is 
> set to disabled
> hbase> cleaner_chore_run 'oldwals' # added; runs when the cleaner chore is 
> set to disabled
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575127#comment-16575127
 ] 

Ted Yu commented on HBASE-20943:


lgtm

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943-master-v3.patch, HBASE-20943.patch, 
> Screen Shot 2018-07-25 at 2.51.19 PM.png
>
>
> We intensively use metrics to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck, unable to be brought 
> online, due to an AWS issue that corrupted some log files. It would be good 
> if we could catch this early. Although the WebUI has this information, it is 
> not useful for automated monitoring. By adding this metric, we can easily 
> monitor regions with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9825) Upgrade checkstyle version to 8.6

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575017#comment-16575017
 ] 

Ted Yu commented on FLINK-9825:
---

Thanks, Dalong.

> Upgrade checkstyle version to 8.6
> -
>
> Key: FLINK-9825
> URL: https://issues.apache.org/jira/browse/FLINK-9825
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>    Reporter: Ted Yu
>Assignee: dalongliu
>Priority: Minor
>
> We should upgrade checkstyle version to 8.6+ so that we can use the "match 
> violation message to this regex" feature for suppression. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16574996#comment-16574996
 ] 

Ted Yu commented on HBASE-20943:


Looks good overall.
{code}
> +  PairOfSameType<Integer> getRegionNumbers();
{code}
getRegionNumbers -> getRegionCounts
Also adjust the corresponding javadoc.
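
A sketch of the renamed accessor (assuming the pair carries the online and 
offline counts, in that order):
{code}
/**
 * @return a pair of (online, offline) region counts
 */
PairOfSameType<Integer> getRegionCounts();
{code}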

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943.patch, Screen Shot 2018-07-25 at 
> 2.51.19 PM.png
>
>
> We intensively use metrics to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck, unable to be brought 
> online, due to an AWS issue that corrupted some log files. It would be good 
> if we could catch this early. Although the WebUI has this information, it is 
> not useful for automated monitoring. By adding this metric, we can easily 
> monitor regions with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21021) Result returned by Append operation should be ordered

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16574984#comment-16574984
 ] 

Ted Yu commented on HBASE-21021:


[~allan163]:
What do you think of Nihal's comment above ?

> Result returned by Append operation should be ordered
> -
>
> Key: HBASE-21021
> URL: https://issues.apache.org/jira/browse/HBASE-21021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-21021.branch-1.001.patch
>
>
> *Problem:*
> The result returned by the append operation should be ordered. Currently, it 
> returns an unordered list, which may cause problems: if the user calls 
> Result.getValue(byte[] family, byte[] qualifier), the method may return null 
> even though the returned result has a value corresponding to (family, 
> qualifier), because it performs a binary search over the unsorted result 
> (which should actually have been sorted).
>  
> The result is enumerated by iterating over each entry of the tempMemstore hashmap 
> (which will never be ordered) and adding the values (see 
> [HRegion.java#L7882|https://github.com/apache/hbase/blob/1b50fe53724aa62a242b7f64adf7845048df/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7882]).
>  
> *Actual:* The returned result is unordered
> *Expected:* Similar to increment op, the returned result should be ordered.
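
The fix direction is to sort the accumulated cells before building the 
Result, as the increment path does (sketch; HBase 2.x names shown, branch-1 
would use KeyValue.COMPARATOR with Collections.sort):
{code}
allKVs.sort(CellComparator.getInstance());
return Result.create(allKVs);
{code}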



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Java API to read metrics via JMX

2018-08-08 Thread Ted Yu
Boris:
BrokerWithJMX is referenced but I didn't find the class source after a
brief search.

FYI
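
For anyone looking for a starting point, a minimal sketch of reading one
broker metric over JMX (the port and MBean below are illustrative; adjust
to your setup):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxMetricDump {
  public static void main(String[] args) throws Exception {
    // assumes the broker was started with JMX enabled on port 9999
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection mbsc = connector.getMBeanServerConnection();
      // a standard broker metric; swap in whatever MBean you need
      ObjectName name = new ObjectName(
          "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
      Object rate = mbsc.getAttribute(name, "OneMinuteRate");
      System.out.println("MessagesInPerSec.OneMinuteRate = " + rate);
      // from here, write the sampled value to the TSDB of your choice
    }
  }
}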

On Wed, Aug 8, 2018 at 7:10 PM Boris Lublinsky <
boris.lublin...@lightbend.com> wrote:

> It's actually quite simple; unfortunately you have to read and then write
> to the TSDB.
> Enclosed is an example doing this and dumping to InfluxDB
>
>
> Boris Lublinsky
> FDP Architect
> boris.lublin...@lightbend.com
> https://www.lightbend.com/
>
> On Aug 8, 2018, at 8:46 PM, Raghav  wrote:
>
> Hi
>
> Is there any Java API available so that I can enable our Kafka cluster's
> JMX port, consume metrics via the JMX API, and dump them to a time-series
> database?
>
> I checked out jmxtrans, but currently it does not dump to TSDB (time series
> database).
>
> Thanks.
>
> R
>
>
>


[jira] [Commented] (KAFKA-7175) Make version checking logic more flexible in streams_upgrade_test.py

2018-08-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16574181#comment-16574181
 ] 

Ted Yu commented on KAFKA-7175:
---

Thanks for taking this, Ray.

> Make version checking logic more flexible in streams_upgrade_test.py
> 
>
> Key: KAFKA-7175
> URL: https://issues.apache.org/jira/browse/KAFKA-7175
> Project: Kafka
>  Issue Type: Improvement
>    Reporter: Ted Yu
>Assignee: Ray Chiang
>Priority: Major
>  Labels: newbie++
>
> During debugging of the system test failure for KAFKA-5037, it was 
> re-discovered that the version numbers inside version-probing-related 
> messages are hard-coded in streams_upgrade_test.py.
> This is inflexible.
> We should correlate latest version from Java class with the expected version 
> numbers.
> Matthias made the following suggestion:
> We should also make this more generic and test upgrades from 3 -> 4, 3 -> 5 
> and 4 -> 5. The current code does only go from latest version to future 
> version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KYLIN-3450) Consider using google re2j

2018-08-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KYLIN-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556761#comment-16556761
 ] 

Ted Yu edited comment on KYLIN-3450 at 8/9/18 1:55 AM:
---

It would be nice to see improvement from using re2j.


was (Author: yuzhih...@gmail.com):
It would be nice to see improvement from using re2j

> Consider using google re2j
> --
>
> Key: KYLIN-3450
> URL: https://issues.apache.org/jira/browse/KYLIN-3450
> Project: Kylin
>  Issue Type: Improvement
>    Reporter: Ted Yu
>Priority: Minor
>
> RE2J : https://github.com/google/re2j
> For regular expression patterns with a high degree of alternation, using RE2J 
> would exhibit higher performance.
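
For illustration, a minimal sketch of swapping in re2j (its API mirrors 
java.util.regex for basic matching; the pattern is just an example):
{code}
import com.google.re2j.Matcher;
import com.google.re2j.Pattern;

// alternation-heavy patterns run in linear time under re2j, whereas
// java.util.regex may backtrack exponentially on pathological inputs
Pattern p = Pattern.compile("(foo|bar|baz|qux)+");
Matcher m = p.matcher("foobarbazqux");
System.out.println(m.matches()); // prints: true
{code}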



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9849) Create hbase connector for hbase version to 2.0.1

2018-08-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9849:
--
Description: 
Currently hbase 1.4.3 is used for hbase connector.

We should create connector for hbase 2.0.1 which was recently released.

  was:
Currently hbase 1.4.3 is used for hbase connector.

We should upgrade to 2.0.1 which was recently released.


> Create hbase connector for hbase version to 2.0.1
> -
>
> Key: FLINK-9849
> URL: https://issues.apache.org/jira/browse/FLINK-9849
> Project: Flink
>  Issue Type: Improvement
>    Reporter: Ted Yu
>Assignee: zhangminglei
>Priority: Major
>  Labels: pull-request-available
> Attachments: hbase-2.1.0.dep
>
>
> Currently hbase 1.4.3 is used for hbase connector.
> We should create connector for hbase 2.0.1 which was recently released.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7174) Improve version probing of subscription info

2018-08-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-7174:
--
Labels: compatibility  (was: )

> Improve version probing of subscription info
> 
>
> Key: KAFKA-7174
> URL: https://issues.apache.org/jira/browse/KAFKA-7174
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>    Reporter: Ted Yu
>Priority: Major
>  Labels: compatibility
>
> During code review for KAFKA-5037, [~guozhang] made the following suggestion:
> Currently the version probing works as the following:
> when the leader receives subscription info encoded with a higher version than 
> it can understand (e.g. the leader is on version 3, while one of the 
> subscription received is encode with version 4), it will send back an empty 
> assignment with the assignment encoded with version 3, and also 
> latestSupportedVersion set to 3.
> when the member receives the assignment, it checks if latestSupportedVersion 
> is smaller than the version it used for encoding the sent subscription (i.e. 
> the above logic). If it is smaller, then it means that leader cannot 
> understand, in this case, version 4. It will then set the flag and then 
> re-subscribe but with a down-graded encoding format of version 3.
> NOW with PR #5322, we can let the leader clearly communicate this error via 
> the error code, and upon receiving the assignment, if the error code is 
> VERSION_PROBING, then the member can immediately know what happened, and 
> hence can simplify the above logic. 
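
A hypothetical sketch of the simplified member-side check (names are 
illustrative, not the actual Streams internals):
{code}
if (assignmentErrorCode == VERSION_PROBING) {
  // the leader told us explicitly that it could not parse our subscription;
  // re-subscribe with the leader's latest supported version
  usedSubscriptionVersion = leaderLatestSupportedVersion;
  rejoinGroup();
}
{code}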



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9924) Upgrade zookeeper to 3.4.13

2018-08-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9924:
--
Description: 
zookeeper 3.4.13 is being released.

ZOOKEEPER-2959 fixes data loss when observer is used
ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / 
cloud)
environment

  was:
zookeeper 3.4.13 is being released.


ZOOKEEPER-2959 fixes data loss when observer is used
ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / 
cloud)
environment


> Upgrade zookeeper to 3.4.13
> ---
>
> Key: FLINK-9924
> URL: https://issues.apache.org/jira/browse/FLINK-9924
> Project: Flink
>  Issue Type: Task
>    Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>
> zookeeper 3.4.13 is being released.
> ZOOKEEPER-2959 fixes data loss when observer is used
> ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container 
> / cloud)
> environment



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2332) Potentially overflowing expression in UnsafeFixLengthColumnPage

2018-08-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated CARBONDATA-2332:
---
Description: 
Here is one example from getFloatPage :
{code}
for (int i = 0; i < data.length; i++) {
  long offset = i << floatBits;
{code}
The shift expression with type "int" (32 bits, signed) is evaluated using 
32-bit arithmetic.
But the variable offset is of type long.

There are a few other shift expressions of this nature.

  was:
Here is one example from getFloatPage :

{code}
for (int i = 0; i < data.length; i++) {
  long offset = i << floatBits;
{code}
The shift expression with type "int" (32 bits, signed) is evaluated using 
32-bit arithmetic.
But the variable offset is of type long.

There are a few other shift expressions of this nature.


> Potentially overflowing expression in UnsafeFixLengthColumnPage
> ---
>
> Key: CARBONDATA-2332
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2332
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> Here is one example from getFloatPage :
> {code}
> for (int i = 0; i < data.length; i++) {
>   long offset = i << floatBits;
> {code}
> The shift expression with type "int" (32 bits, signed) is evaluated using 
> 32-bit arithmetic.
> But the variable offset is of type long.
> There are a few other shift expressions of this nature.
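
A minimal fix is to widen the operand before shifting so the shift is 
evaluated in 64-bit arithmetic (sketch):
{code}
for (int i = 0; i < data.length; i++) {
  // cast first; "i << floatBits" alone is computed as a 32-bit int and can
  // overflow before being widened into the long
  long offset = ((long) i) << floatBits;
}
{code}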



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (BAHIR-172) Avoid FileInputStream/FileOutputStream

2018-08-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/BAHIR-172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated BAHIR-172:
-
Description: 
They rely on finalizers (before Java 11), which create unnecessary GC load.


The alternatives, {{Files.newInputStream}}, are as easy to use and don't have 
this issue.

  was:
They rely on finalizers (before Java 11), which create unnecessary GC load.

The alternatives, {{Files.newInputStream}}, are as easy to use and don't have 
this issue.


> Avoid FileInputStream/FileOutputStream
> --
>
> Key: BAHIR-172
> URL: https://issues.apache.org/jira/browse/BAHIR-172
> Project: Bahir
>  Issue Type: Bug
>    Reporter: Ted Yu
>Priority: Minor
>
> They rely on finalizers (before Java 11), which create unnecessary GC load.
> The alternatives, {{Files.newInputStream}}, are as easy to use and don't have 
> this issue.
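
A minimal sketch of the replacement (InputStream.transferTo needs Java 9+; 
copy manually on older JDKs):
{code}
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

// instead of new FileInputStream("in.dat") / new FileOutputStream("out.dat"):
try (InputStream in = Files.newInputStream(Paths.get("in.dat"));
     OutputStream out = Files.newOutputStream(Paths.get("out.dat"))) {
  in.transferTo(out);
}
{code}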



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20995) Clean Up Manual Array Copies, trivial

2018-08-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16573875#comment-16573875
 ] 

Ted Yu commented on HBASE-20995:


For the change in HBaseAdmin:
{code}
if (startKeys.length - 1 >= 0) System.arraycopy(startKeys, 1, 
    splits, 0, startKeys.length - 1);
{code}
If startKeys.length == 1, the loop in the original code wouldn't do anything.
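
For comparison, the replaced loop does the same thing (sketch, assuming 
byte[][] arrays as in HBaseAdmin):
{code}
// original loop: no-op when startKeys.length == 1
for (int i = 1; i < startKeys.length; i++) {
  splits[i - 1] = startKeys[i];
}
// arraycopy with length 0 is likewise a no-op; the "length - 1 >= 0" guard
// only matters for an empty startKeys array, where the negative length
// would make System.arraycopy throw
{code}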

> Clean Up Manual Array Copies, trivial
> -
>
> Key: HBASE-20995
> URL: https://issues.apache.org/jira/browse/HBASE-20995
> Project: HBase
>  Issue Type: Improvement
>Reporter: John Leach
>Assignee: John Leach
>Priority: Trivial
> Attachments: HBASE-20995.patch
>
>
> Clean up manual array copies in code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21026) Fix Backup/Restore command usage bug in book

2018-08-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21026:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Mingliang

Thanks for the reviews.

> Fix Backup/Restore command usage bug in book
> 
>
> Key: HBASE-21026
> URL: https://issues.apache.org/jira/browse/HBASE-21026
> Project: HBase
>  Issue Type: Bug
>  Components: backuprestore, documentation
>Affects Versions: 2.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21026.001.patch, HBASE-21026.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21026) Fix Backup/Restore command usage bug in book

2018-08-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16573662#comment-16573662
 ] 

Ted Yu commented on HBASE-21026:


+1

> Fix Backup/Restore command usage bug in book
> 
>
> Key: HBASE-21026
> URL: https://issues.apache.org/jira/browse/HBASE-21026
> Project: HBase
>  Issue Type: Bug
>  Components: backuprestore, documentation
>Affects Versions: 2.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HBASE-21026.001.patch, HBASE-21026.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Error reading field 'version' when execute kafka-consumer-groups.sh

2018-08-08 Thread Ted Yu
For the last error, please see KAFKA-6868

On Wed, Aug 8, 2018 at 8:18 AM 刘鹏  wrote:

> hi,
> I get an error when I run kafka-consumer-groups.sh.
>
>
> firstly,I run:
> ./kafka-consumer-groups.sh --command-config consumer.properties
> --bootstrap-server ip:port --list
>
> the result is:
> Note: This will not show information about old Zookeeper-based
> consumers.
> mygroup
>
> then,I run:
>./kafka-consumer-groups.sh --command-config consumer.properties
> --bootstrap-server ip:port --describe --group mygroup
>
>
>the output I get is:
>Note: This will not show information about old Zookeeper-based
> consumers.
>Error: Executing consumer group command failed due to Error reading
> field 'version': java.nio.BufferUnderflowException
>
>
>my Kafka version is 2.11-1.1.0 and ZooKeeper version is 3.4.6. I have a
> 5-node ZooKeeper cluster and a 5-node Kafka broker cluster; they are on the
> same 5 machines. I am using sarama-cluster as the Kafka client in Go.
>
>
>How can I find more details about the error? Does anyone have an idea of
> when this error occurs?
>
>
>thanks a lot.
>
>
>
>
> thank you
> liupeng
>
>


[jira] [Assigned] (HBASE-21018) RS crashed because AsyncFS was unable to update HDFS data encryption key

2018-08-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-21018:
--

Assignee: Wei-Chiu Chuang

> RS crashed because AsyncFS was unable to update HDFS data encryption key
> 
>
> Key: HBASE-21018
> URL: https://issues.apache.org/jira/browse/HBASE-21018
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0, 
> HDFS configuration dfs.encrypt.data.transfer = true
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Attachments: HBASE-21018.master.001.patch
>
>
> We (+[~uagashe]) found that the HBase RegionServer doesn't update the HDFS 
> data encryption key correctly, and in some cases, after retrying 10 times, it 
> aborts.
> {noformat}
> 2018-08-03 17:37:03,233 WARN 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper: create 
> fan-out dfs output 
> /hbase/WALs/rs1.example.com,22101,1533318719239/rs1.example.com%2C22101%2C1533318719239.rs1.example.com%2C22101%2C1533318719239.regiongroup-0.1533343022981
>  failed, retry = 1
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=1685436998) doesn't exist. Current key: 1085959374
> at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper$SaslNegotiateHandler.check(FanOutOneBlockAsyncDFSOutputSaslHelper.java:399)
> at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper$SaslNegotiateHandler.channelRead(FanOutOneBlockAsyncDFSOutputSaslHelper.java:470)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPi

Re: Kafka flow

2018-08-08 Thread Ted Yu
Have you looked at http://kafka.apache.org/documentation/ ?

Chapters 4 and 5.

FYI

On Wed, Aug 8, 2018 at 1:46 AM darekAsz  wrote:

> Hi,
> I need information about the workflow in Kafka. I want to know what is going
> on when a producer sends a message, with as much detail as possible about
> the requests and responses. Can anyone tell me where I can find this?
> Best regards
>


Re: [VOTE] KIP-289: Improve the default group id behavior in KafkaConsumer

2018-08-08 Thread Ted Yu
+1

On Wed, Aug 8, 2018 at 4:09 AM Mickael Maison 
wrote:

> +1 (non-binding)
> Thanks Vahid
> On Wed, Aug 8, 2018 at 11:26 AM Kamal Chandraprakash
>  wrote:
> >
> > +1 (non-binding)
> >
> > Thanks for the KIP.
> >
> > On Wed, Aug 8, 2018 at 3:11 PM Stanislav Kozlovski <
> stanis...@confluent.io>
> > wrote:
> >
> > > +1 (non-binding)
> > > Thanks!
> > >
> > > On Tue, Aug 7, 2018 at 11:47 PM Jason Gustafson 
> > > wrote:
> > >
> > > > +1 Thanks Vahid.
> > > >
> > > > On Tue, Aug 7, 2018 at 11:14 AM, Vahid S Hashemian <
> > > > vahidhashem...@us.ibm.com> wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to start a vote on KIP-289 to modify the default group id
> of
> > > > > KafkaConsumer.
> > > > > The KIP:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 289%3A+Improve+the+default+group+id+behavior+in+KafkaConsumer
> > > > > The discussion thread:
> > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg87379.html
> > > > >
> > > > > Thanks!
> > > > > --Vahid
> > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > Best,
> > > Stanislav
> > >
>


[jira] [Commented] (KAFKA-6647) KafkaStreams.cleanUp creates .lock file in directory its trying to clean (Windows OS)

2018-08-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572793#comment-16572793
 ] 

Ted Yu commented on KAFKA-6647:
---

Using the change from pull request 4702, does the test pass?
I don't have access to Windows, hence the question.

Thanks

> KafkaStreams.cleanUp creates .lock file in directory its trying to clean 
> (Windows OS)
> -
>
> Key: KAFKA-6647
> URL: https://issues.apache.org/jira/browse/KAFKA-6647
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.1
> Environment: windows 10.
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
> org.apache.kafka:kafka-streams:1.0.1
> Kafka commitId : c0518aa65f25317e
>Reporter: George Bloggs
>Priority: Minor
>  Labels: streams
>
> When calling kafkaStreams.cleanUp() before starting a stream the 
> StateDirectory.cleanRemovedTasks() method contains this check:
> {code:java}
> ... Line 240
>   if (lock(id, 0)) {
> long now = time.milliseconds();
> long lastModifiedMs = taskDir.lastModified();
> if (now > lastModifiedMs + cleanupDelayMs) {
> log.info("{} Deleting obsolete state directory {} 
> for task {} as {}ms has elapsed (cleanup delay is {}ms)", logPrefix(), 
> dirName, id, now - lastModifiedMs, cleanupDelayMs);
> Utils.delete(taskDir);
> }
> }
> {code}
> The check for lock(id,0) will create a .lock file in the directory that 
> subsequently is going to be deleted. If the .lock file already exists from a 
> previous run the attempt to delete the .lock file fails with 
> AccessDeniedException.
> This leaves the .lock file in the taskDir. Calling Utils.delete(taskDir) will 
> then attempt to remove the taskDir path calling Files.delete(path).
> The call to files.delete(path) in postVisitDirectory will then fail 
> java.nio.file.DirectoryNotEmptyException as the failed attempt to delete the 
> .lock file left the directory not empty. (o.a.k.s.p.internals.StateDirectory  
>  : stream-thread [restartedMain] Failed to lock the state directory due 
> to an unexpected exception)
> This seems to then cause issues using streams from a topic to an inMemory 
> store.
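
A sketch of the cleanup order that would avoid this on Windows (illustrative 
names, not the actual StateDirectory internals):
{code}
unlock(id);                       // release the FileLock and its channel first
Files.deleteIfExists(lockPath);   // then remove the .lock file itself
Utils.delete(taskDir);            // only now is the directory truly empty
{code}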



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21024) Add RSgroup info for Region Servers and Tables

2018-08-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572614#comment-16572614
 ] 

Ted Yu commented on HBASE-21024:


Seems interesting.

Want to attach a patch?

> Add RSgroup info for Region Servers and Tables
> --
>
> Key: HBASE-21024
> URL: https://issues.apache.org/jira/browse/HBASE-21024
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, UI
>Reporter: wenbang
>Priority: Minor
> Attachments: image-2018-08-08-11-06-09-718.png, 
> image-2018-08-08-11-06-33-163.png, image-2018-08-08-11-07-06-721.png
>
>
> HBASE-19799 has added the rsgroup webui.
> Should we also show RSGroup info for Region Servers and tables in the webui 
> when the RSGroup feature is enabled? !image-2018-08-08-11-06-09-718.png!
> !image-2018-08-08-11-07-06-721.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572540#comment-16572540
 ] 

Ted Yu commented on HBASE-20943:


{code}
String ONLINE_REGION_NUMBER_NAME = "onlineRegionNumber";
{code}
Number is often associated with an ordinal, e.g. engine number 9.

Please name the metric onlineRegionCount.
Same for the offline region count.

The rest looks fine.

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943.patch, Screen Shot 2018-07-25 at 2.51.19 
> PM.png
>
>
> We intensively use metrics to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck, unable to be brought 
> online, due to an AWS issue that corrupted some log files. It would be good 
> if we could catch this early. Although the WebUI has this information, it is 
> not useful for automated monitoring. By adding this metric, we can easily 
> monitor regions with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9824) Support IPv6 literal

2018-08-07 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9824:
--
Description: 
Currently we use colon as separator when parsing host and port.

We should support the usage of IPv6 literals in parsing.

  was:
Currently we use colon as separator when parsing host and port.

We should support the usage of IPv6 literals in parsing.


> Support IPv6 literal
> 
>
> Key: FLINK-9824
> URL: https://issues.apache.org/jira/browse/FLINK-9824
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>    Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Currently we use colon as separator when parsing host and port.
> We should support the usage of IPv6 literals in parsing.
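
A minimal sketch of host:port parsing that also accepts bracketed IPv6 
literals such as "[2001:db8::1]:6123" (names are illustrative):
{code}
static String[] parseHostPort(String s) {
  if (s.startsWith("[")) {
    int end = s.indexOf(']');
    if (end < 0 || end + 1 >= s.length() || s.charAt(end + 1) != ':') {
      throw new IllegalArgumentException("malformed address: " + s);
    }
    return new String[] { s.substring(1, end), s.substring(end + 2) };
  }
  int colon = s.lastIndexOf(':');
  if (colon < 0) {
    throw new IllegalArgumentException("missing port: " + s);
  }
  return new String[] { s.substring(0, colon), s.substring(colon + 1) };
}
{code}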



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20968) list_procedures_test fails due to no matching regex

2018-08-07 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20968:
---
Attachment: 20968.v2.txt

> list_procedures_test fails due to no matching regex
> ---
>
> Key: HBASE-20968
> URL: https://issues.apache.org/jira/browse/HBASE-20968
> Project: HBase
>  Issue Type: Test
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Attachments: 20968.v2.txt, 
> org.apache.hadoop.hbase.client.TestShell-output.txt
>
>
> From test output against hadoop3:
> {code}
> 2018-07-28 12:04:24,838 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(948): Stored pid=12, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.  
> ShellTestProcedure
> 2018-07-28 12:04:24,864 INFO  [RS-EventLoopGroup-1-3] 
> ipc.ServerRpcConnection(556): Connection from 172.18.128.12:46918, 
> version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth: SIMPLE), 
> service=MasterService
> 2018-07-28 12:04:24,900 DEBUG [Thread-114] master.MasterRpcServices(1157): 
> Checking to see if procedure is done pid=11
> F
> ===
> Failure: 
> test_list_procedures(Hbase::ListProceduresTest)
> src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
> test_list_procedures'
>  62: end
>  63:   end
>  64:
>   => 65:   assert_equal(1, matching_lines)
>  66: end
>  67:   end
>  68: end
> <1> expected but was
> <0>
> ===
> ...
> 2018-07-28 12:04:25,374 INFO  [PEWorker-9] 
> procedure2.ProcedureExecutor(1316): Finished pid=12, state=SUCCESS, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.   
> ShellTestProcedure in 336msec
> {code}
> The completion of the ShellTestProcedure was after the assertion was raised.
> {code}
> def create_procedure_regexp(table_name)
>   regexp_string = '[0-9]+ .*ShellTestProcedure SUCCESS.*' \
> {code}
> The regex used by the test isn't found in test output either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20968) list_procedures_test fails due to no matching regex

2018-08-07 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20968:
---
Attachment: (was: 20968.v1.txt)

> list_procedures_test fails due to no matching regex
> ---
>
> Key: HBASE-20968
> URL: https://issues.apache.org/jira/browse/HBASE-20968
> Project: HBase
>  Issue Type: Test
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Attachments: org.apache.hadoop.hbase.client.TestShell-output.txt
>
>
> From test output against hadoop3:
> {code}
> 2018-07-28 12:04:24,838 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(948): Stored pid=12, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.  
> ShellTestProcedure
> 2018-07-28 12:04:24,864 INFO  [RS-EventLoopGroup-1-3] 
> ipc.ServerRpcConnection(556): Connection from 172.18.128.12:46918, 
> version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth: SIMPLE), 
> service=MasterService
> 2018-07-28 12:04:24,900 DEBUG [Thread-114] master.MasterRpcServices(1157): 
> Checking to see if procedure is done pid=11
> F
> ===
> Failure: 
> test_list_procedures(Hbase::ListProceduresTest)
> src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
> test_list_procedures'
>  62: end
>  63:   end
>  64:
>   => 65:   assert_equal(1, matching_lines)
>  66: end
>  67:   end
>  68: end
> <1> expected but was
> <0>
> ===
> ...
> 2018-07-28 12:04:25,374 INFO  [PEWorker-9] 
> procedure2.ProcedureExecutor(1316): Finished pid=12, state=SUCCESS, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.   
> ShellTestProcedure in 336msec
> {code}
> The completion of the ShellTestProcedure was after the assertion was raised.
> {code}
> def create_procedure_regexp(table_name)
>   regexp_string = '[0-9]+ .*ShellTestProcedure SUCCESS.*' \
> {code}
> The regex used by the test isn't found in test output either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20968) list_procedures_test fails due to no matching regex

2018-08-07 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20968:
---
Attachment: 20968.v1.txt

> list_procedures_test fails due to no matching regex
> ---
>
> Key: HBASE-20968
> URL: https://issues.apache.org/jira/browse/HBASE-20968
> Project: HBase
>  Issue Type: Test
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Attachments: 20968.v1.txt, 
> org.apache.hadoop.hbase.client.TestShell-output.txt
>
>
> From test output against hadoop3:
> {code}
> 2018-07-28 12:04:24,838 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(948): Stored pid=12, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.  
> ShellTestProcedure
> 2018-07-28 12:04:24,864 INFO  [RS-EventLoopGroup-1-3] 
> ipc.ServerRpcConnection(556): Connection from 172.18.128.12:46918, 
> version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth: SIMPLE), 
> service=MasterService
> 2018-07-28 12:04:24,900 DEBUG [Thread-114] master.MasterRpcServices(1157): 
> Checking to see if procedure is done pid=11
> F
> ===
> Failure: 
> test_list_procedures(Hbase::ListProceduresTest)
> src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
> test_list_procedures'
>  62: end
>  63:   end
>  64:
>   => 65:   assert_equal(1, matching_lines)
>  66: end
>  67:   end
>  68: end
> <1> expected but was
> <0>
> ===
> ...
> 2018-07-28 12:04:25,374 INFO  [PEWorker-9] 
> procedure2.ProcedureExecutor(1316): Finished pid=12, state=SUCCESS, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.   
> ShellTestProcedure in 336msec
> {code}
> The completion of the ShellTestProcedure was after the assertion was raised.
> {code}
> def create_procedure_regexp(table_name)
>   regexp_string = '[0-9]+ .*ShellTestProcedure SUCCESS.*' \
> {code}
> The regex used by the test isn't found in test output either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20968) list_procedures_test fails due to no matching regex

2018-08-07 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20968:
---
Status: Patch Available  (was: Open)

> list_procedures_test fails due to no matching regex
> ---
>
> Key: HBASE-20968
> URL: https://issues.apache.org/jira/browse/HBASE-20968
> Project: HBase
>  Issue Type: Test
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Attachments: 20968.v1.txt, 
> org.apache.hadoop.hbase.client.TestShell-output.txt
>
>
> From test output against hadoop3:
> {code}
> 2018-07-28 12:04:24,838 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(948): Stored pid=12, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.  
> ShellTestProcedure
> 2018-07-28 12:04:24,864 INFO  [RS-EventLoopGroup-1-3] 
> ipc.ServerRpcConnection(556): Connection from 172.18.128.12:46918, 
> version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth: SIMPLE), 
> service=MasterService
> 2018-07-28 12:04:24,900 DEBUG [Thread-114] master.MasterRpcServices(1157): 
> Checking to see if procedure is done pid=11
> F
> ===
> Failure: 
> test_list_procedures(Hbase::ListProceduresTest)
> src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
> test_list_procedures'
>  62: end
>  63:   end
>  64:
>   => 65:   assert_equal(1, matching_lines)
>  66: end
>  67:   end
>  68: end
> <1> expected but was
> <0>
> ===
> ...
> 2018-07-28 12:04:25,374 INFO  [PEWorker-9] 
> procedure2.ProcedureExecutor(1316): Finished pid=12, state=SUCCESS, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.   
> ShellTestProcedure in 336msec
> {code}
> The completion of the ShellTestProcedure was after the assertion was raised.
> {code}
> def create_procedure_regexp(table_name)
>   regexp_string = '[0-9]+ .*ShellTestProcedure SUCCESS.*' \
> {code}
> The regex used by the test isn't found in test output either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20968) list_procedures_test fails due to no matching regex

2018-08-07 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-20968:
--

Assignee: Ted Yu

> list_procedures_test fails due to no matching regex
> ---
>
> Key: HBASE-20968
> URL: https://issues.apache.org/jira/browse/HBASE-20968
> Project: HBase
>  Issue Type: Test
>    Reporter: Ted Yu
>    Assignee: Ted Yu
>Priority: Major
> Attachments: org.apache.hadoop.hbase.client.TestShell-output.txt
>
>
> From test output against hadoop3:
> {code}
> 2018-07-28 12:04:24,838 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(948): Stored pid=12, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.  
> ShellTestProcedure
> 2018-07-28 12:04:24,864 INFO  [RS-EventLoopGroup-1-3] 
> ipc.ServerRpcConnection(556): Connection from 172.18.128.12:46918, 
> version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth: SIMPLE), 
> service=MasterService
> 2018-07-28 12:04:24,900 DEBUG [Thread-114] master.MasterRpcServices(1157): 
> Checking to see if procedure is done pid=11
> F
> ===
> Failure: 
> test_list_procedures(Hbase::ListProceduresTest)
> src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
> test_list_procedures'
>  62: end
>  63:   end
>  64:
>   => 65:   assert_equal(1, matching_lines)
>  66: end
>  67:   end
>  68: end
> <1> expected but was
> <0>
> ===
> ...
> 2018-07-28 12:04:25,374 INFO  [PEWorker-9] 
> procedure2.ProcedureExecutor(1316): Finished pid=12, state=SUCCESS, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.   
> ShellTestProcedure in 336msec
> {code}
> The completion of the ShellTestProcedure was after the assertion was raised.
> {code}
> def create_procedure_regexp(table_name)
>   regexp_string = '[0-9]+ .*ShellTestProcedure SUCCESS.*' \
> {code}
> The regex used by the test isn't found in test output either.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20988) TestShell shouldn't be skipped for hbase-shell module test

2018-08-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572463#comment-16572463
 ] 

Ted Yu commented on HBASE-20988:


If TestShell is skipped, QA bot shouldn't state:

bq. hbase-shell in the patch passed

> TestShell shouldn't be skipped for hbase-shell module test
> --
>
> Key: HBASE-20988
> URL: https://issues.apache.org/jira/browse/HBASE-20988
> Project: HBase
>  Issue Type: Test
>    Reporter: Ted Yu
>Priority: Major
>
> Here is snippet for QA run 13862 for HBASE-20985 :
> {code}
> 13:42:50 cd /testptch/hbase/hbase-shell
> 13:42:50 /usr/share/maven/bin/mvn 
> -Dmaven.repo.local=/home/jenkins/yetus-m2/hbase-master-patch-1 
> -DHBasePatchProcess -PrunAllTests 
> -Dtest.exclude.pattern=**/master.normalizer.
> TestSimpleRegionNormalizerOnCluster.java,**/replication.regionserver.TestSerialReplicationEndpoint.java,**/master.procedure.TestServerCrashProcedure.java,**/master.procedure.TestCreateTableProcedure.
> 
> java,**/TestClientOperationTimeout.java,**/client.TestSnapshotFromClientWithRegionReplicas.java,**/master.TestAssignmentManagerMetrics.java,**/client.TestShell.java,**/client.
> 
> TestCloneSnapshotFromClientWithRegionReplicas.java,**/master.TestDLSFSHLog.java,**/replication.TestReplicationSmallTestsSync.java,**/master.procedure.TestModifyTableProcedure.java,**/regionserver.
>
> TestCompactionInDeadRegionServer.java,**/client.TestFromClientSide3.java,**/master.procedure.TestRestoreSnapshotProcedure.java,**/client.TestRestoreSnapshotFromClient.java,**/security.access.
> 
> TestCoprocessorWhitelistMasterObserver.java,**/replication.regionserver.TestDrainReplicationQueuesForStandBy.java,**/master.procedure.TestProcedurePriority.java,**/master.locking.TestLockProcedure.
>   
> java,**/master.cleaner.TestSnapshotFromMaster.java,**/master.assignment.TestSplitTableRegionProcedure.java,**/client.TestMobRestoreSnapshotFromClient.java,**/replication.TestReplicationKillSlaveRS.
>   
> java,**/regionserver.TestHRegion.java,**/security.access.TestAccessController.java,**/master.procedure.TestTruncateTableProcedure.java,**/client.TestAsyncReplicationAdminApiWithClusters.java,**/
>  
> coprocessor.TestMetaTableMetrics.java,**/client.TestMobSnapshotCloneIndependence.java,**/namespace.TestNamespaceAuditor.java,**/master.TestMasterAbortAndRSGotKilled.java,**/client.TestAsyncTable.java,**/master.TestMasterOperationsForRegionReplicas.java,**/util.TestFromClientSide3WoUnsafe.java,**/client.TestSnapshotCloneIndependence.java,**/client.TestAsyncDecommissionAdminApi.java,**/client.
> 
> TestRestoreSnapshotFromClientWithRegionReplicas.java,**/master.assignment.TestMasterAbortWhileMergingTable.java,**/client.TestFromClientSide.java,**/client.TestAdmin1.java,**/client.
>  
> TestFromClientSideWithCoprocessor.java,**/replication.TestReplicationKillSlaveRSWithSeparateOldWALs.java,**/master.procedure.TestMasterFailoverWithProcedures.java,**/regionserver.
> TestSplitTransactionOnCluster.java clean test -fae > 
> /testptch/patchprocess/patch-unit-hbase-shell.txt 2>&1
> {code}
> In this case, there was a modification to a shell script, leading to the 
> shell tests being run.
> However, TestShell was excluded in the QA run, defeating the purpose.
> Meanwhile QA posted the following to HBASE-20985:
> bq. +1  unit  7m 4s  hbase-shell in the patch passed.
> That is misleading - no related test was actually run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-6105) Properly handle InterruptedException in HadoopInputFormatBase

2018-08-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16307281#comment-16307281
 ] 

Ted Yu edited comment on FLINK-6105 at 8/7/18 10:33 PM:


In 
flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java
 :
{code}
  try {
Thread.sleep(500);
  } catch (InterruptedException e1) {
// ignore it
  }
{code}
Interrupt status should be restored, or throw InterruptedIOException.


was (Author: yuzhih...@gmail.com):
In 
flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java
 :

{code}
  try {
Thread.sleep(500);
  } catch (InterruptedException e1) {
// ignore it
  }
{code}
Interrupt status should be restored, or throw InterruptedIOException.

> Properly handle InterruptedException in HadoopInputFormatBase
> -
>
> Key: FLINK-6105
> URL: https://issues.apache.org/jira/browse/FLINK-6105
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>    Reporter: Ted Yu
>Assignee: zhangminglei
>Priority: Major
>
> When catching InterruptedException, we should throw InterruptedIOException 
> instead of IOException.
> The following example is from HadoopInputFormatBase :
> {code}
> try {
>   splits = this.mapreduceInputFormat.getSplits(jobContext);
> } catch (InterruptedException e) {
>   throw new IOException("Could not get Splits.", e);
> }
> {code}
> There may be other places where IOE is thrown.
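
A sketch of the two acceptable alternatives for the sleep/backoff case:
{code}
try {
  Thread.sleep(500);
} catch (InterruptedException e1) {
  Thread.currentThread().interrupt(); // restore the interrupt status
  // or, on I/O-flavored code paths:
  // throw new java.io.InterruptedIOException("interrupted during backoff");
}
{code}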



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CALCITE-1903) Support charset in LIKE expression

2018-08-07 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/CALCITE-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated CALCITE-1903:

Description: 
>From 
>http://search-hadoop.com/m/Flink/VkLeQbjp59T6PkF?subj=+How+can+I+set+charset+for+flink+sql+

The following query

{code}
SELECT
'HIGH' AS LEVEL,
'Firewall uplink bandwidth exception:greater than 1' AS content,
`system.process.username`,
`system.process.memory.rss.bytes`
FROM
test
WHERE
`system.process.username` LIKE '%高危%'
{code}
produces the following:
{code}
Caused by: org.apache.calcite.runtime.CalciteException: Failed to encode '%高危%' 
in character set 'ISO-8859-1'
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[na:1.8.0_45]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown 
Source) ~[na:1.8.0_45]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
Source) ~[na:1.8.0_45]
at java.lang.reflect.Constructor.newInstance(Unknown Source) 
~[na:1.8.0_45]
at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:463) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:572) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at org.apache.calcite.util.NlsString.<init>(NlsString.java:81) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:864) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.rex.RexBuilder.makeCharLiteral(RexBuilder.java:1051) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.SqlNodeToRexConverterImpl.convertLiteral(SqlNodeToRexConverterImpl.java:117)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.visit(SqlToRelConverter.java:4408)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.visit(SqlToRelConverter.java:3787)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at org.apache.calcite.sql.SqlLiteral.accept(SqlLiteral.java:427) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.convertExpression(SqlToRelConverter.java:4321)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.StandardConvertletTable.convertExpressionList(StandardConvertletTable.java:968)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.StandardConvertletTable.convertCall(StandardConvertletTable.java:944)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.StandardConvertletTable.convertCall(StandardConvertletTable.java:928)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
... 50 common frames omitted
{code}

  was:
>From 
>http://search-hadoop.com/m/Flink/VkLeQbjp59T6PkF?subj=+How+can+I+set+charset+for+flink+sql+

The following query
{code}
SELECT
'HIGH' AS LEVEL,
'Firewall uplink bandwidth exception:greater than 1' AS content,
`system.process.username`,
`system.process.memory.rss.bytes`
FROM
test
WHERE
`system.process.username` LIKE '%高危%'
{code}
produces the following:
{code}
Caused by: org.apache.calcite.runtime.CalciteException: Failed to encode '%高危%' 
in character set 'ISO-8859-1'
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[na:1.8.0_45]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown 
Source) ~[na:1.8.0_45]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
Source) ~[na:1.8.0_45]
at java.lang.reflect.Constructor.newInstance(Unknown Source) 
~[na:1.8.0_45]
at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:463) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:572) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at org.apache.calcite.util.NlsString.<init>(NlsString.java:81) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:864) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.rex.RexBuilder.makeCharLiteral(RexBuilder.java:1051) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.SqlNodeToRexConverterImpl.convertLiteral(SqlNodeToRexConverterImpl.java:117)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.visit(SqlToRelConverter.java:4408)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.visit(SqlToRelConverter.java:3787)
 ~[flink-table_2.11-1.3.1.jar:1.3.1]
at org.apache.calcite.sql.SqlLiteral.accept(SqlLiteral.java:427) 
~[flink-table_2.11-1.3.1.jar:1.3.1]
at 
org.apache.calcite.sql2rel.SqlTo

[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572092#comment-16572092
 ] 

Ted Yu commented on HBASE-19008:


[~reidchan] [~Jan Hentschel]:
Can you take a look?

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>    Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-2.patch, 
> HBASE-19008-3.patch, HBASE-19008-4.patch, HBASE-19008-5.patch, 
> HBASE-19008-6.patch, HBASE-19008-7.patch, HBASE-19008-8.patch, 
> HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.
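
A minimal sketch of such a pair for KeyOnlyFilter, assuming its only state is the lenAsVal flag (the actual patch may compare more fields):

{code}
// Sketch only; requires java.util.Objects. Assumes lenAsVal is the sole
// state carried by KeyOnlyFilter.
@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (!(obj instanceof KeyOnlyFilter)) {
    return false;
  }
  KeyOnlyFilter other = (KeyOnlyFilter) obj;
  return this.lenAsVal == other.lenAsVal;
}

@Override
public int hashCode() {
  // Must stay consistent with equals: hash exactly the compared state.
  return Objects.hash(lenAsVal);
}
{code}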



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21018) RS crashed because AsyncFS was unable to update HDFS data encryption key

2018-08-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572082#comment-16572082
 ] 

Ted Yu commented on HBASE-21018:


+1

> RS crashed because AsyncFS was unable to update HDFS data encryption key
> 
>
> Key: HBASE-21018
> URL: https://issues.apache.org/jira/browse/HBASE-21018
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0, 
> HDFS configuration dfs.encrypt.data.transfer = true
>Reporter: Wei-Chiu Chuang
>Priority: Critical
> Attachments: HBASE-21018.master.001.patch
>
>
> We (+[~uagashe]) found HBase RegionServer doesn't update HDFS data encryption 
> key correctly, and in some cases it aborts after retrying 10 times.
> {noformat}
> 2018-08-03 17:37:03,233 WARN 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper: create 
> fan-out dfs output 
> /hbase/WALs/rs1.example.com,22101,1533318719239/rs1.example.com%2C22101%2C1533318719239.rs1.example.com%2C22101%2C1533318719239.regiongroup-0.1533343022981
>  failed, retry = 1
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=1685436998) doesn't exist. Current key: 1085959374
> at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper$SaslNegotiateHandler.check(FanOutOneBlockAsyncDFSOutputSaslHelper.java:399)
> at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper$SaslNegotiateHandler.channelRead(FanOutOneBlockAsyncDFSOutputSaslHelper.java:470)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipel
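
For context, HDFS's blocking output stream recovers from this by dropping the cached key and retrying. A sketch of the analogous handling on the AsyncFS side, where DFSClient#clearDataEncryptionKey() is the real Hadoop call and createOutput(...) is an illustrative stand-in for the fan-out create:

{code}
// Sketch only: drop the stale data encryption key and retry the create.
try {
  return createOutput(dfsClient, path);  // illustrative fan-out create
} catch (InvalidEncryptionKeyException e) {
  LOG.info("Block key is stale; fetching a new encryption key and retrying", e);
  // Real Hadoop API: forces the client to fetch a fresh key on next use.
  dfsClient.clearDataEncryptionKey();
  return createOutput(dfsClient, path);
}
{code}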

[jira] [Updated] (HBASE-21018) RS crashed because AsyncFS was unable to update HDFS data encryption key

2018-08-07 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21018:
---
Status: Patch Available  (was: Open)

> RS crashed because AsyncFS was unable to update HDFS data encryption key
> 
>
> Key: HBASE-21018
> URL: https://issues.apache.org/jira/browse/HBASE-21018
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0, 
> HDFS configuration dfs.encrypt.data.transfer = true
>Reporter: Wei-Chiu Chuang
>Priority: Critical
> Attachments: HBASE-21018.master.001.patch
>
>
> We (+[~uagashe]) found HBase RegionServer doesn't update HDFS data encryption 
> key correctly, and in some cases it aborts after retrying 10 times.
> {noformat}
> 2018-08-03 17:37:03,233 WARN 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper: create 
> fan-out dfs output 
> /hbase/WALs/rs1.example.com,22101,1533318719239/rs1.example.com%2C22101%2C1533318719239.rs1.example.com%2C22101%2C1533318719239.regiongroup-0.1533343022981
>  failed, retry = 1
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=1685436998) doesn't exist. Current key: 1085959374
> at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper$SaslNegotiateHandler.check(FanOutOneBlockAsyncDFSOutputSaslHelper.java:399)
> at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper$SaslNegotiateHandler.channelRead(FanOutOneBlockAsyncDFSOutputSaslHelper.java:470)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
> at 
> org.apache.hbase.thir

Re: [VOTE] KIP-346 - Improve LogCleaner behavior on error

2018-08-07 Thread Ted Yu
+1

On Tue, Aug 7, 2018 at 5:25 AM Thomas Becker  wrote:

> +1 (non-binding)
>
> We've hit issues with the log cleaner in the past, and this would be a
> great improvement.
> On Tue, 2018-08-07 at 12:19 +0100, Stanislav Kozlovski wrote:
>
> Hey everybody,
>
> I'm starting a vote on KIP-346
>
> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-346+-+Improve+LogCleaner+behavior+on+error>
>
>
>
>


[jira] [Commented] (HBASE-21021) Result returned by Append operation should be ordered

2018-08-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16571906#comment-16571906
 ] 

Ted Yu commented on HBASE-21021:


The test on the master branch fails too:
{code}
testAppendWithMultipleFamilies(org.apache.hadoop.hbase.regionserver.TestAtomicOperation)
  Time elapsed: 2.859 sec  <<< FAILURE!
java.lang.AssertionError: expected null, but was:
at 
org.apache.hadoop.hbase.regionserver.TestAtomicOperation.testAppendWithMultipleFamilies(TestAtomicOperation.java:166)
{code}
Can you attach a patch for the master branch?

> Result returned by Append operation should be ordered
> -
>
> Key: HBASE-21021
> URL: https://issues.apache.org/jira/browse/HBASE-21021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-21021.branch-1.001.patch
>
>
> *Problem:*
> The result returned by the append operation should be ordered. Currently, it 
> returns an unordered list, which can cause subtle failures: if the user calls 
> Result.getValue(byte[] family, byte[] qualifier), the method may return null 
> even when the result does contain a value for (family, qualifier), because 
> getValue performs a binary search over the result, which is assumed to be 
> sorted but is not.
>  
> The result is enumerated by iterating over each entry of the tempMemstore 
> hashmap (whose iteration order is unspecified) and adding the values (see 
> [HRegion.java#L7882|https://github.com/apache/hbase/blob/1b50fe53724aa62a242b7f64adf7845048df/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7882]).
>  
> *Actual:* The returned result is unordered
> *Expected:* Similar to increment op, the returned result should be ordered.
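
A sketch of the kind of fix this implies: collect the cells from the per-family map, sort them, and only then build the Result (names follow the 2.x API; tempMemstore is assumed to be the Map<Store, List<Cell>> used by the append path, and branch-1 would use KeyValue.COMPARATOR in place of CellComparator):

{code}
// Sketch only: sort gathered cells so Result#getValue's binary search
// over the cell array is valid.
List<Cell> allCells = new ArrayList<>();
for (List<Cell> familyCells : tempMemstore.values()) {
  allCells.addAll(familyCells);
}
allCells.sort(CellComparator.getInstance());
return Result.create(allCells);
{code}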



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

