Re: [ANNOUNCE] New HBase committer Peter Somogyi
Congratulations, Peter.

On Fri, Feb 23, 2018 at 7:46 AM, Jerry He wrote:
> Congrats, Peter!
> [earlier quoted messages trimmed]

--
== Openinx blog : http://openinx.github.io TO BE A GREAT HACKER ! ==
Re: [ANNOUNCE] New HBase committer Peter Somogyi
Congrats, Peter!

On Thu, Feb 22, 2018 at 2:53 PM, Andrew Purtell wrote:
> Congratulations and welcome, Peter!
> [earlier quoted messages trimmed]
Re: [ANNOUNCE] New HBase committer Peter Somogyi
Congratulations and welcome, Peter!

On Thu, Feb 22, 2018 at 11:08 AM, Sean Busbey wrote:
> [original announcement, quoted in full below]

--
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's decrepit hands
- A23, Crosstalk
Re: HBaseTestingUtility with visibility labels enabled
The hbase:labels table is created by VisibilityController#postStartMaster(). You can add the following call in the @BeforeClass method:

    TEST_UTIL.waitTableEnabled(LABELS_TABLE_NAME.getName(), 5);

See TestVisibilityLabelsWithACL for a complete example.

On Thu, Feb 22, 2018 at 12:07 PM, Mike Thomsen wrote:
> [quoted code trimmed; see the original message below]
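Putting the suggestion above together with the code from the question, a minimal sketch might look like the following. This is only a sketch: it assumes the hbase-server test jar and JUnit 4 are on the classpath, and the 50000 ms timeout is an illustrative value, not something specified in the thread.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.security.visibility.VisibilityClient;
import org.apache.hadoop.hbase.security.visibility.VisibilityConstants;
import org.junit.BeforeClass;

public class VisibilityLabelsIT {
    private static HBaseTestingUtility utility;

    @BeforeClass
    public static void setUp() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Load the VisibilityController on both the master and the region servers.
        conf.set("hbase.coprocessor.master.classes",
                "org.apache.hadoop.hbase.security.visibility.VisibilityController");
        conf.set("hbase.coprocessor.region.classes",
                "org.apache.hadoop.hbase.security.visibility.VisibilityController");
        utility = new HBaseTestingUtility(conf);
        utility.startMiniCluster();
        // hbase:labels is created asynchronously by VisibilityController#postStartMaster(),
        // so wait for it before using VisibilityClient; otherwise the client can hit
        // TableNotFoundException: hbase:labels, as in the question.
        utility.waitTableEnabled(VisibilityConstants.LABELS_TABLE_NAME.getName(), 50000);
        VisibilityClient.addLabels(utility.getConnection(), new String[] { "X", "Y", "Z" });
    }
}
```

The key difference from the original main() is the waitTableEnabled() call between startMiniCluster() and addLabels().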
HBaseTestingUtility with visibility labels enabled
I'm trying to spin up a mini cluster for integration testing. Can someone give me an idea of what I'm doing wrong?

public static void main(String[] args) throws Throwable {
    Configuration conf = org.apache.hadoop.hbase.HBaseConfiguration.create();
    conf.set("hbase.coprocessor.region.classes",
        "org.apache.hadoop.hbase.security.visibility.VisibilityController");
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.security.visibility.VisibilityController");

    utility = new HBaseTestingUtility(conf);
    utility.startMiniCluster();

    VisibilityClient.addLabels(utility.getConnection(), new String[]{ "X", "Y", "Z" });
}

That results in this:

org.apache.hadoop.hbase.TableNotFoundException: hbase:labels

Thanks,

Mike
Re: [ANNOUNCE] New HBase committer Peter Somogyi
Congrats Peter!

On Thu, Feb 22, 2018 at 2:08 PM, Sean Busbey wrote:
> [original announcement, quoted in full below]
[ANNOUNCE] New HBase committer Peter Somogyi
On behalf of the Apache HBase PMC, I am pleased to announce that Peter Somogyi has accepted the PMC's invitation to become a committer on the project. We appreciate all of Peter's great work thus far and look forward to continued involvement. Please join me in congratulating Peter!
Re: How to recover a table
It seems there were 3 files on s3 (they're all on the same line). If possible, can you pastebin the parts of the master log related to the table? That may give us more clues.

On Thu, Feb 22, 2018 at 10:01 AM, Vikas Kanth <kanth_vi...@yahoo.co.in.invalid> wrote:
> [quoted text trimmed; see the original message below]
Re: Region not initializing in 2.0.0-beta-1
This sounds like something I've seen in the past but was unable to get past. I think I was seeing it when the hbase-shaded-client was on the classpath. Could you see if the presence of that artifact makes a difference one way or another?

On 2/22/18 12:52 PM, sahil aggarwal wrote:
> Yes, it is a clean setup.
> [region server startup log trimmed; quoted in full in the original message below]
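One way to check whether hbase-shaded-client (or any artifact) is the one being loaded is to ask the class loader where a class resource resolves from: the URL that comes back names the containing jar. This is a generic plain-JDK sketch, not an HBase API; it compiles and runs without any HBase dependency, and locate() simply returns null when the class is absent.

```java
public class ClasspathProbe {
    /** Returns the URL a resource resolves to, or null if it is not on the classpath. */
    static java.net.URL locate(String resource) {
        return ClasspathProbe.class.getClassLoader().getResource(resource);
    }

    public static void main(String[] args) {
        // Sanity check: a JDK class always resolves.
        System.out.println(locate("java/lang/Object.class"));
        // Resolves to a jar URL if an HBase client artifact is present, null otherwise.
        System.out.println(locate("org/apache/hadoop/hbase/client/Connection.class"));
    }
}
```

If the second URL points into an hbase-shaded-client-*.jar rather than hbase-client-*.jar, the shaded artifact is the one winning on the classpath.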
Re: How to recover a table
Hi Ted,

Thanks for replying. I can see the descriptors under the table:

s3://mybucket/hbasedir/data/db1/mytable/.tabledesc/.tableinfo.01s3://mybucket/hbasedir/data/db1/mytable/.tabledescs3://mybucket/hbasedir/data/db1/mytable/.tmp

This is what I see in the HBM logs:

2018-02-21 09:02:29,918 WARN [x,16000,1519199162102_ChoreService_3] master.CatalogJanitor: CatalogJanitor disabled! Not running scan.

$ hbase version
HBase 1.3.1

Thanks
Re: Region not initializing in 2.0.0-beta-1
Yes, it is a clean setup. Here are the logs from region server startup:

2018-02-22 22:17:22,259 DEBUG [main] zookeeper.ClientCnxn: zookeeper.disableAutoWatchReset is false
2018-02-22 22:17:22,401 INFO [main-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Opening socket connection to server perf-zk-1/10.33.225.67:2181. Will not attempt to authenticate using SASL (unknown error)
2018-02-22 22:17:22,407 INFO [main-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Socket connection established to perf-zk-1/10.33.225.67:2181, initiating session
2018-02-22 22:17:22,409 DEBUG [main-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Session establishment request sent on perf-zk-1/10.33.225.67:2181
2018-02-22 22:17:22,415 INFO [main-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Session establishment complete on server perf-zk-1/10.33.225.67:2181, sessionid = 0x36146d5de4467de, negotiated timeout = 2
2018-02-22 22:17:22,423 DEBUG [main-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x36146d5de4467de, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,111751355931,0 request:: '/hbase-unsecure2/master,T response:: s{111750564873,111750564873,1519309851875,1519309851875,0,0,0,171496145189271239,74,0,111750564873}
2018-02-22 22:17:22,426 DEBUG [main-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x36146d5de4467de, packet:: clientPath:null serverPath:null finished:false header:: 2,4 replyHeader:: 2,111751355931,0 request:: '/hbase-unsecure2/master,T response:: #000146d61737465723a36303030304c11fff11646ffe12effd450425546a25a18706572662d636f736d6f732d686e6e2d612d33363433363010ffe0ffd4318ff9fff88ffb2ffefff9b2c10018ffeaffd43,s{111750564873,111750564873,1519309851875,1519309851875,0,0,0,171496145189271239,74,0,111750564873}
2018-02-22 22:17:22,428 DEBUG [main-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x36146d5de4467de, packet:: clientPath:null serverPath:null finished:false header:: 3,3 replyHeader:: 3,111751355931,0 request:: '/hbase-unsecure2/running,T response:: s{111750565002,111750565002,1519309853317,1519309853317,0,0,0,0,59,0,111750565002}
2018-02-22 22:17:22,430 DEBUG [main-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x36146d5de4467de, packet:: clientPath:null serverPath:null finished:false header:: 4,4 replyHeader:: 4,111751355931,0 request:: '/hbase-unsecure2/running,T response:: #000146d61737465723a363030303021ffea7f3eff8a28576450425546a1c546875204665622032322032303a30303a3533204953542032303138,s{111750565002,111750565002,1519309853317,1519309853317,0,0,0,0,59,0,111750565002}
2018-02-22 22:17:22,459 DEBUG [main] ipc.RpcExecutor: Started 0 default.FPBQ.Fifo handlers, qsize=10 on port=16020
2018-02-22 22:17:22,475 DEBUG [main] ipc.RpcExecutor: Started 0 priority.FPBQ.Fifo handlers, qsize=2 on port=16020
2018-02-22 22:17:22,478 DEBUG [main] ipc.RpcExecutor: Started 0 replication.FPBQ.Fifo handlers, qsize=1 on port=16020
2018-02-22 22:17:22,524 INFO [main] util.log: Logging initialized @3325ms
2018-02-22 22:17:22,625 INFO [main] http.HttpRequestLog: Http request log for http.requests.regionserver is not defined
2018-02-22 22:17:22,651 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2018-02-22 22:17:22,651 INFO [main] http.HttpServer: Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2018-02-22 22:17:22,654 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2018-02-22 22:17:22,654 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-02-22 22:17:22,654 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-02-22 22:17:22,691 INFO [main] http.HttpServer: Jetty bound to port 60030
2018-02-22 22:17:22,693 INFO [main] server.Server: jetty-9.3.19.v20170502
2018-02-22 22:17:22,765 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@7435a578{/logs,file:///var/log/hbase/,AVAILABLE}
2018-02-22 22:17:22,765 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@13047d7d{/static,file:///usr/lib/hbase/hbase-webapps/static/,AVAILABLE}
2018-02-22 22:17:22,912 INFO [main] handler.ContextHandler: Started o.e.j.w.WebAppContext@7428de63{/,file:///usr/lib/hbase/hbase-webapps/regionserver/,AVAILABLE}{file:/usr/lib/hbase/hbase-webapps/regionserver}
2018-02-22 22:17:22,917 INFO [main] server.AbstractConnector: Started ServerConnector@35636217{HTTP/1.1,[http/1.1]}{0.0.0.0:60030}
2018-02-22
Re: Region not initializing in 2.0.0-beta-1
Can you show more of the region server log? Was the cluster started clean (without any data)?

There have been a lot of changes since 2.0.0-beta-1 was released (both in terms of correctness and performance). If possible, please deploy a 2.0 SNAPSHOT for further testing.

Cheers

On Thu, Feb 22, 2018 at 9:39 AM, sahil aggarwal wrote:
> [jstack output and ZooKeeper log trimmed; quoted in full in the original message below]
Region not initializing in 2.0.0-beta-1
Hi,

I am trying to get a 2.0.0-beta-1 cluster up to do some perf tests but am not able to get the region servers up. They are stuck in the initializing state. It looks like the server is stuck getting the hbaseid from ZK. jstack says:

"regionserver/perf-rs-1/10.32.73.176:16020" #24 prio=5 os_prio=0 tid=0x7f19e45cc000 nid=0x1b13 waiting on condition [0x7f19de6a7000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x0001846d8b98> (a java.util.concurrent.CompletableFuture$WaitNode)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.CompletableFuture$WaitNode.block(CompletableFuture.java:271)
        at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3226)
        at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:319)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2224)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:526)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:286)
        at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.<init>(ConnectionUtils.java:141)
        at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.<init>(ConnectionUtils.java:132)
        at org.apache.hadoop.hbase.client.ConnectionUtils.createShortCircuitConnection(ConnectionUtils.java:185)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.createClusterConnection(HRegionServer.java:770)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.setupClusterConnection(HRegionServer.java:801)
        - locked <0x00019ccd1bc8> (a org.apache.hadoop.hbase.regionserver.HRegionServer)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:816)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:925)
        at java.lang.Thread.run(Thread.java:745)

Even though it seems to have got the response from ZK:

2018-02-22 22:17:22,959 DEBUG [ReadOnlyZKClient-SendThread(perf-zk-1:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x26146d5de5c7181, packet:: clientPath:/hbase-unsecure2/hbaseid serverPath:/hbase-unsecure2/hbaseid finished:false header:: 1,4 replyHeader:: 1,111751356003,0 request:: '/hbase-unsecure2/hbaseid,F response:: #000146d61737465723a3630303030395e49ffefff98ffd0262150425546a2430653037386566362d363931362d343665332d386335652d653237666264303135326337,s{111750564985,111750564985,1519309853186,1519309853186,0,0,0,0,67,0,111750564985}

Any pointers?

Thanks,
Sahil
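As a side note, the response payloads in these ZooKeeper log lines are printed as hex byte pairs with non-printable bytes escaped, so the readable part can be recovered by decoding pairs of hex digits. A small plain-JDK sketch (no HBase dependencies) decoding the trailing UUID portion of the /hbase-unsecure2/hbaseid response above; the substring was picked out of the log line by eye:

```java
public class ZkHexDecode {
    /** Decodes a string of hex byte pairs into the corresponding ASCII characters. */
    static String decodeHex(String hex) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i + 1 < hex.length(); i += 2) {
            sb.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Trailing part of the /hbase-unsecure2/hbaseid payload from the log above:
        String payload = "30653037386566362d363931362d343665332d386335652d653237666264303135326337";
        System.out.println(decodeHex(payload)); // prints the cluster id UUID
    }
}
```

This prints 0e078ef6-6916-46e3-8c5e-e27fbd0152c7, i.e. the znode really does hold a cluster id, which supports the observation that the ZK side looks fine even though retrieveClusterId() never completes.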
Re: Hbase integration testing
While in the project of interest:

[info] common-library/*:clean = Task[Unit]
[info] +-common-library/*:clean::streams = Task[sbt.std.TaskStreams[sbt.internal.util.Init$ScopedKey[_ <: Any]]]
[info] | +-*/*:streamsManager = Task[sbt.std.Streams[sbt.internal.util.Init$ScopedKey[_ <: Any]]]
[info] |
[info] +-common-library/*:cleanFiles = Task[scala.collection.Seq[java.io.File]]
[info] | +-common-library/*:cleanKeepFiles = Vector(/home/gfeuillen/projects/my-project/common-library/target/.history)
[info] | | +-common-library/*:history = Some(/home/gfeuillen/projects/my-project/common-library/target/.history)
[info] | | +-common-library/*:target = common-library/target
[info] | | +-common-library/*:baseDirectory = common-library
[info] | | +-common-library/*:thisProject = Project(id common-library, base: /home/gfeuillen/projects/my-project/common-library, configuratio..
[info] | |
[info] | +-{.}/*:managedDirectory = lib_managed
[info] | +-common-library/*:target = common-library/target
[info] | +-common-library/*:baseDirectory = common-library
[info] | +-common-library/*:thisProject = Project(id common-library, base: /home/gfeuillen/projects/my-project/common-library, configurations: ..
[info] |
[info] +-common-library/*:ivyModule = Task[sbt.internal.librarymanagement.IvySbt#sbt.internal.librarymanagement.IvySbt$Module]
[info] +-common-library/*:ivySbt = Task[sbt.internal.librarymanagement.IvySbt]
[info] | +-*/*:credentials = Task[scala.collection.Seq[sbt.librarymanagement.ivy.Credentials]]
[info] | +-common-library/*:ivyConfiguration = Task[sbt.librarymanagement.ivy.IvyConfiguration]
[info] | | +-*/*:appConfiguration = xsbt.boot.AppConfiguration@163697a8
[info] | | +-common-library/*:crossTarget = common-library/target/scala-2.11
[info] | | | +-*/*:crossPaths = true
[info] | | | +-common-library/*:pluginCrossBuild::sbtBinaryVersion = 1.0
[info] | | | | +-*/*:pluginCrossBuild::sbtVersion = 1.0.3
[info] | | | |
[info] | | | +-*/*:sbtPlugin = false
[info] | | | +-common-library/*:scalaBinaryVersion = 2.11
[info] | | | | +-common-library/*:scalaVersion = 2.11.12
[info] | | | |
[info] | | | +-common-library/*:target = common-library/target
[info] | | | +-common-library/*:baseDirectory = common-library
[info] | | | +-common-library/*:thisProject = Project(id common-library, base: /home/gfeuillen/projects/my-project/common-library, configurat..
[info] | | |
[info] | | +-common-library/*:fullResolvers = Task[scala.collection.Seq[sbt.librarymanagement.Resolver]]
[info] | | | +-common-library/*:bootResolvers = Task[scala.Option[scala.collection.Seq[sbt.librarymanagement.Resolver]]]
[info] | | | | +-*/*:appConfiguration = xsbt.boot.AppConfiguration@163697a8
[info] | | | |
[info] | | | +-common-library/*:externalResolvers = Task[scala.collection.Seq[sbt.librarymanagement.Resolver]]
[info] | | | | +-common-library/*:appResolvers = Some(Vector(FileRepository(local, Patterns(ivyPatterns=Vector(${ivy.home}/local/[organisation]/[module]/(scala_[s..
[info] | | | | | +-*/*:appConfiguration = xsbt.boot.AppConfiguration@163697a8
[info] | | | | | +-*/*:useJCenter = false
[info] | | | | |
[info] | | | | +-common-library/*:resolvers = Vector(Mapr maven repository: http://repository.mapr.com/maven/, Mvn repository: https://repository.jboss.org/nexus/..
[info] | | | | +-*/*:useJCenter = false
[info] | | | |
[info] | | | +-common-library/*:overrideBuildResolvers = false
[info] | | | | +-*/*:appConfiguration = xsbt.boot.AppConfiguration@163697a8
[info] | | | |
[info] | | | +-common-library/*:projectResolver = Task[sbt.librarymanagement.Resolver]
[info] | | | | +-common-library/*:projectDescriptors = Task[scala.collection.immutable.Map[org.apache.ivy.core.module.id.ModuleRevisionId, org.apache.ivy.core.mod..
[info] | | | | +-*/*:buildDependencies = sbt.internal.BuildDependencies@3847fe81
[info] | | | | +-common-library/*:projectDescriptors::streams = Task[sbt.std.TaskStreams[sbt.internal.util.Init$ScopedKey[_ <: Any]]]
[info] | | | | | +-*/*:streamsManager = Task[sbt.std.Streams[sbt.internal.util.Init$ScopedKey[_ <: Any]]]
[info] | | | | |
[info] | | | | +-*/*:settingsData = Task[sbt.internal.util.Settings[sbt.Scope]]
[info] | | | | +-common-library/*:thisProjectRef = ProjectRef(file:/home/gfeuillen/projects/my-project/,common-library)
[info] | | | |
[info] | | | +-*/*:sbtPlugin = false
[info] | | | +-*/*:sbtResolver = URLRepository(typesafe-ivy-releases, Patterns(ivyPatterns=Vector(https://repo.typesafe.com/typesafe/ivy-releases/[organisation]/[..
[info] | | |
[info] | | +-common-library/*:ivyConfiguration::streams = Task[sbt.std.TaskStreams[sbt.internal.util.Init$ScopedKey[_ <: Any]]]
[info] | | | +-*/*:streamsManager =