[jira] [Created] (HBASE-13828) Add group permissions coverage to AC.

2015-06-02 Thread Srikanth Srungarapu (JIRA)
Srikanth Srungarapu created HBASE-13828:
---

 Summary: Add group permissions coverage to AC.
 Key: HBASE-13828
 URL: https://issues.apache.org/jira/browse/HBASE-13828
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu


We recently suffered a regression (HBASE-13826) due to the lack of test coverage 
for group permissions in AC. With the recent perf boost provided by 
HBASE-13658, it wouldn't be a bad idea to add checks for group-level users to 
the applicable unit tests in TestAccessController.
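
A minimal sketch (not from the issue) of what such a group-level check could look like, 
assuming the User.createUserForTesting helper available to HBase tests; the user names, 
group name, and guarded action below are illustrative only.
{noformat}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.User;

public class GroupUserCheckSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // One test user backed by a group that would hold the grant, one with no groups.
    User groupUser = User.createUserForTesting(conf, "user_in_group", new String[] { "test_group" });
    User plainUser = User.createUserForTesting(conf, "user_no_groups", new String[0]);

    // In TestAccessController this would be a guarded admin operation (e.g. a table create),
    // asserted with the existing verifyAllowed/verifyDenied style helpers.
    PrivilegedExceptionAction<Void> guardedAction = new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() {
        // placeholder for the AccessController-guarded operation
        return null;
      }
    };

    groupUser.runAs(guardedAction);  // expected to be allowed via the group-level grant
    plainUser.runAs(guardedAction);  // expected to be denied
  }
}
{noformat}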





[jira] [Created] (HBASE-13827) Delayed scanner close in KeyValueHeap and StoreScanner

2015-06-02 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-13827:
--

 Summary: Delayed scanner close in KeyValueHeap and StoreScanner
 Key: HBASE-13827
 URL: https://issues.apache.org/jira/browse/HBASE-13827
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0


This is to support the work in HBASE-12295. We have to return the blocks when 
close() happens on the HFileScanner. Right now there is no close() there at all; 
this issue will add one, and StoreFileScanner will call it from its own close().

In KeyValueHeap, when one of the child scanners runs out of cells, we remove it 
from the PriorityQueue and also close it. StoreScanner does the same kind of 
thing. But once close() is responsible for returning blocks, this kind of early 
close is no longer correct: there may still be cells created out of those cached 
blocks.

This JIRA aims to change these container scanners so that they no longer do an 
early close. When a child scanner is no longer required, they will stop using it 
but will not call close() on it. Instead, it will be added to a separate list for 
a delayed close, and that list will be closed when the container scanner's 
close() happens.
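
A minimal Java sketch of the delayed-close idea described above; ChildScanner and 
ContainerScanner are hypothetical stand-ins, not the actual KeyValueHeap/StoreScanner 
classes.
{noformat}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical stand-ins for the container scanner and its child scanners.
interface ChildScanner {
  void close();
}

class ContainerScanner {
  private final PriorityQueue<ChildScanner> heap;
  // Exhausted children are parked here instead of being closed immediately,
  // because cells returned earlier may still point into their cached blocks.
  private final List<ChildScanner> delayedClose = new ArrayList<ChildScanner>();

  ContainerScanner(Comparator<ChildScanner> comparator) {
    this.heap = new PriorityQueue<ChildScanner>(11, comparator);
  }

  // Called when a child runs out of cells: stop using it, but defer its close().
  void onChildExhausted(ChildScanner child) {
    heap.remove(child);
    delayedClose.add(child);
  }

  // Only here are the parked children finally closed, so their blocks can be returned safely.
  void close() {
    for (ChildScanner child : delayedClose) {
      child.close();
    }
    delayedClose.clear();
    for (ChildScanner child : heap) {
      child.close();
    }
    heap.clear();
  }
}
{noformat}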






[jira] [Resolved] (HBASE-13574) Broken TestHBaseFsck in master with hadoop 2.6.0

2015-06-02 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang resolved HBASE-13574.

   Resolution: Duplicate
Fix Version/s: 1.1.1
   1.2.0
   2.0.0

The fix in HBASE-13732 should resolve this issue. In a Windows environment with 
Hadoop 2.7, this test would fail 100% of the time without that patch; with the 
patch applied, the test passed.

> Broken TestHBaseFsck in master with hadoop 2.6.0
> 
>
> Key: HBASE-13574
> URL: https://issues.apache.org/jira/browse/HBASE-13574
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13574.patch
>
>
> Got the following exception and it is reproducible (I can see it in recent 
> test runs from other patches).
> {noformat}
> Running org.apache.hadoop.hbase.util.TestHBaseFsck
> Tests run: 51, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 348.628 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.util.TestHBaseFsck
> testParallelWithRetriesHbck(org.apache.hadoop.hbase.util.TestHBaseFsck)  Time 
> elapsed: 30.052 sec  <<< ERROR!
> java.util.concurrent.ExecutionException: java.io.IOException: Duplicate hbck 
> - Abort
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>   at 
> org.apache.hadoop.hbase.util.TestHBaseFsck.testParallelWithRetriesHbck(TestHBaseFsck.java:634)
> Caused by: java.io.IOException: Duplicate hbck - Abort
>   at org.apache.hadoop.hbase.util.HBaseFsck.connect(HBaseFsck.java:473)
>   at 
> org.apache.hadoop.hbase.util.hbck.HbckTestingUtil.doFsck(HbckTestingUtil.java:53)
>   at 
> org.apache.hadoop.hbase.util.hbck.HbckTestingUtil.doFsck(HbckTestingUtil.java:43)
>   at 
> org.apache.hadoop.hbase.util.hbck.HbckTestingUtil.doFsck(HbckTestingUtil.java:38)
>   at 
> org.apache.hadoop.hbase.util.TestHBaseFsck$2RunHbck.call(TestHBaseFsck.java:625)
>   at 
> org.apache.hadoop.hbase.util.TestHBaseFsck$2RunHbck.call(TestHBaseFsck.java:621)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}





Re: PhoenixIOException resolved only after compaction, is there a way to avoid it?

2015-06-02 Thread Siva
Thanks Vlad for your response.

After changing the below parameters from their default values in
hbase-site.xml, queries are working fine.


<property>
  <name>hbase.regionserver.lease.period</name>
  <value>120</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>120</value>
</property>
<property>
  <name>hbase.client.scanner.caching</name>
  <value>1000</value>
</property>


Still, there are a few queries taking a lot of time. A join between two tables
takes more than 5 minutes with a filter condition, and if I omit the filter
condition the query fails altogether.

table1 - 5.5M records -- 2 GB of compressed data
table2 - 8.5M records -- 2 GB of compressed data.

We have 2 GB of heap space on the 4 region servers and 2 GB of heap space on
the master. No other activity was going on in the cluster while I was running
the queries.

Do you recommend any parameters for tuning memory and GC for Phoenix
and HBase?

Thanks,
Siva.

On Mon, Jun 1, 2015 at 1:14 PM, Vladimir Rodionov 
wrote:

> >> Is the IO exception because Phoenix is not able to read from multiple
> >> regions, given that the error was resolved after the compaction? Or any
> >> other thoughts?
>
> Compaction does not decrease # of regions - it sorts/merges data into a
> single file (in case of a major compaction) for every
> region/column_family. SocketTimeout exception is probably because Phoenix
> must read data from multiple files in
> every region before compaction - this requires more CPU, more RAM and
> produces more temp garbage.
> Excessive GC activity, in turn, results in socket timeouts. Check GC logs
> in RS and check RS logs for other errors -
> they will probably give you a clue on what is going on during a query
> execution.
>
> -Vlad
>
>
>
> On Mon, Jun 1, 2015 at 11:10 AM, Siva  wrote:
>
>> Hi Everyone,
>>
>> We load the data to Hbase tables through BulkImports.
>>
>> If the data set is small, we can query the imported data from phoenix with
>> no issues.
>>
>> If data size is huge (with respect to our cluster, we have very small
>> cluster), I m encountering the following error
>> (org.apache.phoenix.exception.PhoenixIOException).
>>
>> 0: jdbc:phoenix:172.31.45.176:2181:/hbase> select count(*)
>> . . . . . . . . . . . . . . . . . . . . .>from  "ldll_compression"
>>  ldll join "ds_compression"  ds on (ds."statusid" = ldll."statusid")
>> . . . . . . . . . . . . . . . . . . . . .>where ldll."logdate"  >=
>> '2015-02-04'
>> . . . . . . . . . . . . . . . . . . . . .>and  ldll."logdate"  <=
>> '2015-02-06'
>> . . . . . . . . . . . . . . . . . . . . .>and ldll."dbname" =
>> 'lmguaranteedrate';
>> +------------+
>> |  COUNT(1)  |
>> +------------+
>> java.lang.RuntimeException:
>> org.apache.phoenix.exception.PhoenixIOException:
>> org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36,
>> exceptions:
>> Mon Jun 01 13:50:57 EDT 2015, null, java.net.SocketTimeoutException:
>> callTimeout=6, callDuration=62358: row '' on table 'ldll_compression'
>> at
>> region=ldll_compression,,1432851434288.1a8b511def7d0c9e69a5491c6330d715.,
>> hostname=ip-172-31-32-181.us-west-2.compute.internal,60020,1432768597149,
>> seqNum=16566
>>
>> at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2440)
>> at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
>> at sqlline.SqlLine.print(SqlLine.java:1735)
>> at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
>> at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
>> at sqlline.SqlLine.dispatch(SqlLine.java:821)
>> at sqlline.SqlLine.begin(SqlLine.java:699)
>> at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
>> at sqlline.SqlLine.main(SqlLine.java:424)
>>
>> I did a major compaction for "ldll_compression" through the HBase
>> shell (major_compact 'ldll_compression'). The same query ran successfully
>> after the compaction.
>>
>> 0: jdbc:phoenix:172.31.45.176:2181:/hbase> select count(*)
>> . . . . . . . . . . . . . . . . . . . . .>from  "ldll_compression"
>>  ldll join "ds_compression"  ds on (ds."statusid" = ldll."statusid")
>> . . . . . . . . . . . . . . . . . . . . .>where ldll."logdate"  >=
>> '2015-02-04'
>> . . . . . . . . . . . . . . . . . . . . .>and  ldll."logdate"  <=
>> '2015-02-06'
>> . . . . . . . . . . . . . . . . . . . . .>and ldll."dbname" =
>> 'lmguaranteedrate'
>> . . . . . . . . . . . . . . . . . . . . .> ;
>> +------------+
>> |  COUNT(1)  |
>> +------------+
>> | 13480      |
>> +------------+
>> 1 row selected (72.36 seconds)
>>
>> Did anyone face a similar issue? Is the IO exception because Phoenix is not
>> able to read from multiple regions, given that the error was resolved after
>> the compaction? Or any other thoughts?
>>
>> Thanks,
>> Siva.
>>
>
>


[jira] [Created] (HBASE-13826) Unable to create table when group acls are appropriately set.

2015-06-02 Thread Srikanth Srungarapu (JIRA)
Srikanth Srungarapu created HBASE-13826:
---

 Summary: Unable to create table when group acls are appropriately 
set.
 Key: HBASE-13826
 URL: https://issues.apache.org/jira/browse/HBASE-13826
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 1.0.2, 1.2.0, 1.1.1
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1
 Attachments: HBASE-13826.patch

Steps to reproduce the issue:
- Create user 'test' and group 'hbase-admin'.
- Grant global create permissions to 'hbase-admin'.
- Add user 'test' to the 'hbase-admin' group.
- A create table operation as the 'test' user will throw an AccessDeniedException (ADE); see the sketch below.
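
A minimal repro sketch (not an attachment to this issue) of the failing step, assuming 
the cluster is already prepared per the steps above; the table name and column family 
are illustrative only.
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class GroupGrantReproSketch {
  // Run this as the 'test' user on a cluster prepared per the steps above
  // (user 'test' in group 'hbase-admin', global create granted to the group).
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("repro_table"));
      desc.addFamily(new HColumnDescriptor("cf"));
      // Expected to succeed thanks to the group grant; with this bug it fails
      // with an AccessDeniedException instead.
      admin.createTable(desc);
    }
  }
}
{noformat}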





Re: Spinning 1.1.1 RC0 on Monday (June 8)

2015-06-02 Thread Nick Dimiduk
I've commented on the ticket; this is new functionality and so is
inappropriate for branch-1.0 and branch-1.1.

On Mon, Jun 1, 2015 at 7:50 PM, Ted Yu  wrote:

> Nick:
> Do you think HBASE-13356 has a chance to make this release?
>
> Cheers
>
> On Mon, Jun 1, 2015 at 10:19 AM, Nick Dimiduk  wrote:
>
> > Greetings devs,
> >
> > It's getting to be that time: I plan to spin the first 1.1.1 RC on
> Monday.
> > Remember, this is a patch release in accordance with the semantic version
> > guidelines, so only mutually compatible bug fixes are accepted. Let me
> know
> > if you have any doubts.
> >
> > Thanks,
> > Nick
> >
>


[jira] [Resolved] (HBASE-13804) Revert the changes in pom.xml

2015-06-02 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh resolved HBASE-13804.

  Resolution: Fixed
Hadoop Flags: Reviewed

committed.  thanks jingcheng and anoop

> Revert the changes in pom.xml
> -
>
> Key: HBASE-13804
> URL: https://issues.apache.org/jira/browse/HBASE-13804
> Project: HBase
>  Issue Type: Sub-task
>  Components: mob
>Affects Versions: hbase-11339
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Fix For: hbase-11339
>
> Attachments: HBASE-13804.diff
>
>
> Some code was deleted in pom.xml.
> {noformat}
> 
>target/jacoco.exec
> 
> {noformat}
> We can revert the change if it is not necessary.





[jira] [Created] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-06-02 Thread Dev Lakhani (JIRA)
Dev Lakhani created HBASE-13825:
---

 Summary: Get operations on large objects fail with protocol errors
 Key: HBASE-13825
 URL: https://issues.apache.org/jira/browse/HBASE-13825
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.1, 1.0.0
Reporter: Dev Lakhani


When performing a get operation on a column family with more than 64MB of data, 
the operation fails with:

Caused by: Portable(java.io.IOException): Call to host:port failed on local 
exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message 
was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to 
increase the size limit.
at 
org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
at 
org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)

This may be related to https://issues.apache.org/jira/browse/HBASE-11747, but 
that issue relates to cluster status.

Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
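
For context, a minimal protobuf-level illustration (not an HBase fix) of the size limit 
the exception message refers to; the 256 MB figure is only an example value.
{noformat}
import com.google.protobuf.CodedInputStream;

public class SizeLimitSketch {
  public static void main(String[] args) {
    byte[] payload = new byte[0];  // stand-in for a serialized RPC response
    CodedInputStream in = CodedInputStream.newInstance(payload);
    // CodedInputStream rejects messages larger than its size limit (64 MB by default in
    // protobuf 2.x) with InvalidProtocolBufferException; raising the limit allows bigger
    // messages to be parsed.
    in.setSizeLimit(256 * 1024 * 1024);
    // A generated message would then be read via SomeMessage.parseFrom(in).
  }
}
{noformat}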








HBASE-13337

2015-06-02 Thread Samir Ahmic
Hi all,

Can someone take a look at
https://issues.apache.org/jira/browse/HBASE-13337

Regards
Samir


[jira] [Resolved] (HBASE-13818) manual region split from HBase shell, I found that split command acts incorrectly with hex split keys

2015-06-02 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi resolved HBASE-13818.
---
Resolution: Invalid

[~kukuzidian] next time, please use the user mailing lists listed at 
http://hbase.apache.org/mail-lists.html if you are facing any issues, rather 
than raising a JIRA.

> manual region split from HBase shell, I found that split command acts 
> incorrectly with hex split keys
> -
>
> Key: HBASE-13818
> URL: https://issues.apache.org/jira/browse/HBASE-13818
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.96.2
>Reporter: zhangjg
>
> manual region split from HBase shell, I found that split command acts 
> incorrectly with hex split keys
> hbase(main):001:0> split 
> 'sdb,\x00\x00+Ug\xD60\x00\x00\x01\x00\x10\xC0,1432909366893.6b601fa4eb9e1244d049bde93e340736.'
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/data/xiaoju/hbase-0.96.2-hadoop2/lib/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/data/xiaoju/hbase-0.96.2-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/data/xiaoju/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> 2015-06-01 11:40:46,986 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> ERROR: Illegal character code:44, <,> at 3. User-space table qualifiers can 
> only contain 'alphanumeric characters': i.e. [a-zA-Z_0-9-.]: 
> sdb,"\x00\x00+Ug\xD60\x00\x00\x01\x00\x10\xC0",1432909366893.6b601fa4eb9e1244d049bde93e340736.
> Here is some help for this command:
> Split entire table or pass a region to split individual region.  With the 
> second parameter, you can specify an explicit split key for the region.  
> Examples:
> split 'tableName'
> split 'namespace:tableName'
> split 'regionName' # format: 'tableName,startKey,id'
> split 'tableName', 'splitKey'
> split 'regionName', 'splitKey'





[jira] [Resolved] (HBASE-13647) Default value for hbase.client.operation.timeout is too high

2015-06-02 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev resolved HBASE-13647.
--
Resolution: Fixed

done, thank you for reviewing.

> Default value for hbase.client.operation.timeout is too high
> 
>
> Key: HBASE-13647
> URL: https://issues.apache.org/jira/browse/HBASE-13647
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.0.1, 0.98.13, 1.2.0, 1.1.1
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
>Priority: Blocker
> Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1, 0.98.13
>
> Attachments: HBASE-13647.patch, HBASE-13647.v2.patch, 
> HBASE-13647.v3.patch
>
>
> The default value for hbase.client.operation.timeout is too high: it is 
> Long.MAX_VALUE. That value lets service calls to coprocessor endpoints block 
> indefinitely.
> Should we introduce a better default value for that?
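
A minimal client-side sketch (not part of the patches) of overriding the timeout in 
question rather than relying on the default; the 120000 ms value is illustrative only.
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class OperationTimeoutSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Bound client operations (including coprocessor endpoint calls) instead of relying
    // on the default; 120000 ms (2 minutes) is only an example value.
    conf.setLong("hbase.client.operation.timeout", 120000L);
    // A Connection created from this conf would then use the bounded operation timeout.
  }
}
{noformat}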


