[jira] [Created] (HBASE-16298) ESAPI.properties missing in hbase-server.jar

2016-07-28 Thread JIRA
Sylvain Veyrié created HBASE-16298:
--

 Summary: ESAPI.properties missing in hbase-server.jar
 Key: HBASE-16298
 URL: https://issues.apache.org/jira/browse/HBASE-16298
 Project: HBase
  Issue Type: Bug
  Components:  Interface
Affects Versions: 1.2.2, 1.1.5
 Environment: Debian 8.2
Linux 3.16.0-4-amd64
OpenJDK 8u91-b14-3ubuntu1~16.04.1
Reporter: Sylvain Veyrié


For policy/compliance reasons, we removed the tests jars from lib/ directory on 
HBase Master. Everything was working fine from 1.0 to 1.1.3.

When I upgraded from 1.1.3 to 1.1.5, the /master-status page started to return 
an error 500: {{java.lang.IllegalArgumentException: Failed to load 
ESAPI.properties as a classloader resource.}}

After some searching, I found that ESAPI was added by HBASE-15122, which also 
added the ESAPI.properties file into src/main/resources.

However, it seems an exclusion has been put on packaging: the file is absent 
from hbase-server-1.1.5.jar, but present in hbase-server-1.1.5-tests.jar, which 
is in the lib/ directory in the tar.gz distribution.

Our workaround is to deploy hbase-server-1.1.5-tests.jar back into lib/. 
However, it does not seem right to require the tests jar for the HBase master 
to work properly.

Even if it is the current HBase policy to keep those jars, I think the 
hbase-server.jar should contain ESAPI.properties.

The same thing applies for 1.2 branch.
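One hypothetical way the packaging could be fixed (a sketch only; the actual hbase-server pom layout and the exclusion that caused this may differ) is to make sure the file from src/main/resources ships in the main jar rather than only the -tests jar:

```xml
<!-- Sketch: include ESAPI.properties in the main hbase-server jar. -->
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <includes>
        <include>ESAPI.properties</include>
      </includes>
    </resource>
  </resources>
</build>
```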



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16298) ESAPI.properties missing in hbase-server.jar

2016-07-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-16298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Veyrié updated HBASE-16298:
---
Attachment: HBASE-16298.patch

Patch based on 1.1.5. Should be applied on other branches as well.

> ESAPI.properties missing in hbase-server.jar
> 
>
> Key: HBASE-16298
> URL: https://issues.apache.org/jira/browse/HBASE-16298
> Project: HBase
>  Issue Type: Bug
>  Components:  Interface
>Affects Versions: 1.1.5, 1.2.2
> Environment: Debian 8.2
> Linux 3.16.0-4-amd64
> OpenJDK 8u91-b14-3ubuntu1~16.04.1
>Reporter: Sylvain Veyrié
> Attachments: HBASE-16298.patch
>
>
> For policy/compliance reasons, we removed the tests jars from lib/ directory 
> on HBase Master. Everything was working fine from 1.0 to 1.1.3.
> When I upgraded from 1.1.3 to 1.1.5, the /master-status page started to 
> return an error 500: {{java.lang.IllegalArgumentException: Failed to load 
> ESAPI.properties as a classloader resource.}}
> After some searching, I found that ESAPI was added by HBASE-15122, which 
> also added the ESAPI.properties file into 
> src/main/resources.
> However, it seems an exclusion has been put on packaging: the file is absent 
> from hbase-server-1.1.5.jar, but present in hbase-server-1.1.5-tests.jar, 
> which is in the lib/ directory in the tar.gz distribution.
> Our workaround is to deploy hbase-server-1.1.5-tests.jar back into lib/. 
> However, it does not seem right to require the tests jar for the HBase 
> master to work properly.
> Even if it is the current HBase policy to keep those jars, I think the 
> hbase-server.jar should contain ESAPI.properties.
> The same thing applies for 1.2 branch.





[jira] [Commented] (HBASE-10092) Move up on to log4j2

2016-08-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15419920#comment-15419920
 ] 

Mikael Ståldal commented on HBASE-10092:


Hi,

I am member of Apache Logging project PMC.

Due to various issues in Log4j 1.2.x, it will no longer work in JDK9: 
http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-July/008654.html

As Log4j 1.x is EOL, these issues will not be fixed. We suggest upgrading to 
Log4j 2.x; we can offer help to do so.
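For code still written against the Log4j 1.x API, Log4j 2 provides a compatibility bridge (log4j-1.2-api), so an upgrade can often avoid touching call sites. A sketch of the Maven dependency swap (the version number is illustrative):

```xml
<!-- Remove the old log4j:log4j dependency, then route 1.x API calls
     to the Log4j 2 backend via the bridge artifact. -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-1.2-api</artifactId>
  <version>2.7</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.7</version>
</dependency>
```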


> Move up on to log4j2
> 
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Alex Newman
> Fix For: 2.0.0
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092-preview-v0.patch, 
> HBASE-10092.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have an adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in the attached patch but am currently stuck in maven 
> dependency resolution hell courtesy of our slf4j.  Fixing will take some 
> concentration and a good net connection, an item I currently lack.  Other 
> TODOs: we will need to fix our little log-level-setting jsp page -- will 
> likely have to undo our use of hadoop's tool here -- and the config system 
> changes a little.
> I will return to this project soon.  Will bring numbers.
>  





[jira] [Commented] (HBASE-10092) Move up on to log4j2

2016-08-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424051#comment-15424051
 ] 

Mikael Ståldal commented on HBASE-10092:


We are working on supporting Log4j 1 log4j.properties configuration files. The 
plan is to have that ready in the next release of Log4j (2.7).

> Move up on to log4j2
> 
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Alex Newman
> Fix For: 2.0.0
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092-preview-v0.patch, 
> HBASE-10092.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have an adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in the attached patch but am currently stuck in maven 
> dependency resolution hell courtesy of our slf4j.  Fixing will take some 
> concentration and a good net connection, an item I currently lack.  Other 
> TODOs: we will need to fix our little log-level-setting jsp page -- will 
> likely have to undo our use of hadoop's tool here -- and the config system 
> changes a little.
> I will return to this project soon.  Will bring numbers.
>  





[jira] [Created] (HBASE-16730) Exclude junit as a transitive dependency from hadoop-common

2016-09-29 Thread JIRA
Nils Larsgård created HBASE-16730:
-

 Summary: Exclude junit as a transitive dependency from 
hadoop-common
 Key: HBASE-16730
 URL: https://issues.apache.org/jira/browse/HBASE-16730
 Project: HBase
  Issue Type: Improvement
  Components: hbase
Reporter: Nils Larsgård
Priority: Trivial


Add an exclusion for junit to the hadoop-common dependency in hbase-client.
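In Maven terms the requested change would look roughly like this (a sketch against the hbase-client pom; the hadoop-common coordinates are standard, the surrounding pom structure is assumed):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <!-- junit should be test-scoped in downstream builds, not inherited
         transitively through hadoop-common. -->
    <exclusion>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```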





[jira] [Updated] (HBASE-16730) Exclude junit as a transitive dependency from hadoop-common

2016-09-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-16730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nils Larsgård updated HBASE-16730:
--
Labels: hbase-client junit  (was: )

> Exclude junit as a transitive dependency from hadoop-common
> ---
>
> Key: HBASE-16730
> URL: https://issues.apache.org/jira/browse/HBASE-16730
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Reporter: Nils Larsgård
>Priority: Trivial
>  Labels: hbase-client, junit
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> Add an exclusion for junit to the hadoop-common dependency in hbase-client.





[jira] [Created] (HBASE-14903) Table Or Region?

2015-12-01 Thread JIRA
胡托 created HBASE-14903:
--

 Summary: Table Or Region?
 Key: HBASE-14903
 URL: https://issues.apache.org/jira/browse/HBASE-14903
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.0
Reporter: 胡托
Priority: Blocker


 I've been reading the latest Reference Guide and trying to translate it into 
Chinese!
 I think the sentence "When a table is in the process of splitting," will 
be "When a Region is in the process of splitting," in chapter 62.2 
(hbase:meta).
 By the way, is this document the latest 
(http://hbase.apache.org/book.html#arch.overview)? I will translate it!





[jira] [Updated] (HBASE-14903) Table Or Region?

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

胡托 updated HBASE-14903:
---
Description: 
 I've been reading the latest Reference Guide and trying to translate it into 
Chinese!
 I think the sentence "When a table is in the process of splitting," 
should be "When a Region is in the process of splitting," in chapter 62.2 
(hbase:meta).
 By the way, is this document the latest 
(http://hbase.apache.org/book.html#arch.overview)? I will translate it!

  was:
 I've been reading the latest Reference Guide and trying to translate it into 
Chinese!
 I think the sentence "When a table is in the process of splitting," will 
be "When a Region is in the process of splitting," in chapter 62.2 
(hbase:meta).
 By the way, is this document the latest 
(http://hbase.apache.org/book.html#arch.overview)? I will translate it!


> Table Or Region?
> 
>
> Key: HBASE-14903
> URL: https://issues.apache.org/jira/browse/HBASE-14903
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: 胡托
>Priority: Blocker
>
>  I've been reading the latest Reference Guide and trying to translate it 
> into Chinese!
>  I think the sentence "When a table is in the process of splitting," 
> should be "When a Region is in the process of splitting," in chapter 62.2 
> (hbase:meta).
>  By the way, is this document the latest 
> (http://hbase.apache.org/book.html#arch.overview)? I will translate it!





[jira] [Commented] (HBASE-14903) Table Or Region?

2015-12-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15039530#comment-15039530
 ] 

胡托 commented on HBASE-14903:


Can I submit the results of my translation so far? How do I do it? Thanks

> Table Or Region?
> 
>
> Key: HBASE-14903
> URL: https://issues.apache.org/jira/browse/HBASE-14903
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: 胡托
>Priority: Blocker
>
>  I've been reading the latest Reference Guide and trying to translate it 
> into Chinese!
>  I think the sentence "When a table is in the process of splitting," 
> should be "When a Region is in the process of splitting," in chapter 62.2 
> (hbase:meta).
>  By the way, is this document the latest 
> (http://hbase.apache.org/book.html#arch.overview)? I will translate it!





[jira] [Commented] (HBASE-14903) Table Or Region?

2015-12-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15039531#comment-15039531
 ] 

胡托 commented on HBASE-14903:


Can I submit the results of my translation so far? How do I do it? Thanks

> Table Or Region?
> 
>
> Key: HBASE-14903
> URL: https://issues.apache.org/jira/browse/HBASE-14903
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: 胡托
>Priority: Blocker
>
>  I've been reading the latest Reference Guide and trying to translate it 
> into Chinese!
>  I think the sentence "When a table is in the process of splitting," 
> should be "When a Region is in the process of splitting," in chapter 62.2 
> (hbase:meta).
>  By the way, is this document the latest 
> (http://hbase.apache.org/book.html#arch.overview)? I will translate it!





[jira] [Commented] (HBASE-14903) Table Or Region?

2015-12-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15043659#comment-15043659
 ] 

胡托 commented on HBASE-14903:


How do I create the patch? Can you give me some guidance? For example, on the 
document format?

> Table Or Region?
> 
>
> Key: HBASE-14903
> URL: https://issues.apache.org/jira/browse/HBASE-14903
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: 胡托
>Priority: Blocker
>
>  I've been reading the latest Reference Guide and trying to translate it 
> into Chinese!
>  I think the sentence "When a table is in the process of splitting," 
> should be "When a Region is in the process of splitting," in chapter 62.2 
> (hbase:meta).
>  By the way, is this document the latest 
> (http://hbase.apache.org/book.html#arch.overview)? I will translate it!





[jira] [Commented] (HBASE-14903) Table Or Region?

2015-12-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15043661#comment-15043661
 ] 

胡托 commented on HBASE-14903:


http://note.youdao.com/share/?id=86f628bba69e9de9170b4c0642d169c9&type=note

> Table Or Region?
> 
>
> Key: HBASE-14903
> URL: https://issues.apache.org/jira/browse/HBASE-14903
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: 胡托
>Priority: Blocker
>
>  I've been reading the latest Reference Guide and trying to translate it 
> into Chinese!
>  I think the sentence "When a table is in the process of splitting," 
> should be "When a Region is in the process of splitting," in chapter 62.2 
> (hbase:meta).
>  By the way, is this document the latest 
> (http://hbase.apache.org/book.html#arch.overview)? I will translate it!





[jira] [Commented] (HBASE-14903) Table Or Region?

2015-12-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15043660#comment-15043660
 ] 

胡托 commented on HBASE-14903:


http://note.youdao.com/share/?id=86f628bba69e9de9170b4c0642d169c9&type=note

> Table Or Region?
> 
>
> Key: HBASE-14903
> URL: https://issues.apache.org/jira/browse/HBASE-14903
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: 胡托
>Priority: Blocker
>
>  I've been reading the latest Reference Guide and trying to translate it 
> into Chinese!
>  I think the sentence "When a table is in the process of splitting," 
> should be "When a Region is in the process of splitting," in chapter 62.2 
> (hbase:meta).
>  By the way, is this document the latest 
> (http://hbase.apache.org/book.html#arch.overview)? I will translate it!





[jira] [Updated] (HBASE-14918) In-Memory MemStore Flush and Compaction

2015-12-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

再再 updated HBASE-14918:
---
Fix Version/s: 0.98.18

> In-Memory MemStore Flush and Compaction
> ---
>
> Key: HBASE-14918
> URL: https://issues.apache.org/jira/browse/HBASE-14918
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 0.98.18
>
>
> A memstore serves as the in-memory component of a store unit, absorbing all 
> updates to the store. From time to time these updates are flushed to a file 
> on disk, where they are compacted (by eliminating redundancies) and 
> compressed (i.e., written in a compressed format to reduce their storage 
> size).
> We aim to speed up data access, and therefore suggest to apply in-memory 
> memstore flush. That is to flush the active in-memory segment into an 
> intermediate buffer where it can be accessed by the application. Data in the 
> buffer is subject to compaction and can be stored in any format that allows 
> it to take up smaller space in RAM. The less space the buffer consumes the 
> longer it can reside in memory before data is flushed to disk, resulting in 
> better performance.
> Specifically, the optimization is beneficial for workloads with 
> medium-to-high key churn which incur many redundant cells, like persistent 
> messaging. 
> We suggest to structure the solution as 4 subtasks (respectively, patches). 
> (1) Infrastructure - refactoring of the MemStore hierarchy, introducing 
> segment (StoreSegment) as first-class citizen, and decoupling memstore 
> scanner from the memstore implementation;
> (2) Adding StoreServices facility at the region level to allow memstores 
> update region counters and access region level synchronization mechanism;
> (3) Implementation of a new memstore (CompactingMemstore) with non-optimized 
> immutable segment representation, and 
> (4) Memory optimization including compressed format representation and off 
> heap allocations.
> This Jira continues the discussion in HBASE-13408.
> Design documents, evaluation results and previous patches can be found in 
> HBASE-13408. 





[jira] [Created] (HBASE-15053) hbase.client.max.perregion.tasks can affect get or scan operation?

2015-12-29 Thread JIRA
胡托 created HBASE-15053:
--

 Summary: hbase.client.max.perregion.tasks  can  affect get or scan 
operation?
 Key: HBASE-15053
 URL: https://issues.apache.org/jira/browse/HBASE-15053
 Project: HBase
  Issue Type: Test
Reporter: 胡托


In the Reference Guide, hbase.client.max.perregion.tasks is described as: if 
there are already hbase.client.max.perregion.tasks writes in progress for this 
region, new puts won't be sent to this region until some writes finish.

Can it also affect read operations? I want to know, thanks!
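For reference, the property in question is a client-side setting in hbase-site.xml; a minimal sketch (the value shown is an assumption of the default described in the Reference Guide):

```xml
<!-- Maximum number of concurrent mutation tasks the client will
     maintain to a single region. -->
<property>
  <name>hbase.client.max.perregion.tasks</name>
  <value>1</value>
</property>
```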








[jira] [Commented] (HBASE-15053) hbase.client.max.perregion.tasks can affect get or scan operation?

2015-12-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-15053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074757#comment-15074757
 ] 

胡托 commented on HBASE-15053:


Thank you!

> hbase.client.max.perregion.tasks  can  affect get or scan operation?
> 
>
> Key: HBASE-15053
> URL: https://issues.apache.org/jira/browse/HBASE-15053
> Project: HBase
>  Issue Type: Test
>Reporter: 胡托
>
> In the Reference Guide, hbase.client.max.perregion.tasks is described as: if 
> there are already hbase.client.max.perregion.tasks writes in progress for 
> this region, new puts won't be sent to this region until some writes finish.
> Can it also affect read operations? I want to know, thanks!





[jira] [Resolved] (HBASE-14903) Table Or Region?

2015-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

胡托 resolved HBASE-14903.

Resolution: Invalid

> Table Or Region?
> 
>
> Key: HBASE-14903
> URL: https://issues.apache.org/jira/browse/HBASE-14903
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: 胡托
>Assignee: 胡托
>Priority: Blocker
>
>  I've been reading the latest Reference Guide and trying to translate it 
> into Chinese!
>  I think the sentence "When a table is in the process of splitting," 
> should be "When a Region is in the process of splitting," in chapter 62.2 
> (hbase:meta).
>  By the way, is this document the latest 
> (http://hbase.apache.org/book.html#arch.overview)? I will translate it!





[jira] [Resolved] (HBASE-15053) hbase.client.max.perregion.tasks can affect get or scan operation?

2015-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-15053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

胡托 resolved HBASE-15053.

Resolution: Invalid

> hbase.client.max.perregion.tasks  can  affect get or scan operation?
> 
>
> Key: HBASE-15053
> URL: https://issues.apache.org/jira/browse/HBASE-15053
> Project: HBase
>  Issue Type: Test
>Reporter: 胡托
>
> In the Reference Guide, hbase.client.max.perregion.tasks is described as: if 
> there are already hbase.client.max.perregion.tasks writes in progress for 
> this region, new puts won't be sent to this region until some writes finish.
> Can it also affect read operations? I want to know, thanks!





[jira] [Created] (HBASE-20522) Maven artifacts for HBase 2 are not available at maven central

2018-05-02 Thread JIRA
Ismaël Mejía created HBASE-20522:


 Summary: Maven artifacts for HBase 2 are not available at maven 
central
 Key: HBASE-20522
 URL: https://issues.apache.org/jira/browse/HBASE-20522
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0
Reporter: Ismaël Mejía






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20522) Maven artifacts for HBase 2 are not available at maven central

2018-05-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-20522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismaël Mejía resolved HBASE-20522.
--
Resolution: Fixed

The deps are available, I confirm it is fixed now. Thanks [~stack]

> Maven artifacts for HBase 2 are not available at maven central
> --
>
> Key: HBASE-20522
> URL: https://issues.apache.org/jira/browse/HBASE-20522
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Ismaël Mejía
>Priority: Major
>






[jira] [Created] (HBASE-20537) The download link is not available in the downloads webpage

2018-05-07 Thread JIRA
Ismaël Mejía created HBASE-20537:


 Summary: The download link is not available in the downloads 
webpage
 Key: HBASE-20537
 URL: https://issues.apache.org/jira/browse/HBASE-20537
 Project: HBase
  Issue Type: Bug
  Components: website
Affects Versions: 2.0.0
Reporter: Ismaël Mejía


When you open https://hbase.apache.org/downloads.html, there is no direct link 
to download either the binary or the source distribution.






[jira] [Created] (HBASE-19700) assertion failure with client 1.4, cluster 1.2.3 and table with presplit

2018-01-03 Thread JIRA
Clément Guillaume created HBASE-19700:
-

 Summary: assertion failure with client 1.4, cluster 1.2.3 and 
table with presplit
 Key: HBASE-19700
 URL: https://issues.apache.org/jira/browse/HBASE-19700
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.4.0
Reporter: Clément Guillaume


A system assertion (which is enabled by default when running 
maven-failsafe-plugin) fails when a 1.4 client is talking to a 1.2.3 
cluster for a table with presplits. I believe the [1.4 client is meant to be 
compatible with a 1.2 
cluster|http://mail-archives.apache.org/mod_mbox/hbase-dev/201711.mbox/%3C5BAAC90F-31D8-4A5F-B9E4-BA61FF4CD40E%40gmail.com%3E]

{code}
@Test
public void test() throws IOException{
Configuration hbaseConfig = HBaseConfiguration.create();
hbaseConfig.set(HConstants.ZOOKEEPER_QUORUM, "hbase123.docker");
Connection connection = ConnectionFactory.createConnection(hbaseConfig);

TableName tableName = TableName.valueOf("AssertionTest");
Admin admin = connection.getAdmin();

if(!admin.tableExists(tableName)){
HTableDescriptor htable = new HTableDescriptor(tableName);
htable.addFamily(new HColumnDescriptor(new byte[]{(byte)'a'}));
byte[][] splitPoints = {{1, 2, 3, 4, 5, 6, 7}};
admin.createTable(htable, splitPoints);
System.out.println("table created");
}

Table table = connection.getTable(tableName);

ResultScanner scanner = table.getScanner(new Scan());
scanner.iterator().hasNext(); // Exception thrown here
}
{code}
{code}
java.lang.RuntimeException: java.lang.AssertionError
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:227)
at 
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:277)
at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:438)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:312)
at 
org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:92)
at [...]
Caused by: java.lang.AssertionError
at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:484)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:312)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1324)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1221)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:356)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
... 30 more
{code}

[Email 
thread|http://mail-archives.apache.org/mod_mbox/hbase-user/201712.mbox/%3CCALte62z-0xxhQiefeRc_3xs-bhj1VZU%2BBtd47m-KfPZb02Tpcw%40mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-17180) Let HBase thrift2 support TThreadedSelectorServer

2016-11-27 Thread JIRA
易剑 created HBASE-17180:
--

 Summary: Let HBase thrift2 support TThreadedSelectorServer
 Key: HBASE-17180
 URL: https://issues.apache.org/jira/browse/HBASE-17180
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Affects Versions: 1.2.3
Reporter: 易剑
Priority: Minor
 Fix For: 1.2.3


Add TThreadedSelectorServer for HBase Thrift2





[jira] [Created] (HBASE-17181) Let HBase thrift2 support TThreadedSelectorServer

2016-11-27 Thread JIRA
易剑 created HBASE-17181:
--

 Summary: Let HBase thrift2 support TThreadedSelectorServer
 Key: HBASE-17181
 URL: https://issues.apache.org/jira/browse/HBASE-17181
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Affects Versions: 1.2.3
Reporter: 易剑
Priority: Minor
 Fix For: 1.2.3


Add TThreadedSelectorServer for HBase Thrift2





[jira] [Updated] (HBASE-17181) Let HBase thrift2 support TThreadedSelectorServer

2016-11-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

易剑 updated HBASE-17181:
---
Attachment: HBASE-17181-V1.patch

Add TThreadedSelectorServer for HBase Thrift2

> Let HBase thrift2 support TThreadedSelectorServer
> -
>
> Key: HBASE-17181
> URL: https://issues.apache.org/jira/browse/HBASE-17181
> Project: HBase
>  Issue Type: New Feature
>  Components: Thrift
>Affects Versions: 1.2.3
>Reporter: 易剑
>Priority: Minor
>  Labels: features
> Fix For: 1.2.3
>
> Attachments: HBASE-17181-V1.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Add TThreadedSelectorServer for HBase Thrift2





[jira] [Created] (HBASE-17182) Memory leak from openScanner of HBase thrift2

2016-11-27 Thread JIRA
易剑 created HBASE-17182:
--

 Summary: Memory leak from openScanner of HBase thrift2
 Key: HBASE-17182
 URL: https://issues.apache.org/jira/browse/HBASE-17182
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: 易剑


If a client calls openScanner but then (due to a coredump or otherwise) never 
calls closeScanner, the scanner is never removed from scannerMap.





[jira] [Commented] (HBASE-17182) Memory leak from openScanner of HBase thrift2

2016-11-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-17182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701115#comment-15701115
 ] 

易剑 commented on HBASE-17182:


The client may never call closeScanner:

{code}
public int openScanner(ByteBuffer table, TScan scan) throws TIOError,
    TException {
  Table htable = getTable(table);
  ResultScanner resultScanner = null;
  try {
    resultScanner = htable.getScanner(scanFromThrift(scan));
  } catch (IOException e) {
    throw getTIOError(e);
  } finally {
    closeTable(htable);
  }
  return addScanner(resultScanner);
}

private int addScanner(ResultScanner scanner) {
  int id = nextScannerId.getAndIncrement();
  scannerMap.put(id, scanner);
  return id;
}

public void closeScanner(int scannerId) throws TIOError, TIllegalArgument,
    TException {
  LOG.debug("scannerClose: id=" + scannerId);
  ResultScanner scanner = getScanner(scannerId);
  if (scanner == null) {
    String message = "scanner ID is invalid";
    LOG.warn(message);
    TIllegalArgument ex = new TIllegalArgument();
    ex.setMessage("Invalid scanner Id");
    throw ex;
  }
  scanner.close();
  removeScanner(scannerId);
}
{code}

> Memory leak from openScanner of HBase thrift2
> -
>
> Key: HBASE-17182
>     URL: https://issues.apache.org/jira/browse/HBASE-17182
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: 易剑
>
> If a client calls openScanner but then (due to a coredump or otherwise) never 
> calls closeScanner, the scanner is never removed from scannerMap.





[jira] [Commented] (HBASE-17182) Memory leak from openScanner of HBase thrift2

2016-11-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-17182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701122#comment-15701122
 ] 

易剑 commented on HBASE-17182:


A possible solution: give each scanner a timestamp, and add a thread that 
cleans up stale scanners regularly.
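A minimal sketch of that idea (all names here are hypothetical; the real Thrift2 handler would store ResultScanner instances, call scanner.close() on eviction, and schedule evictStale via a ScheduledThreadPoolExecutor or ChoreService):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: each open scanner is stored with its creation time, and a periodic
// task evicts entries older than a TTL, so scanners abandoned by crashed
// clients do not leak.
public class ScannerCleanupSketch {
    static final long TTL_MS = 60_000;

    static class TimedScanner {
        final long createdAt;
        final String name; // stands in for the real ResultScanner
        TimedScanner(String name, long createdAt) {
            this.name = name;
            this.createdAt = createdAt;
        }
    }

    final Map<Integer, TimedScanner> scannerMap = new ConcurrentHashMap<>();

    // Evict entries older than TTL_MS relative to 'now'; returns count evicted.
    // In the real handler this would also close the underlying scanner.
    int evictStale(long now) {
        int evicted = 0;
        for (Map.Entry<Integer, TimedScanner> e : scannerMap.entrySet()) {
            if (now - e.getValue().createdAt > TTL_MS
                    && scannerMap.remove(e.getKey(), e.getValue())) {
                evicted++;
            }
        }
        return evicted;
    }

    public static void main(String[] args) {
        ScannerCleanupSketch handler = new ScannerCleanupSketch();
        long now = System.currentTimeMillis();
        handler.scannerMap.put(1, new TimedScanner("stale", now - TTL_MS - 1));
        handler.scannerMap.put(2, new TimedScanner("fresh", now));
        System.out.println(handler.evictStale(now)); // prints 1
        System.out.println(handler.scannerMap.size()); // prints 1
    }
}
```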

> Memory leak from openScanner of HBase thrift2
> -
>
> Key: HBASE-17182
> URL: https://issues.apache.org/jira/browse/HBASE-17182
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: 易剑
>
> If a client calls openScanner but then (due to a coredump or otherwise) never 
> calls closeScanner, the scanner is never removed from scannerMap.





[jira] [Commented] (HBASE-17182) Memory leak from openScanner of HBase thrift2

2016-11-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-17182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15703817#comment-15703817
 ] 

易剑 commented on HBASE-17182:


Maybe use the same approach as ConnectionCache: a ChoreService with a 
ScheduledChore/ScheduledThreadPoolExecutor.

> Memory leak from openScanner of HBase thrift2
> -
>
> Key: HBASE-17182
> URL: https://issues.apache.org/jira/browse/HBASE-17182
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: 易剑
>
> If a client calls openScanner but then (due to a coredump or otherwise) never 
> calls closeScanner, the scanner is never removed from scannerMap.





[jira] [Created] (HBASE-17702) Octal binary format still wrong in some hbase-shell commands

2017-02-27 Thread JIRA
Sylvain Veyrié created HBASE-17702:
--

 Summary: Octal binary format still wrong in some hbase-shell 
commands
 Key: HBASE-17702
 URL: https://issues.apache.org/jira/browse/HBASE-17702
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.7
 Environment: Debian 8.2
openjdk version "1.8.0_102"
Reporter: Sylvain Veyrié


As HBASE-1611 stated, octal representation is still used in some places.

It is the case for the {{status 'detailed'}} command in {{hbase-shell}}.

The problem is that the bad octal formatting described in HBASE-2035 is still 
applied.

For example:

If the hex representation of the start row (from {{scan 'hbase:meta'}}) of two 
regions is:
{noformat}
whatever\x91bbb
whatever\x98aaa
{noformat}

The octal representation (from {{status 'detailed'}}) is:
{noformat}
whatever\357bbb
whatever\357aaa
{noformat}

The latter being wrong.
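The expected behavior is the {{\xNN}} hex escaping that {{scan}} output shows. The sketch below re-implements that escaping in the style of {{org.apache.hadoop.hbase.util.Bytes.toStringBinary}}; it is an illustration of the intended format, not the actual shell code.

```java
// Minimal re-implementation of \xNN hex escaping in the style of HBase's
// Bytes.toStringBinary: printable ASCII (except backslash) passes through,
// every other byte becomes a two-digit hex escape.
public class HexEscape {
    public static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte value : b) {
            int ch = value & 0xFF;
            if (ch >= ' ' && ch <= '~' && ch != '\\') {
                sb.append((char) ch);
            } else {
                sb.append(String.format("\\x%02X", ch));
            }
        }
        return sb.toString();
    }
}
```

With this formatting, the byte 0x91 renders as {{\x91}}, matching what {{scan 'hbase:meta'}} prints, whereas the buggy path renders it in octal.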



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-19037) BLOCKCACHE not work in console

2017-10-18 Thread JIRA
Александр created HBASE-19037:
-

 Summary: BLOCKCACHE not work in console
 Key: HBASE-19037
 URL: https://issues.apache.org/jira/browse/HBASE-19037
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.6
Reporter: Александр


I'm testing the speed. At the time of the request, I know part of the key.
```
scan 'id_bank', {STARTROW=>"24168557"+"\137",STOPROW=>"24168557"+"\177",COLUMNS 
=> ['high', 'low'], BLOCKCACHE => 'true'}
```
When I run the scan the first time, the response takes a while to return, but if 
I make a second request the answer comes back quickly. Why is that?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-10092) Move up on to log4j2

2017-12-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297390#comment-16297390
 ] 

Mikael Ståldal commented on HBASE-10092:


{quote}SLF4J doc says it will only show if backend is logback. Otherwise, 
dropped. Good to have it though. We'll have to call out in release notes that 
FATAL no longer shows in our logs (usually){quote}

Are you referring to https://www.slf4j.org/faq.html#fatal or 
https://www.slf4j.org/faq.html#marker_interface ?

That's not correct; Log4j 2 also supports markers.


> Move up on to log4j2
> 
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092-preview-v0.patch, 
> HBASE-10092.master.001.patch, HBASE-10092.master.002.patch, 
> HBASE-10092.master.003.patch, HBASE-10092.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have an adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in attached patch but am currently stuck in maven dependency 
> resolve hell courtesy of our slf4j.  Fixing will take some concentration and 
> a good net connection, an item I currently lack.  Other TODOs are that will 
> need to fix our little log level setting jsp page -- will likely have to undo 
> our use of hadoop's tool here -- and the config system changes a little.
> I will return to this project soon.  Will bring numbers.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-10092) Move up on to log4j2

2017-12-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297413#comment-16297413
 ] 

Mikael Ståldal commented on HBASE-10092:


https://www.slf4j.org/faq.html#fatal is not correct; you can do that with Log4j 
2 as well.

You can use SLF4J as frontend with Log4j 2 as backend.

However, then you cannot use Log4j 1 configuration files. As far as I know, 
only Log4j 1 can read those configuration files.
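Since Log4j 2 cannot read Log4j 1 configuration files, migrating means writing a new configuration. A minimal sketch of an equivalent console-only {{log4j2.xml}} (pattern and levels are illustrative, not HBase's actual shipped config):

```xml
<!-- log4j2.xml: minimal console-only configuration -->
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %-5level [%t] %c{2}: %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```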

> Move up on to log4j2
> 
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092-preview-v0.patch, 
> HBASE-10092.master.001.patch, HBASE-10092.master.002.patch, 
> HBASE-10092.master.003.patch, HBASE-10092.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have an adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in attached patch but am currently stuck in maven dependency 
> resolve hell courtesy of our slf4j.  Fixing will take some concentration and 
> a good net connection, an item I currently lack.  Other TODOs are that will 
> need to fix our little log level setting jsp page -- will likely have to undo 
> our use of hadoop's tool here -- and the config system changes a little.
> I will return to this project soon.  Will bring numbers.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18462) HBase server fails to start when rootdir contains spaces

2017-07-27 Thread JIRA
Ismaël Mejía created HBASE-18462:


 Summary: HBase server fails to start when rootdir contains spaces
 Key: HBASE-18462
 URL: https://issues.apache.org/jira/browse/HBASE-18462
 Project: HBase
  Issue Type: Bug
  Components: hbase, test
Affects Versions: 1.2.6, 1.3.1
Reporter: Ismaël Mejía
Priority: Minor


As part of the tests for the HBase connector for Beam I discovered that when 
you start an HBase server instance from a directory that contains spaces 
(rootdir), it does not start correctly. This happens both with the 
HBaseTestingUtility server and with the binary distribution.

The concrete exception says:
{quote}
Caused by: java.net.URISyntaxException: Illegal character in path at index 89: 
file:/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Java_JDK_Versions_Test/jdk/JDK
 1.7 
(latest)/label/beam/sdks/java/io/hbase/target/test-data/b11a0828-4628-4fe9-885d-073fb641ddc9
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.checkChars(URI.java:3002)
at java.net.URI$Parser.parseHierarchical(URI.java:3086)
at java.net.URI$Parser.parse(URI.java:3034)
at java.net.URI.(URI.java:595)
at java.net.URI.create(URI.java:857)
... 37 more
{quote}
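The failure mode is easy to reproduce: a raw space is an illegal character in a URI string, while {{File.toURI()}} percent-encodes it, which is why forcing the encoded form (%20) made the server start in the debugging experiment described later in this thread. The class and method names below are illustrative only.

```java
import java.io.File;
import java.net.URI;

// Demonstrates why a rootdir with a space breaks naive URI parsing, and how
// File.toURI() sidesteps it by percent-encoding the space.
public class SpaceInUri {
    // Returns true when parsing "file:<path>" fails, as it does for raw spaces.
    public static boolean rawParseFails(String path) {
        try {
            URI.create("file:" + path);   // e.g. "file:/tmp/JDK 1.7/hbase"
            return false;
        } catch (IllegalArgumentException e) {
            return true;                  // "Illegal character in path"
        }
    }

    // Encodes the path properly: spaces become %20.
    public static String encoded(String path) {
        return new File(path).toURI().toString();
    }
}
```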





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-10092) Move up on to log4j2

2017-07-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103710#comment-16103710
 ] 

Mikael Ståldal commented on HBASE-10092:


Yes, we have not made much progress recently on supporting Log4j 1.x config 
files. We have partial support, but it may not be very useful in practice. Full 
support would require some more work, and none of the core developers have 
prioritized that recently.

We will soon make a new release (2.9) but it will most likely not contain this.


> Move up on to log4j2
> 
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Alex Newman
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092.patch, 
> HBASE-10092-preview-v0.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have an adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in attached patch but am currently stuck in maven dependency 
> resolve hell courtesy of our slf4j.  Fixing will take some concentration and 
> a good net connection, an item I currently lack.  Other TODOs are that will 
> need to fix our little log level setting jsp page -- will likely have to undo 
> our use of hadoop's tool here -- and the config system changes a little.
> I will return to this project soon.  Will bring numbers.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-10092) Move up on to log4j2

2017-07-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103794#comment-16103794
 ] 

Mikael Ståldal commented on HBASE-10092:


That's the correct list.

> Move up on to log4j2
> 
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Alex Newman
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092.patch, 
> HBASE-10092-preview-v0.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have an adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in attached patch but am currently stuck in maven dependency 
> resolve hell courtesy of our slf4j.  Fixing will take some concentration and 
> a good net connection, an item I currently lack.  Other TODOs are that will 
> need to fix our little log level setting jsp page -- will likely have to undo 
> our use of hadoop's tool here -- and the config system changes a little.
> I will return to this project soon.  Will bring numbers.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18462) HBase server fails to start when rootdir contains spaces

2017-07-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103850#comment-16103850
 ] 

Ismaël Mejía commented on HBASE-18462:
--

You can find the full stacktrace in the linked issue (BEAM-2543). I have the 
impression that the error is in HBase. During debugging I tried changing the 
hbase.rootdir variable so that the encoding used the URI instead of the path 
(the space became a valid %20), and it worked. However, I didn't go further, 
because I realized that the whole Path <-> URI interplay was trickier than I 
expected.
As for the directory with spaces: I understand that this is not a recommended 
practice, but I don't see why it should break a server, considering that a 
space is not a forbidden character in most filesystems.

> HBase server fails to start when rootdir contains spaces
> 
>
> Key: HBASE-18462
> URL: https://issues.apache.org/jira/browse/HBASE-18462
> Project: HBase
>  Issue Type: Bug
>  Components: hbase, test
>Affects Versions: 1.3.1, 1.2.6
>Reporter: Ismaël Mejía
>Priority: Minor
>
> As part of the tests for the HBase connector for Beam I discovered that when 
> you start an HBase server instance from a directory that contains spaces 
> (rootdir), it does not start correctly. This happens both with the 
> HBaseTestingUtility server and with the binary distribution.
> The concrete exception says:
> {quote}
> Caused by: java.net.URISyntaxException: Illegal character in path at index 
> 89: 
> file:/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Java_JDK_Versions_Test/jdk/JDK
>  1.7 
> (latest)/label/beam/sdks/java/io/hbase/target/test-data/b11a0828-4628-4fe9-885d-073fb641ddc9
>   at java.net.URI$Parser.fail(URI.java:2829)
>   at java.net.URI$Parser.checkChars(URI.java:3002)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3086)
>   at java.net.URI$Parser.parse(URI.java:3034)
>   at java.net.URI.(URI.java:595)
>   at java.net.URI.create(URI.java:857)
>   ... 37 more
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18462) HBase server fails to start when rootdir contains spaces

2017-08-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16114081#comment-16114081
 ] 

Ismaël Mejía commented on HBASE-18462:
--

Oh, I am really surprised that HBase uses the Hadoop FileSystem even for a 
local filesystem (note that this test does not use a MiniCluster-based HDFS). 
I agree that an end user should never have to care about inserting those %20s, 
and your argument makes sense. I only mentioned it because that was my way to 
'hack' around it during the debug session; I didn't continue because I was 
confused by the use of Paths instead of URIs in HBase, but I see that the 
motivation also comes from this encoding issue.
I still don't see the logic of why I can use a directory with spaces on S3 or 
even on HDFS but not on a local filesystem. Something is not right; this 
should be consistent, no?

> HBase server fails to start when rootdir contains spaces
> 
>
> Key: HBASE-18462
> URL: https://issues.apache.org/jira/browse/HBASE-18462
> Project: HBase
>  Issue Type: Bug
>  Components: hbase, test
>Affects Versions: 1.3.1, 1.2.6
>Reporter: Ismaël Mejía
>Priority: Minor
>
> As part of the tests for the HBase connector for Beam I discovered that when 
> you start an HBase server instance from a directory that contains spaces 
> (rootdir), it does not start correctly. This happens both with the 
> HBaseTestingUtility server and with the binary distribution.
> The concrete exception says:
> {quote}
> Caused by: java.net.URISyntaxException: Illegal character in path at index 
> 89: 
> file:/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Java_JDK_Versions_Test/jdk/JDK
>  1.7 
> (latest)/label/beam/sdks/java/io/hbase/target/test-data/b11a0828-4628-4fe9-885d-073fb641ddc9
>   at java.net.URI$Parser.fail(URI.java:2829)
>   at java.net.URI$Parser.checkChars(URI.java:3002)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3086)
>   at java.net.URI$Parser.parse(URI.java:3034)
>   at java.net.URI.(URI.java:595)
>   at java.net.URI.create(URI.java:857)
>   ... 37 more
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-16869) Typo in "Disabling Blockcache" doc

2016-10-18 Thread JIRA
Clément Guillaume created HBASE-16869:
-

 Summary: Typo in "Disabling Blockcache" doc
 Key: HBASE-16869
 URL: https://issues.apache.org/jira/browse/HBASE-16869
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Clément Guillaume
Priority: Trivial


The [Disabling 
Blockcache|https://hbase.apache.org/book.html#disabling.blockcache] section of 
the documentation refers to {{hbase.block.cache.size}}. Should it be 
{{hfile.block.cache.size}}?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13830) Hbase REVERSED may throw Exception sometimes

2016-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643754#comment-15643754
 ] 

Nils Larsgård commented on HBASE-13830:
---

I still run into this issue, using hbase-client 1.2.3

> Hbase REVERSED may throw Exception sometimes
> 
>
> Key: HBASE-13830
> URL: https://issues.apache.org/jira/browse/HBASE-13830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: ryan.jin
>
> run a scan at hbase shell command.
> {code}
> scan 
> 'analytics_access',{ENDROW=>'9223370603647713262-flume01.hadoop-10.32.117.111-373563509',LIMIT=>10,REVERSED=>true}
> {code}
> will throw exception
> {code}
> java.io.IOException: java.io.IOException: Could not seekToPreviousRow 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://nameservice1/hbase/data/default/analytics_access/a54c47c568c00dd07f9d92cfab1accc7/cf/2e3a107e9fec4930859e992b61fb22f6,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=9223370603542781142-flume01.hadoop-10.32.117.111-378180911/cf:key/1433311994702/Put,
>  
> lastKey=9223370603715515112-flume01.hadoop-10.32.117.111-370923552/cf:timestamp/1433139261951/Put,
>  avgKeyLen=80, avgValueLen=115, entries=43544340, length=1409247455, 
> cur=9223370603647710245-flume01.hadoop-10.32.117.111-373563545/cf:payload/1433207065597/Put/vlen=644/mvcc=0]
>  to key 
> 9223370603647710245-flume01.hadoop-10.32.117.111-373563545/cf:payload/1433207065597/Put/vlen=644/mvcc=0
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:448)
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.seekToPreviousRow(ReversedKeyValueHeap.java:88)
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekToPreviousRow(ReversedStoreScanner.java:128)
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekToNextRow(ReversedStoreScanner.java:88)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:503)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3866)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3946)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3814)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3805)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3136)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: On-disk size without header provided is 
> 47701, but block header contains 10134. Block offset: -1, data starts with: 
> DATABLK*\x00\x00'\x96\x00\x01\x00\x04\x00\x00\x00\x005\x96^\xD2\x01\x00\x00@\x00\x00\x00'
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:451)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.access$400(HFileBlock.java:87)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1466)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:355)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:569)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:413)
>   ... 17 more
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
>

[jira] [Comment Edited] (HBASE-13830) Hbase REVERSED may throw Exception sometimes

2016-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643754#comment-15643754
 ] 

Nils Larsgård edited comment on HBASE-13830 at 11/7/16 10:25 AM:
-

I still run into this issue, using hbase-client 1.2.3

{code}
Caused by: java.io.IOException: On-disk size without header provided is 142128, 
but block header contains 10929. Block offset: -1, data starts with: 
DATABLK*\x00\x00*\xB1\x00\x01\x00\x1A\x00\x00\x00\x00\x02\x0E"w\x01\x00\x00@\x00\x00\x00*
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
... 13 more

at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267) 
~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
 ~[hbase-protocol-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219) 
~[hbase-client-1.2.3.jar:1.2.3]
{code}


was (Author: nilsmag...@gmail.com):
I still run into this issue, using hbase-client 1.2.3

> Hbase REVERSED may throw Exception sometimes
> 
>
> Key: HBASE-13830
> URL: https://issues.apache.org/jira/browse/HBASE-13830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: ryan.jin
>
> run a scan at hbase shell command.
> {code}
> scan 
> 'analytics_access',{ENDROW=>'9223370603647713262-flume01.hadoop-10.32.117.111-373563509',LIMIT=>10,REVERSED=>true}
> {code}
> will throw exception
> {code}
> java.io.IOException: java.io.IOException: Could not seekToPreviousRow 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://nameservice1/hbase/data/default/analytics_access/a54c47c568c00dd07f9d92cfab1accc7/cf/2e3a107e9fec4930859e992b61fb22f6,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=9223370603542781142-flume01.hadoop-10.32.117.111-378180911/cf:key/1433311994702/Put,
>  
> lastKey=9223370603715515112-flume01.hadoop-10.32.117.111-370923552/cf:timestamp/1433139261951/Put,
>  avgKeyLen=80, avgValueLen=115, entries=43544340, length=1409247455, 
> cur=9223370603647710245-flume01.hadoop-10.32.117.111-373563545/cf:payload/1433207065597/Put/vlen=644/mvcc=0]
>  to key 
> 9223370603647710245-flume01.hadoop-10.32.117.111-373563545/cf:payload/1433207065597/Put/vlen=644/mvcc=0
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:448)
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.seekToPreviousRow(ReversedKeyValueHeap.java:88)
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekToPreviousRow(ReversedStoreScanner.java:128)
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekToNextRow(ReversedStoreScanner.java:88)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:503)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3866)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3946)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3814)
>   at 
> org.apa

[jira] [Comment Edited] (HBASE-13830) Hbase REVERSED may throw Exception sometimes

2016-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643754#comment-15643754
 ] 

Nils Larsgård edited comment on HBASE-13830 at 11/7/16 10:26 AM:
-

I still run into this issue, using hbase-client 1.2.3

{code}
Caused by: java.io.IOException: On-disk size without header provided is 142128, 
but block header contains 10929. Block offset: -1, data starts with: 
DATABLK*\x00\x00*\xB1\x00\x01\x00\x1A\x00\x00\x00\x00\x02\x0E"w\x01\x00\x00@\x00\x00\x00*
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
... 13 more

at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267) 
~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
 ~[hbase-protocol-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219) 
~[hbase-client-1.2.3.jar:1.2.3]
{code}

I get this on large results, when repeatedly calling next(n) on the 
result-scanner to fetch more values. 


was (Author: nilsmag...@gmail.com):
I still run into this issue, using hbase-client 1.2.3

{code}
Caused by: java.io.IOException: On-disk size without header provided is 142128, 
but block header contains 10929. Block offset: -1, data starts with: 
DATABLK*\x00\x00*\xB1\x00\x01\x00\x1A\x00\x00\x00\x00\x02\x0E"w\x01\x00\x00@\x00\x00\x00*
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
... 13 more

at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267) 
~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
 ~[hbase-protocol-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219) 
~[hbase-client-1.2.3.jar:1.2.3]
{code}

> Hbase REVERSED may throw Exception sometimes
> 
>
> Key: HBASE-13830
> URL: https://issues.apache.org/jira/browse/HBASE-13830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: ryan.jin
>
> run a scan at hbase shell command.
> {code}
> scan 
> 'analytics_access',{ENDROW=>'9223370603647713262-flume01.hadoop-10.32.117.111-373563509',LIMIT=>10,REVERSED=>true}
> {code}
> will throw exception
> {code}
> java.io.IOException: java.io.IOException: Could not seekToPreviousRow 
> StoreFileScanner[HFileScanner for reader 

[jira] [Comment Edited] (HBASE-13830) Hbase REVERSED may throw Exception sometimes

2016-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643754#comment-15643754
 ] 

Nils Larsgård edited comment on HBASE-13830 at 11/7/16 10:26 AM:
-

I still run into this issue, using hbase-client 1.2.3

{code}
Caused by: java.io.IOException: On-disk size without header provided is 142128, 
but block header contains 10929. Block offset: -1, data starts with: 
DATABLK*\x00\x00*\xB1\x00\x01\x00\x1A\x00\x00\x00\x00\x02\x0E"w\x01\x00\x00@\x00\x00\x00*
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
... 13 more

at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267) 
~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
 ~[hbase-protocol-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219) 
~[hbase-client-1.2.3.jar:1.2.3]
{code}

I get this on large results, when repeatedly calling next( n ) on the 
result-scanner to fetch more values. 



> Hbase REVERSED may throw Exception sometimes
> 
>
> Key: HBASE-13830
> URL: https://issues.apache.org/jira/browse/HBASE-13830
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: ryan.jin
>
> run a scan from the hbase shell:
> {code}
> scan 
> 'analytics_access',{ENDROW=>'9223370603647713262-flume01.hadoop-10.32.117.111-373563509',LIMIT=>10,REVERSED=>true}
> {code}
> will throw exception
> {code}
> java.io.IOException: java.io.IO

[jira] [Comment Edited] (HBASE-13830) Hbase REVERSED may throw Exception sometimes

2016-11-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643754#comment-15643754
 ] 

Nils Larsgård edited comment on HBASE-13830 at 11/7/16 3:17 PM:


I still run into this issue, using hbase-client 1.2.3

{code}
Caused by: java.io.IOException: On-disk size without header provided is 142128, 
but block header contains 10929. Block offset: -1, data starts with: 
DATABLK*\x00\x00*\xB1\x00\x01\x00\x1A\x00\x00\x00\x00\x02\x0E"w\x01\x00\x00@\x00\x00\x00*
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
... 13 more

at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267) 
~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
 ~[hbase-client-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
 ~[hbase-protocol-1.2.3.jar:1.2.3]
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219) 
~[hbase-client-1.2.3.jar:1.2.3]
{code}

I get this on large result sets, when repeatedly calling next(n) on the 
ResultScanner to fetch more values. 

Edit: ignore this, the hbase-version I used when getting this was outdated.




[jira] [Created] (HBASE-9690) provide md5 and sha1 checksums for tarballs

2013-10-01 Thread JIRA
André Kelpe created HBASE-9690:
--

 Summary: provide md5 and sha1 checksums for tarballs
 Key: HBASE-9690
 URL: https://issues.apache.org/jira/browse/HBASE-9690
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.12
Reporter: André Kelpe


Currently there is no way to verify the integrity of release tarballs. Please 
provide md5, sha1 and gpg signatures so that users can be sure they have the 
correct tarball.

Many open source projects provide the above and hbase would benefit from that 
as well.

This looks like the right solution for the problem:

http://nicoulaj.github.io/checksum-maven-plugin/examples/generating-project-artifacts-checksums.html
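
For reference, wiring that plugin into a pom could look roughly like this. This is a sketch based on the linked checksum-maven-plugin documentation; the version number and algorithm list are illustrative, not taken from the HBase build:

```xml
<!-- Sketch only: attach the checksum-maven-plugin "artifacts" goal so that
     .md5 and .sha-1 files are generated next to the built artifacts. -->
<plugin>
  <groupId>net.nicoulaj.maven.plugins</groupId>
  <artifactId>checksum-maven-plugin</artifactId>
  <version>1.11</version><!-- illustrative version -->
  <executions>
    <execution>
      <goals>
        <goal>artifacts</goal>
      </goals>
      <configuration>
        <algorithms>
          <algorithm>MD5</algorithm>
          <algorithm>SHA-1</algorithm>
        </algorithms>
      </configuration>
    </execution>
  </executions>
</plugin>
```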



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9690) provide md5 and sha1 checksums for tarballs

2013-10-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-9690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782954#comment-13782954
 ] 

André Kelpe commented on HBASE-9690:


It seems that the mirrors are lacking those files:

Examples:
http://mirror.serversupportforum.de/apache/hbase/hbase-0.94.12/
http://ftp.halifax.rwth-aachen.de/apache/hbase/hbase-0.94.12/
http://apache.mirror.iphh.net/hbase/hbase-0.94.12/
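
What the requested verification amounts to can be sketched in a few lines: hash the downloaded tarball and compare the digest against the hex string published in the .sha1 file next to it. This is a self-contained illustration (the temp file stands in for a real hbase-X.Y.Z.tar.gz), not HBase release tooling:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class TarballChecksum {

    /** Hex-encoded SHA-1 digest of the given bytes. */
    static String sha1Hex(byte[] data) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** True when the tarball's SHA-1 matches the digest stored in the .sha1 file. */
    static boolean verify(Path tarball, Path sha1File)
            throws IOException, NoSuchAlgorithmException {
        String published = Files.readString(sha1File).trim();
        return sha1Hex(Files.readAllBytes(tarball)).equalsIgnoreCase(published);
    }

    public static void main(String[] args) throws Exception {
        // A tiny stand-in for a real release tarball.
        Path tarball = Files.createTempFile("hbase-demo", ".tar.gz");
        Files.write(tarball, "demo contents".getBytes(StandardCharsets.UTF_8));

        // The file a mirror would host next to the tarball.
        Path sha1File = Files.createTempFile("hbase-demo", ".tar.gz.sha1");
        Files.writeString(sha1File, sha1Hex(Files.readAllBytes(tarball)));

        System.out.println(verify(tarball, sha1File)); // prints true
    }
}
```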

> provide md5 and sha1 checksums for tarballs
> ---
>
> Key: HBASE-9690
> URL: https://issues.apache.org/jira/browse/HBASE-9690
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.12
>Reporter: André Kelpe
>



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9690) HBase Mirrors are missing md5 and sha1 checksums for tarballs

2013-10-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-9690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788055#comment-13788055
 ] 

André Kelpe commented on HBASE-9690:


any news on this?

> HBase Mirrors are missing md5 and sha1 checksums for tarballs
> -
>
> Key: HBASE-9690
> URL: https://issues.apache.org/jira/browse/HBASE-9690
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.12
>Reporter: André Kelpe
>



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA
刘泓 created HBASE-9913:
-

 Summary: weblogic deployment project implementation under the 
mapreduce hbase reported a NullPointer error 
 Key: HBASE-9913
 URL: https://issues.apache.org/jira/browse/HBASE-9913
 Project: HBase
  Issue Type: Bug
  Components: hadoop2
Affects Versions: 0.94.10
 Environment: weblogic windows
Reporter: 刘泓


java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at 
com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at 
com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at 
weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at 
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at 
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at 
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at 
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 
weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at 
weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
> 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Fix Version/s: 0.94.10
   Status: Patch Available  (was: Open)

> weblogic deployment project implementation under the mapreduce hbase reported 
> a NullPointer error 
> --
>
> Key: HBASE-9913
> URL: https://issues.apache.org/jira/browse/HBASE-9913
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2
>Affects Versions: 0.94.10
> Environment: weblogic windows
>Reporter: 刘泓
>  Labels: patch
> Fix For: 0.94.10
>
>
> java.lang.NullPointerException
>   at java.io.File.<init>(File.java:222)
>   at java.util.zip.ZipFile.<init>(ZipFile.java:75)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
>   at 
> com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
>   at 
> com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
>   at 
> weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
>   at 
> weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
>   at 
> weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
>   at 
> weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
>   at 
> weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
>   at 
> weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
>   at 
> weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
>   at 
> weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
>   at 
> weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
>   at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
>   at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
> > 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Status: Open  (was: Patch Available)




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at 
com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at 
com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at 
weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at 
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at 
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at 
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at 
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 
weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at 
weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

> 

By setting breakpoints and tracing the HBase source under both Tomcat and 
WebLogic, we found that the string returned by 
TableMapReduceUtil.findOrCreateJar is null. Under Tomcat, getProtocol() on the 
jar resource URL returns the "jar" type, while under WebLogic it returns the 
"zip" type, so we added a check for the "zip" type as well.
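
The check described above can be sketched as follows. This is an illustrative sketch of the idea (treat WebLogic's "zip" protocol like Tomcat's "jar"), not the code of the attached TableMapReduceUtil patch; the class and method names are hypothetical:

```java
public class JarUrlProtocols {

    /**
     * Returns true when a classloader resource URL protocol indicates the
     * resource lives inside a packaged archive. Tomcat reports "jar" for such
     * URLs, while WebLogic reports "zip"; a jar-only check is what left the
     * jar path null and triggered the NullPointerException above.
     */
    public static boolean isArchiveProtocol(String protocol) {
        return "jar".equals(protocol) || "zip".equals(protocol);
    }

    public static void main(String[] args) {
        System.out.println(isArchiveProtocol("jar"));  // true  (Tomcat case)
        System.out.println(isArchiveProtocol("zip"));  // true  (WebLogic case)
        System.out.println(isArchiveProtocol("file")); // false
    }
}
```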


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Attachment: TableMapReduceUtil.class
TableMapReduceUtil.java

Attached are the Java source and the compiled class file; replace the class 
file in your hbase-xx.jar with this one.

> weblogic deployment project implementation under the mapreduce hbase reported 
> a NullPointer error 
> --
>
> Key: HBASE-9913
> URL: https://issues.apache.org/jira/browse/HBASE-9913
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2
>Affects Versions: 0.94.10
> Environment: weblogic windows
>Reporter: 刘泓
>  Labels: patch
> Fix For: 0.94.10
>
> Attachments: TableMapReduceUtil.class, TableMapReduceUtil.java
>
>
> java.lang.NullPointerException
>   at java.io.File.(File.java:222)
>   at java.util.zip.ZipFile.(ZipFile.java:75)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
>   at 
> com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
>   at 
> com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
>   at 
> weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
>   at 
> weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
>   at 
> weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
>   at 
> weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
>   at 
> weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
>   at 
> weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
>   at 
> weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
>   at 
> weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
>   at 
> weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
>   at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
>   at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
> 
> By tracing the HBase source with breakpoints under both Tomcat and WebLogic, 
> we found that TableMapReduceUtil.findOrCreateJar returned a null string. 
> Under Tomcat, getProtocol() on the jar URL returns "jar", but under WebLogic 
> it returns "zip", so we added a check for the "zip" protocol.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at 
com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at 
com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at 
weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at 
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at 
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at 
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at 
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 
weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at 
weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

> 

By stepping through the HBase source under WebLogic, we found that 
TableMapReduceUtil.findOrCreateJar returned a null string: under Tomcat, 
getProtocol() on a jar URL returns "jar", but under WebLogic it returns 
"zip", so we added a check for the "zip" protocol.


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at 
com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at 
com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at 
weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at 
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at 
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at 
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at 
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 
weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at 
weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

> 

By stepping through the HBase source under WebLogic, we found that 
TableMapReduceUtil.findOrCreateJar returned a null string: under Tomcat a jar 
file's URL protocol is "jar", but under WebLogic it is "zip", and the 
findOrCreateJar method cannot resolve the "zip" protocol, so we should add a 
check for it.


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointer error

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at 
com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at 
com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at 
weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at 
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at 
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at 
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at 
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 
weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at 
weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

> 

By stepping through the HBase source under WebLogic, we found that 
TableMapReduceUtil.findOrCreateJar returned a null string: under Tomcat a jar 
file's URL protocol is "jar", but under WebLogic it is "zip", and the 
findOrCreateJar method cannot resolve the "zip" protocol, so we should add a 
check for it.


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException

2013-11-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Summary: weblogic deployment project implementation under the mapreduce 
hbase reported a NullPointerException  (was: weblogic deployment project 
implementation under the mapreduce hbase reported a NullPointer error )

> weblogic deployment project implementation under the mapreduce hbase reported 
> a NullPointerException
> 
>
> Key: HBASE-9913
> URL: https://issues.apache.org/jira/browse/HBASE-9913
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2
>Affects Versions: 0.94.10
> Environment: weblogic windows
>Reporter: 刘泓
>  Labels: patch
> Fix For: 0.94.10
>
> Attachments: TableMapReduceUtil.class, TableMapReduceUtil.java
>
>
> java.lang.NullPointerException
>   at java.io.File.<init>(File.java:222)
>   at java.util.zip.ZipFile.<init>(ZipFile.java:75)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
>   at 
> com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
>   at 
> com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
>   at 
> weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
>   at 
> weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
>   at 
> weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
>   at 
> weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
>   at 
> weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
>   at 
> weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
>   at 
> weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
>   at 
> weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
>   at 
> weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
>   at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
>   at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
> > 
> By stepping through the HBase source under WebLogic, we found that 
> TableMapReduceUtil.findOrCreateJar returned a null string: under Tomcat a 
> jar file's URL protocol is "jar", but under WebLogic it is "zip", and the 
> findOrCreateJar method cannot resolve the "zip" protocol, so we should add 
> a check for it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException

2013-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Description: 
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
at 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
at 
com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
at 
com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at 
weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at 
weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
at 
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at 
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at 
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
at 
weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
at 
weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

> 

My project is deployed under WebLogic 11, and when I run an HBase MapReduce 
job it throws a NullPointerException. I found that the method 
TableMapReduceUtil.findContainingJar() returns null. Debugging it, 
url.getProtocol() returns "zip" even though the file is a jar, so the 
condition if ("jar".equals(url.getProtocol())) is never entered. I added a 
condition to also handle the "zip" protocol.
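The protocol check described here can be sketched as follows. This is an 
illustrative helper, not the actual TableMapReduceUtil code; the class name, 
method name, and the simplified archive-path parsing are assumptions:

```java
// Sketch of the fix described above: accept both the "jar" protocol
// (reported by Tomcat) and the "zip" protocol (reported by WebLogic)
// when extracting the archive path from a class resource URL.
public class JarProtocolCheck {

    // Returns the filesystem path of the containing archive, or null if
    // the protocol is neither "jar" nor "zip". The original code accepted
    // only "jar", so under WebLogic it returned null, which caused the
    // NullPointerException further down the call chain.
    static String containingArchivePath(String protocol, String path) {
        if (!"jar".equals(protocol) && !"zip".equals(protocol)) {
            return null;
        }
        // A jar-style path looks like "file:/path/to/app.jar!/com/Foo.class":
        // strip the in-archive entry after "!/" and the "file:" prefix.
        int bang = path.indexOf("!/");
        if (bang >= 0) {
            path = path.substring(0, bang);
        }
        if (path.startsWith("file:")) {
            path = path.substring("file:".length());
        }
        return path;
    }
}
```

With this shape, a URL reported as "zip" by WebLogic yields the same archive 
path as a "jar" URL under Tomcat instead of null.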



[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException

2013-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

刘泓 updated HBASE-9913:
--

Status: Patch Available  (was: Reopened)

> weblogic deployment project implementation under the mapreduce hbase reported 
> a NullPointerException
> 
>
> Key: HBASE-9913
> URL: https://issues.apache.org/jira/browse/HBASE-9913
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, mapreduce
>Affects Versions: 0.94.10
> Environment: weblogic windows
>Reporter: 刘泓
> Attachments: TableMapReduceUtil.class, TableMapReduceUtil.java
>
>
> java.lang.NullPointerException
>   at java.io.File.<init>(File.java:222)
>   at java.util.zip.ZipFile.<init>(ZipFile.java:75)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87)
>   at 
> com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163)
>   at 
> com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
>   at 
> weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
>   at 
> weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
>   at 
> weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
>   at 
> weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
>   at 
> weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
>   at 
> weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
>   at 
> weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
>   at 
> weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
>   at 
> weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
>   at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
>   at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
> > 
> My project is deployed under WebLogic 11, and when I run an HBase MapReduce 
> job it throws a NullPointerException. I found that the method 
> TableMapReduceUtil.findContainingJar() returns null. Debugging it, 
> url.getProtocol() returns "zip" even though the file is a jar, so the 
> condition if ("jar".equals(url.getProtocol())) is never entered. I added a 
> condition to also handle the "zip" protocol.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-8793) Hbase

2013-06-24 Thread JIRA
Michael Czerwiński created HBASE-8793:
-

 Summary: Hbase 
 Key: HBASE-8793
 URL: https://issues.apache.org/jira/browse/HBASE-8793
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.94.6
 Environment: Description:Ubuntu 12.04.2 LTS
Hbase: 0.94.6+96-1.cdh4.3.0.p0.13~precise-cdh4.3.0
Reporter: Michael Czerwiński
Priority: Minor


The hbase-regionserver startup script always returns 0 ('exit 0' at the end 
of the script). This is wrong behaviour and makes it impossible to recognise 
the true status of the service.
Replacing it with 'exit $?' seems to fix the problem; in the hbase-master 
script, return codes are assigned to a RETVAL variable which is then passed 
to exit.

Not sure if the problem exists in other versions.

> /etc/init.d/hbase-regionserver.orig status
hbase-regionserver is not running.
> echo $?
0

After fix:

> /etc/init.d/hbase-regionserver status
hbase-regionserver is not running.
> echo $?
1
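The difference between the two exit styles can be demonstrated with a small 
sketch (the status function and its message are illustrative, not the real 
init script):

```shell
#!/bin/bash
# Illustrative status function: reports "not running" and returns 1,
# mimicking the init script's status check.
status() {
  echo "hbase-regionserver is not running."
  return 1
}

# Broken pattern: an unconditional "exit 0" discards the real status.
( status > /dev/null; exit 0 ); broken=$?

# Fixed pattern: "exit $?" propagates the status function's return code.
( status > /dev/null; exit $? ); fixed=$?

echo "broken=$broken fixed=$fixed"   # prints "broken=0 fixed=1"
```

The fixed pattern makes `service hbase-regionserver status` usable from 
monitoring tools, which rely on the exit code rather than the message text.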


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8793) Regionserver ubuntu's startup script return code always 0

2013-06-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Czerwiński updated HBASE-8793:
--

Summary: Regionserver ubuntu's startup script return code always 0  (was: 
Hbase )

> Regionserver ubuntu's startup script return code always 0
> -
>
> Key: HBASE-8793
> URL: https://issues.apache.org/jira/browse/HBASE-8793
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.94.6
> Environment: Description:Ubuntu 12.04.2 LTS
> Hbase: 0.94.6+96-1.cdh4.3.0.p0.13~precise-cdh4.3.0
>Reporter: Michael Czerwiński
>Priority: Minor
>
> The hbase-regionserver startup script always returns 0 ('exit 0' at the end 
> of the script). This is wrong behaviour and makes it impossible to recognise 
> the true status of the service.
> Replacing it with 'exit $?' seems to fix the problem; in the hbase-master 
> script, return codes are assigned to a RETVAL variable which is then passed 
> to exit.
> Not sure if the problem exists in other versions.
> > /etc/init.d/hbase-regionserver.orig status
> hbase-regionserver is not running.
> > echo $?
> 0
> After fix:
> > /etc/init.d/hbase-regionserver status
> hbase-regionserver is not running.
> > echo $?
> 1

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8793) Regionserver ubuntu's startup script return code always 0

2013-06-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13691960#comment-13691960
 ] 

Michael Czerwiński commented on HBASE-8793:
---

Well, I think the whole script is different from the one you are using (see 
below). The problem is that when checking status, the status() function only 
"returns", and it should probably call exit with a return code. Because the 
return value is not handled in any way, "exit 0" (the last line) takes 
effect, reporting an incorrect service status.
The package comes from Cloudera's CDH4.
The package comes from Cloudera's CDH4.

--- CUT HERE ---

#! /bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This file is used to run multiple instances of certain HBase daemons using init scripts.
# It replaces the local-regionserver.sh and local-master.sh scripts for Bigtop packages.
# By default, this script runs a single daemon normally. If offsets are provided, additional
# daemons are run, identified by the offset in log and pid files, and listening on the default
# port + the offset. Offsets can be provided as arguments when invoking init scripts directly:
#
# /etc/init.d/hbase-regionserver start 1 2 3 4
#
# or you can list the offsets to run in /etc/init.d/regionserver_offsets:
#
# echo "regionserver_OFFSETS='1 2 3 4'" >> /etc/default/hbase
# sudo service hbase-$HBASE_DAEMON@ start
#
# Offsets specified on the command-line always override the offsets file. If no offsets are
# specified on the command-line when stopping or restarting daemons, all running instances of the
# daemon are stopped (regardless of the contents of the offsets file).

# chkconfig: 2345 87 13
# description: Summary: HBase is the Hadoop database. Use it when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
# processname: HBase
#
### BEGIN INIT INFO
# Provides:  hbase-regionserver
# Required-Start:$network $local_fs $remote_fs
# Required-Stop: $remote_fs
# Should-Start:  $named
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop:  0 1 6
# Short-Description: Hadoop HBase regionserver daemon
### END INIT INFO

. /etc/default/hadoop
. /etc/default/hbase

# Autodetect JAVA_HOME if not defined
. /usr/lib/bigtop-utils/bigtop-detect-javahome

# Our default HBASE_HOME, HBASE_PID_DIR and HBASE_CONF_DIR
export HBASE_HOME=${HBASE_HOME:-/usr/lib/hbase}
export HBASE_PID_DIR=${HBASE_PID_DIR:-/var/run/hbase}
export HBASE_LOG_DIR=${HBASE_LOG_DIR:-/var/log/hbase}

install -d -m 0755 -o hbase -g hbase ${HBASE_PID_DIR}

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON_SCRIPT=$HBASE_HOME/bin/hbase-daemon.sh
NAME=hbase-regionserver
DESC="Hadoop HBase regionserver daemon"
PID_FILE=$HBASE_PID_DIR/hbase-hbase-regionserver.pid
CONF_DIR=/etc/hbase/conf

DODTIME=3   # Time to wait for the server to die, in seconds
# If this value is set too low you might not
# let some servers die gracefully and
# 'restart' will not work

UPPERCASE_HBASE_DAEMON=$(echo regionserver | tr '[:lower:]' '[:upper:]')

ALL_DAEMONS_RUNNING=0
NO_DAEMONS_RUNNING=1
SOME_OFFSET_DAEMONS_FAILING=2
INVALID_OFFSETS_PROVIDED=3

# These limits are not easily configurable - they are enforced by HBase
if [ "regionserver" == "master" ] ; then
FIRST_PORT=60000
FIRST_INFO_PORT=60010
OFFSET_LIMIT=10
elif [ "regionserver" == "regionserver" ] ; then
FIRST_PORT=60200
FIRST_INFO_PORT=60300
OFFSET_LIMIT=100
fi

validate_offsets() {
for OFFSET in $1; do
if [[ ! $OFFSET =~ ^((0)|([1-9][0-9]{0,2}))$ ]]; then
        echo "ERROR: All offsets must be positive integers (no leading zeros, max $OFFSET_LIMIT)"
exit $INVALID_OFFSETS_PROVIDED
fi
if [ ${OFFSET} -lt 0 ] ; then
echo "ERROR: Cannot start regionserver with negativ

[jira] [Commented] (HBASE-8793) Regionserver ubuntu's startup script return code always 0

2013-06-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13691963#comment-13691963
 ] 

Michael Czerwiński commented on HBASE-8793:
---

The paste did not work out well, try this: http://pastebin.com/cm9g3mtr

> Regionserver ubuntu's startup script return code always 0
> -
>
> Key: HBASE-8793
> URL: https://issues.apache.org/jira/browse/HBASE-8793
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.94.6
> Environment: Description:Ubuntu 12.04.2 LTS
> Hbase: 0.94.6+96-1.cdh4.3.0.p0.13~precise-cdh4.3.0
>Reporter: Michael Czerwiński
>Priority: Minor
>
> hbase-regionserver startup script always returns 0 (exit 0 at the end of the 
> script) this is wrong behaviour which causes issues when trying to recognise 
> true status of the service.
> Replacing it with 'exit $?' seems to fix the problem, looking at hbase master 
> return codes are assigned to RETVAL variable which is used with exit.
> Not sure if the problem exist in other versions.
> > /etc/init.d/hbase-regionserver.orig status
> hbase-regionserver is not running.
> > echo $?
> 0
> After fix:
> > /etc/init.d/hbase-regionserver status
> hbase-regionserver is not running.
> > echo $?
> 1

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8793) Regionserver ubuntu's startup script return code always 0

2013-06-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13691985#comment-13691985
 ] 

Michael Czerwiński commented on HBASE-8793:
---

I understand, sounds good. If you can drop me a link, that would be great. 
Thanks again for your time.

> Regionserver ubuntu's startup script return code always 0
> -
>
> Key: HBASE-8793
> URL: https://issues.apache.org/jira/browse/HBASE-8793
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.94.6
> Environment: Description:Ubuntu 12.04.2 LTS
> Hbase: 0.94.6+96-1.cdh4.3.0.p0.13~precise-cdh4.3.0
>Reporter: Michael Czerwiński
>Priority: Minor
>
> hbase-regionserver startup script always returns 0 (exit 0 at the end of the 
> script) this is wrong behaviour which causes issues when trying to recognise 
> true status of the service.
> Replacing it with 'exit $?' seems to fix the problem, looking at hbase master 
> return codes are assigned to RETVAL variable which is used with exit.
> Not sure if the problem exist in other versions.
> > /etc/init.d/hbase-regionserver.orig status
> hbase-regionserver is not running.
> > echo $?
> 0
> After fix:
> > /etc/init.d/hbase-regionserver status
> hbase-regionserver is not running.
> > echo $?
> 1



[jira] [Commented] (HBASE-2502) HBase won't bind to designated interface when more than one network interface is available

2014-05-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990402#comment-13990402
 ] 

Robert Jäschke commented on HBASE-2502:
---

Is there a workaround or are there any plans to fix this issue? It is really a 
big problem for cluster setups with several network interfaces.

> HBase won't bind to designated interface when more than one network interface 
> is available
> --
>
> Key: HBASE-2502
> URL: https://issues.apache.org/jira/browse/HBASE-2502
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> See this message by Michael Segel up on the list: 
> http://www.mail-archive.com/hbase-user@hadoop.apache.org/msg10042.html
> This comes up from time to time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralBy

2014-05-06 Thread JIRA
André Kelpe created HBASE-11118:
---

 Summary: non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"
 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe


I am running into the problem described in 
https://issues.apache.org/jira/browse/HBASE-10304 while trying to use a newer 
version within cascading.hbase (https://github.com/cascading/cascading.hbase).

One of the features of cascading.hbase is that you can use it from lingual 
(http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. lingual 
has a notion of providers, which are fat jars that we pull down dynamically at 
runtime. Those jars give users the ability to talk to any system or format from 
SQL. They are added to the classpath programmatically before we submit jobs to 
a hadoop cluster.

Since lingual does not know upfront which providers are going to be used in a 
given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
clunky and breaks the ease of use we had before. No other provider requires 
this right now.

It would be great to have a programmatic way to fix this when using fat jars.
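For context, the HADOOP_CLASSPATH trick from HBASE-10304 that this issue wants to avoid looks roughly like the following sketch; the jar location and job jar name are hypothetical placeholders, not values from this report.

```shell
# Sketch of the HADOOP_CLASSPATH workaround from HBASE-10304 (paths and job
# jar are hypothetical): putting the hbase-protocol jar first on the classpath
# makes ZeroCopyLiteralByteString and LiteralByteString load from the same
# jar, avoiding the IllegalAccessError.
HBASE_PROTOCOL_JAR=/usr/lib/hbase/lib/hbase-protocol.jar
export HADOOP_CLASSPATH="$HBASE_PROTOCOL_JAR${HADOOP_CLASSPATH:+:$HADOOP_CLASSPATH}"
echo "HADOOP_CLASSPATH=$HADOOP_CLASSPATH"
# then launch the job, e.g.:
# hadoop jar my-job.jar com.example.MyJob
```

Because the variable must be exported before every job launch, a setup like lingual's, which discovers provider jars at runtime, cannot easily use this.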





[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-05-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991026#comment-13991026
 ] 

André Kelpe commented on HBASE-11118:
-

For the provider mechanism:

Lingual is a cascading app that submits itself to the cluster. You can use a 
provider to talk to different data formats/sources. You basically tell lingual 
in the table definition "this table is actually in HBase, use this jar file 
over there to talk to it". lingual itself does not really know about HBase or 
any other format/system, only HDFS and delimited files. We create fat jars for 
those providers to keep the dependency fetching "sane". Only lingual uses those 
jars. We could definitely make them shaded jars, but that will not work in this 
case, since protobuf is the mechanism used to talk to the HBase cluster.

Next to that, my understanding of the problem at hand is that it also breaks 
classic hadoop jars with lib folders. Over those we have no control, since our 
users are just going to use cascading.hbase, build a jar, and submit it to the 
cluster.

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
> Fix For: 0.98.3
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath  programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront , which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatical way to fix this, when using fat 
> jars.






[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14032427#comment-14032427
 ] 

André Kelpe commented on HBASE-11118:
-

[~t...@lipcon.org] Cascading uses the JobClient to submit the jobs on the fly, 
but we don't call the hadoop shell wrapper for each job, like hive does. All 
submission is handled internally. The initial program gets started with 
hadoop/yarn jar myJar, and then we handle everything ourselves. In the lingual 
case, it is even more complicated, since we fetch jars dynamically at runtime 
and add them to the classpath of the jobs, which means we cannot know upfront 
whether the job is going to use cascading-hbase or not.

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, 
> shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath  programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront , which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatical way to fix this, when using fat 
> jars.





[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037318#comment-14037318
 ] 

André Kelpe commented on HBASE-11118:
-

[~stack] : We do have the hbase-server deps in the jars. This is due to the 
fact that we are based on a fork of the mapred InputFormat/OutputFormat, which 
uses the TableSplit class to calculate the InputSplits. The problem is that we 
cannot use the mapreduce apis in cascading and our version of the Input and 
OutputFormat has evolved and we want to keep it for backwards compatibility.


If there is a way to get this working w/o the hbase-server deps, I am more than 
happy to drop it. The code is here, if that helps: 
https://github.com/Cascading/cascading.hbase

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 1118.suggested.undoing.optimization.on.clientside.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath  programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront , which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatical way to fix this, when using fat 
> jars.





[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038719#comment-14038719
 ] 

André Kelpe commented on HBASE-11118:
-

[~stack] I have tried getting this to work. I took the master, applied the 
patch, created a tarball, as explained here: 
http://hbase.apache.org/book/releasing.html#maven.release and tried to fire up 
a cluster. The regionservers come up cleanly, but the master is not available 
on http://host:60010.  I can't find anything in the logs. Is there something I 
overlooked?

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 8.bytestringer.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath  programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront , which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatical way to fix this, when using fat 
> jars.





[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038932#comment-14038932
 ] 

André Kelpe commented on HBASE-11118:
-

[~stack] I figured out where the master web UI is now, but I fail to put 
working jars in my local maven repo. I can't find a way to resolve 
${compat.module} correctly. What is the exact invocation I need to get this 
working?

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 8.bytestringer.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath  programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront , which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatical way to fix this, when using fat 
> jars.





[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041156#comment-14041156
 ] 

André Kelpe commented on HBASE-11118:
-

Here is what I did:

{code}
mvn versions:set -DnewVersion=0.99-Cascading
mvn install -DskipTests site assembly:single -Prelease
{code}

I then took the binary tarball to deploy a cluster. Now if I want to rebuild 
the cascading.hbase module, it always fails with an error related to 
compat.module. It seems the variable hasn't been expanded, and that confuses 
gradle:




> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 8.bytestringer.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath  programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront , which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatical way to fix this, when using fat 
> jars.





[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041160#comment-14041160
 ] 

André Kelpe commented on HBASE-11118:
-

Sorry, forgot the gradle error:

{code}
 $ gradle cascading-hbase-hadoop2-mr1:build
The TaskContainer.add() method has been deprecated and is scheduled to be 
removed in Gradle 2.0. Please use the create() method instead.
:cascading-hbase-hadoop2-mr1:compileJava

FAILURE: Build failed with an exception.

* What went wrong:
Could not resolve all dependencies for configuration 
':cascading-hbase-hadoop2-mr1:compile'.
> Could not resolve org.apache.hbase:${compat.module}:0.99-CASCADING.
  Required by:
  cascading-hbase:cascading-hbase-hadoop2-mr1:2.5.0-wip-dev > 
org.apache.hbase:hbase-server:0.99-CASCADING
  cascading-hbase:cascading-hbase-hadoop2-mr1:2.5.0-wip-dev > 
org.apache.hbase:hbase-server:0.99-CASCADING > 
org.apache.hbase:hbase-prefix-tree:0.99-CASCADING
   > java.lang.NullPointerException (no error message)
   > java.lang.IllegalArgumentException (no error message)
   > java.lang.IllegalArgumentException (no error message)
   > java.lang.IllegalArgumentException (no error message)

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 8.848 secs
{code}

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 8.bytestringer.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath  programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront , which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatical way to fix this, when using fat 
> jars.





[jira] [Created] (HBASE-11426) The HBase project

2014-06-27 Thread JIRA
Lanylldo Codó created HBASE-11426:
-

 Summary: The HBase project
 Key: HBASE-11426
 URL: https://issues.apache.org/jira/browse/HBASE-11426
 Project: HBase
  Issue Type: Task
  Components: hadoop2
Affects Versions: hbase-7290
 Environment: hbase test
Reporter: Lanylldo Codó
Priority: Trivial
 Fix For: hbase-10070








[jira] [Created] (HBASE-11427) hbase

2014-06-27 Thread JIRA
Lanylldo Codó created HBASE-11427:
-

 Summary: hbase
 Key: HBASE-11427
 URL: https://issues.apache.org/jira/browse/HBASE-11427
 Project: HBase
  Issue Type: Task
  Components: hadoop2
Affects Versions: hbase-7290
 Environment: hbase test
Reporter: Lanylldo Codó
Priority: Trivial
 Fix For: hbase-10070


hbase project





[jira] [Created] (HBASE-11428) hbase project

2014-06-27 Thread JIRA
Lanylldo Codó created HBASE-11428:
-

 Summary: hbase project
 Key: HBASE-11428
 URL: https://issues.apache.org/jira/browse/HBASE-11428
 Project: HBase
  Issue Type: Task
  Components: hadoop2
Affects Versions: hbase-7290
 Environment: hbase test
Reporter: Lanylldo Codó
Priority: Trivial
 Fix For: hbase-10070








[jira] [Created] (HBASE-11429) Lanyllo's test project

2014-06-27 Thread JIRA
Lanylldo Codó created HBASE-11429:
-

 Summary: Lanyllo's test project
 Key: HBASE-11429
 URL: https://issues.apache.org/jira/browse/HBASE-11429
 Project: HBase
  Issue Type: Task
  Components: hadoop2
Affects Versions: hbase-7290
 Environment: hbase test
Reporter: Lanylldo Codó
Priority: Trivial
 Fix For: hbase-10070


Layla test





[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048300#comment-14048300
 ] 

André Kelpe commented on HBASE-11118:
-

Sorry for not coming back to this. I haven't been able to test this, since I am 
busy with more important matters. If you could point me to a snapshot build 
that I could use, I can give it a spin. I seem to be unable to build a custom 
HBase.

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 8.bytestringer.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> 1118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath  programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront , which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatical way to fix this, when using fat 
> jars.





[jira] [Commented] (HBASE-10174) Back port HBASE-9667 'NullOutputStream removed from Guava 15' to 0.94

2013-12-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-10174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850272#comment-13850272
 ] 

Kristoffer Sjögren commented on HBASE-10174:


The production environment is not modified at all. This happens when doing 
in-container tests in our application and starting an HMaster as part of those 
tests, like so (plus some extra stuff):

{code:java}
new HMasterCommandLine(HMaster.class).run(new String[]{"start"});
{code}
This means that tests (client side) will execute in the same classloader as 
the HMaster. Here are the maven dependencies.
{code:xml}
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.7</version>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.7</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.1.2</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-test</artifactId>
  <version>1.0.0</version>
  <scope>test</scope>
</dependency>
{code}
Im not sure if the client or server breaks here. But it looks like its on the 
server side?

The thing is that we want the HMaster lifecycle (starting, stopping, table 
creation, etc.) to be managed _as part of the tests_ without worrying too much 
about it. This way test executions (developer machines and CI servers) can have 
their own isolated HBase.

I'm open to any suggestions on how to do this.

> Back port HBASE-9667 'NullOutputStream removed from Guava 15' to 0.94
> -
>
> Key: HBASE-10174
>     URL: https://issues.apache.org/jira/browse/HBASE-10174
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.94.16
>
> Attachments: 10174-v2.txt, 10174-v3.txt, 9667-0.94.patch
>
>
> On user mailing list under the thread 'Guava 15', Kristoffer Sjögren reported 
> NoClassDefFoundError when he used Guava 15.
> The issue has been fixed in 0.96 + by HBASE-9667
> This JIRA ports the fix to 0.94 branch



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (HBASE-14330) Regular Expressions cause ipc.CallTimeoutException

2015-08-27 Thread JIRA
茂军王 created HBASE-14330:
---

 Summary: Regular Expressions cause  ipc.CallTimeoutException
 Key: HBASE-14330
 URL: https://issues.apache.org/jira/browse/HBASE-14330
 Project: HBase
  Issue Type: Bug
  Components: Client, Filters, IPC/RPC
Affects Versions: 1.0.1
 Environment: CDH5.4.0
hbase-client-1.0.0-cdh5.4.0
Reporter: 茂军王


Appear "ipc.CallTimeoutException" When I use scan with RowFilter.
The RowFilter use regular expression ".*_10_version$".
The below is my code:
public static void main(String[] args) {
    Scan scan = new Scan();
    scan.setStartRow("2014-12-01".getBytes());
    scan.setStopRow("2015-01-01".getBytes());
    /*
     * when rowPattern = "\\d{4}-\\d{2}-\\d{2}_10_version$", everything is ok
     * when rowPattern = ".*_10_version$", error: rpc timeout
     */
    // String rowPattern = "\\d{4}-\\d{2}-\\d{2}_10_version$";
    String rowPattern = ".*_10_version$";
    Filter myRowfilter = new RowFilter(CompareFilter.CompareOp.EQUAL,
            new RegexStringComparator(rowPattern));
    List<Filter> myFilterList = new ArrayList<Filter>();
    myFilterList.add(myRowfilter);
    FilterList filterList = new FilterList(myFilterList);
    scan.setFilter(filterList);

    TableName tn = TableName.valueOf("oneday");
    Table t = null;
    ResultScanner rs = null;
    long i = 0L;
    try {
        t = HBaseUtil.getHTable(tn);
        rs = t.getScanner(scan);
        Iterator<Result> iter = rs.iterator();
        while (iter.hasNext()) {
            Result r = iter.next();
            i++;
        }
        System.out.println(i);
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        HBaseUtil.closeTable(t);
    }
}
Below is the error:
Exception in thread "main" java.lang.RuntimeException: 
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=36, exceptions:
Fri Aug 28 08:17:23 CST 2015, null, java.net.SocketTimeoutException: 
callTimeout=6, callDuration=60308: row '2014-12-01' on table 'oneday' at 
region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
 hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648

at 
org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:97)
at com.jj.door.ScanFilterDoor.main(ScanFilterDoor.java:58)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed 
after attempts=36, exceptions:
Fri Aug 28 08:17:23 CST 2015, null, java.net.SocketTimeoutException: 
callTimeout=6, callDuration=60308: row '2014-12-01' on table 'oneday' at 
region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
 hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648

at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:270)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:203)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at 
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:294)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:374)
at 
org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
... 1 more
Caused by: java.net.SocketTimeoutException: callTimeout=6, 
callDuration=60308: row '2014-12-01' on table 'oneday' at 
region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
 hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.j

[jira] [Updated] (HBASE-14330) Regular Expressions cause ipc.CallTimeoutException

2015-08-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-14330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

茂军王 updated HBASE-14330:

Release Note: 
 when rowPattern = "\\d{4}-\\d{2}-\\d{2}_10_version$", everything is ok

 when rowPattern = ".*_10_version$", error: rpc timeout

  was:
 when rowPattern = "\\d{4}-\\d{2}-\\d{2}_10_version$", everything is ok
 when rowPattern = ".*_10_version$", error: rpc timeout


> Regular Expressions cause  ipc.CallTimeoutException
> ---
>
> Key: HBASE-14330
> URL: https://issues.apache.org/jira/browse/HBASE-14330
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Filters, IPC/RPC
>Affects Versions: 1.0.1
> Environment: CDH5.4.0
> hbase-client-1.0.0-cdh5.4.0
>Reporter: 茂军王
>  Labels: performance
>
> Appear "ipc.CallTimeoutException" When I use scan with RowFilter. The 
> RowFilter use regular expression ".*_10_version$".
> The below is my code:
> public static void main(String[] args) {
>   Scan scan = new Scan();
>   scan.setStartRow("2014-12-01".getBytes());
>   scan.setStopRow("2015-01-01".getBytes());
>   
>   String rowPattern = ".*_10_version$";
>   Filter myRowfilter = new RowFilter(CompareFilter.CompareOp.EQUAL, 
>   new RegexStringComparator(rowPattern));
>   List<Filter> myFilterList = new ArrayList<Filter>();
>   myFilterList.add(myRowfilter);
>   FilterList filterList = new FilterList(myFilterList);
>   scan.setFilter(filterList);
>   
>   TableName tn = TableName.valueOf("oneday");
>   Table t = null;
>   ResultScanner rs = null;
>   Long i = 0L;
>   try {
>   t = HBaseUtil.getHTable(tn);
>   rs = t.getScanner(scan);
>   Iterator<Result> iter = rs.iterator();
>   
>   while(iter.hasNext()){
>   Result r = iter.next();
>   i++;
>   }
>   System.out.println(i);
>   } catch (IOException e) {
>   e.printStackTrace();
>   }finally{
>   HBaseUtil.closeTable(t);
>   }
> }
> Below is the error:
> Exception in thread "main" java.lang.RuntimeException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=36, exceptions:
> Fri Aug 28 08:17:23 CST 2015, null, java.net.SocketTimeoutException: 
> callTimeout=6, callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:97)
>   at com.jj.door.ScanFilterDoor.main(ScanFilterDoor.java:58)
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed 
> after attempts=36, exceptions:
> Fri Aug 28 08:17:23 CST 2015, null, java.net.SocketTimeoutException: 
> callTimeout=6, callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:270)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:203)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:294)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:374)
>   at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
>   ... 1 more
> Caused by: java.net.SocketTimeoutException: callTimeout=6, 
> callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   a

[jira] [Updated] (HBASE-14330) Regular Expressions cause ipc.CallTimeoutException

2015-08-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-14330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

茂军王 updated HBASE-14330:

Release Note: 
 when rowPattern = "\\d{4}-\\d{2}-\\d{2}_10_version$", everything is ok
 when rowPattern = ".*_10_version$", error: rpc timeout

  was:
 when rowPattern = "\\d{4}-\\d{2}-\\d{2}_10_version$", everything is ok

 when rowPattern = ".*_10_version$", error: rpc timeout


> Regular Expressions cause  ipc.CallTimeoutException
> ---
>
> Key: HBASE-14330
> URL: https://issues.apache.org/jira/browse/HBASE-14330
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Filters, IPC/RPC
>Affects Versions: 1.0.1
> Environment: CDH5.4.0
> hbase-client-1.0.0-cdh5.4.0
>Reporter: 茂军王
>  Labels: performance
>
> Appear "ipc.CallTimeoutException" When I use scan with RowFilter. The 
> RowFilter use regular expression ".*_10_version$".
> The below is my code:
> public static void main(String[] args) {
>   Scan scan = new Scan();
>   scan.setStartRow("2014-12-01".getBytes());
>   scan.setStopRow("2015-01-01".getBytes());
>   
>   String rowPattern = ".*_10_version$";
>   Filter myRowfilter = new RowFilter(CompareFilter.CompareOp.EQUAL, 
>   new RegexStringComparator(rowPattern));
>   List myFilterList = new ArrayList();
>   myFilterList.add(myRowfilter);
>   FilterList filterList = new FilterList(myFilterList);
>   scan.setFilter(filterList);
>   
>   TableName tn = TableName.valueOf("oneday");
>   Table t = null;
>   ResultScanner rs = null;
>   Long i = 0L;
>   try {
>   t = HBaseUtil.getHTable(tn);
>   rs = t.getScanner(scan);
>   Iterator<Result> iter = rs.iterator();
>   
>   while(iter.hasNext()){
>   Result r = iter.next();
>   i++;
>   }
>   System.out.println(i);
>   } catch (IOException e) {
>   e.printStackTrace();
>   }finally{
>   HBaseUtil.closeTable(t);
>   }
> }
> The error is below:
> Exception in thread "main" java.lang.RuntimeException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=36, exceptions:
> Fri Aug 28 08:17:23 CST 2015, null, java.net.SocketTimeoutException: 
> callTimeout=6, callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:97)
>   at com.jj.door.ScanFilterDoor.main(ScanFilterDoor.java:58)
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed 
> after attempts=36, exceptions:
> Fri Aug 28 08:17:23 CST 2015, null, java.net.SocketTimeoutException: 
> callTimeout=6, callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:270)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:203)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:294)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:374)
>   at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
>   ... 1 more
> Caused by: java.net.SocketTimeoutException: callTimeout=6, 
> callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   a

[jira] [Updated] (HBASE-14330) Regular Expressions cause ipc.CallTimeoutException

2015-08-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-14330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

茂军王 updated HBASE-14330:

Labels: performance  (was: test)


[jira] [Commented] (HBASE-14330) Regular Expressions cause ipc.CallTimeoutException

2015-08-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-14330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717841#comment-14717841
 ] 

茂军王 commented on HBASE-14330:
-

when rowPattern = "\\d{4}-\\d{2}-\\d{2}_10_version$", everything is OK;
when String rowPattern = ".*_10_version$", error: RPC timeout
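Both patterns accept exactly the same row keys; the difference is matching cost, not correctness. A pure-JDK sketch (no HBase dependency; the sample keys are hypothetical, modeled on the ones in the report) showing that the anchored digit pattern and the `.*` pattern agree, using `find()` substring semantics like RegexStringComparator:

```java
import java.util.regex.Pattern;

public class RowPatternDemo {
    // Anchored pattern: the engine fails fast when a key lacks a date-like prefix.
    public static final Pattern ANCHORED =
        Pattern.compile("\\d{4}-\\d{2}-\\d{2}_10_version$");
    // Leading .* forces the engine to try many start positions before rejecting a key.
    public static final Pattern DOT_STAR =
        Pattern.compile(".*_10_version$");

    public static boolean matches(Pattern p, String rowKey) {
        // Substring match (find), not a full-string match.
        return p.matcher(rowKey).find();
    }

    public static void main(String[] args) {
        String hit  = "2014-12-01_10_version";      // should match both patterns
        String miss = "2014-10-18_5_osversion_abc"; // should match neither
        System.out.println(matches(ANCHORED, hit) + " " + matches(DOT_STAR, hit));
        System.out.println(matches(ANCHORED, miss) + " " + matches(DOT_STAR, miss));
    }
}
```

Since both select the same keys, switching to the anchored pattern changes only how quickly non-matching keys are rejected on the server side.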


[jira] [Commented] (HBASE-14180) Change timeout - SocketTimeoutException because of callTimeout

2015-08-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717888#comment-14717888
 ] 

茂军王 commented on HBASE-14180:
-

1. I met a similar problem: 
https://issues.apache.org/jira/browse/HBASE-14330.

2. I changed my regular expression to solve the RPC timeout problem, but I don't 
know why that helped.

3. Is this helpful for you?

4. I look forward to your deep analysis of the HBase scanner; I want to know 
what causes the problem and how it arises.

> Change timeout - SocketTimeoutException because of callTimeout
> --
>
> Key: HBASE-14180
> URL: https://issues.apache.org/jira/browse/HBASE-14180
> Project: HBase
>  Issue Type: Bug
>  Components: hbase, regionserver, rpc, Zookeeper
>Affects Versions: 1.1.1
> Environment: Hadoop with Ambari 2.1.0
> HBase 1.1.1.2.3
> HDFS 2.7.1.2.3
> Zookeeper 3.4.6.2.3
> Phoenix 4.4.0.2.3
>Reporter: Adrià V.
>
> HBase keeps throwing a timeout exception; I have tried every configuration I 
> could think of to increase it.
> Partial stacktrace:
> {quote}
> Caused by: java.io.IOException: Call to 
> hdp-w-1.c.dks-hadoop.internal/10.240.2.235:16020 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, waitTime=60001, 
> operationTimeout=6 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1242)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:213)
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> ... 4 more
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, 
> waitTime=60001, operationTimeout=6 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1184)
> ... 13 more
> {quote}
> I've tried editing config files and also setting the following keys in Ambari 
> to increase the timeout, with no success:
> - hbase.rpc.timeout
> - dfs.socket.timeout
> - dfs.client.socket-timeout
> - zookeeper.session.timeout
> Also the Phoenix properties, but I think it's mostly an HBase issue:
> - phoenix.query.timeoutMs
> - phoenix.query.keepAliveMs
> Full stack trace: 
> {quote}
> Error: Encountered exception in sub plan [0] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
> at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:157)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:251)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:240)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1250)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
> exceptions:
> Mon Aug 03 16:47:06 UTC 2015, null, java.net.SocketTimeoutException: 
> callTimeout=6, c
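The stack above expires inside the scanner RPC, so the client-side RPC and scanner timeouts are the usual knobs for this symptom (rather than the ZooKeeper or DFS keys the reporter tried). A hbase-site.xml sketch of the commonly adjusted client-side keys; the property names are from standard HBase client configuration, but the values are examples only and availability varies by HBase version:

```xml
<!-- client-side hbase-site.xml; example values, in milliseconds -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value>   <!-- timeout for a single RPC -->
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value>   <!-- scanner-specific timeout for next() calls -->
</property>
<property>
  <name>hbase.client.operation.timeout</name>
  <value>300000</value>   <!-- total time budget for an operation across retries -->
</property>
```

Raising timeouts only hides a slow server-side scan; restricting the scan range or simplifying the filter (as in HBASE-14330 above) addresses the cause.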

[jira] [Created] (HBASE-13934) HBase Client Stuck in at ConnectionManager$HConnectionImplementation.locateRegion

2015-06-18 Thread JIRA
Attila Tőkés created HBASE-13934:


 Summary: HBase Client Stuck in at 
ConnectionManager$HConnectionImplementation.locateRegion
 Key: HBASE-13934
 URL: https://issues.apache.org/jira/browse/HBASE-13934
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Attila Tőkés
Priority: Blocker


The HBase client gets stuck when I try to execute a Put operation.

{code}
Thread [BenchmarkThread-0] (Suspended)  
owns: BufferedMutatorImpl  (id=43)  
Unsafe.park(boolean, long) line: not available [native method]  
LockSupport.park(Object) line: 186  
AbstractQueuedSynchronizer$ConditionObject.await() line: 2043   
ArrayBlockingQueue.take() line: 374  
BoundedCompletionService.take() line: 75 
ScannerCallableWithReplicas.call(int) line: 190 
ScannerCallableWithReplicas.call(int) line: 56  
RpcRetryingCaller.callWithoutRetries(RetryingCallable, int) line: 200 
ClientSmallReversedScanner.loadCache() line: 211
ClientSmallReversedScanner.next() line: 185 
ConnectionManager$HConnectionImplementation.locateRegionInMeta(TableName, 
byte[], boolean, boolean, int) line: 1200 
ConnectionManager$HConnectionImplementation.locateRegion(TableName, byte[], 
boolean, boolean, int) line: 1109   
AsyncProcess.submit(ExecutorService, TableName, List, boolean, 
Callback, boolean) line: 369   
AsyncProcess.submit(TableName, List, boolean, Callback, 
boolean) line: 320
BufferedMutatorImpl.backgroundFlushCommits(boolean) line: 206   
BufferedMutatorImpl.flush() line: 183   
HTable.flushCommits() line: 1436
HTable.put(Put) line: 1032  
HBaseClient.put(String, Map) line: 92
GenericColumnStoreBenchmark$ColumnFamilyBenchmarkTask.doOperation(String) 
line: 115 

GenericColumnStoreBenchmark$ColumnFamilyBenchmarkTask.init(ColumnStoreClient) 
line: 139 
GenericColumnStoreBenchmark$ColumnFamilyBenchmarkTask.init(DatabaseClient) 
line: 1  
MultiThreadedBenchmark$BenchmarkThread.doInit() line: 115   
MultiThreadedBenchmark$BenchmarkThread.run() line: 128  
{code}

Source code:

Connect:
{code}
this.config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", zookeeperHost);

Connection connection = ConnectionFactory.createConnection(config);
this.table = connection.getTable(TableName.valueOf(tableName));
{code}

Put:
{code}
final Put put = new Put(Bytes.toBytes(key));
for (Map.Entry<String, String> pair : columnValues.entrySet()) {
final String column = pair.getKey();
final String value = pair.getValue();
put.addColumn(columnFamily, Bytes.toBytes(column), 
Bytes.toBytes(value));
}

try {
table.put(put);
} catch (IOException e) {
throw new ClientException("put error", e);
}
{code}

Client log:

{code}
17:00:58,193  INFO ZooKeeper:438 - Initiating client connection, 
connectString=nosql-x64-node-1.local:2181 sessionTimeout=9 
watcher=hconnection-0x3018fc1a0x0, quorum=nosql-x64-node-1.local:2181, 
baseZNode=/hbase
17:00:58,325  INFO ClientCnxn:975 - Opening socket connection to server 
192.168.56.201/192.168.56.201:2181. Will not attempt to authenticate using SASL 
(unknown error)
17:00:58,329  INFO ClientCnxn:852 - Socket connection established to 
192.168.56.201/192.168.56.201:2181, initiating session
17:00:58,346  INFO ClientCnxn:1235 - Session establishment complete on server 
192.168.56.201/192.168.56.201:2181, sessionid = 0x14e06dbd6450020, negotiated 
timeout = 4
{code}

Server's log:
{code}
2015-06-18 17:12:28,183 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
server.NIOServerCnxn: Closed socket connection for client /192.168.56.1:35002 
which had sessionid 0x14e06dbd6450020
2015-06-18 17:12:30,001 INFO  [SessionTracker] server.ZooKeeperServer: Expiring 
session 0x14e06dbd645001d, timeout of 4ms exceeded
2015-06-18 17:12:30,002 INFO  [ProcessThread(sid:0 cport:-1):] 
server.PrepRequestProcessor: Processed session termination for sessionid: 
0x14e06dbd645001d
2015-06-18 17:12:31,078 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
server.NIOServerCnxnFactory: Accepted socket connection from /192.168.56.1:35130
2015-06-18 17:12:31,080 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
server.ZooKeeperServer: Client attempting to establish new session at 
/192.168.56.1:35130
2015-06-18 17:12:31,092 INFO  [SyncThread:0] server.ZooKeeperServer: 
Established session 0x14e06dbd6450021 with negotiated timeout 4 for client 
/192.168.56.1:35130
{code}

Happens with HBase running in both standalone and distributed mode.

Any idea what is causing this?

HBase version: 1.0.1 (client + server)






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
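The entrySet loop in the Put snippet above lost its generic type parameters in archiving. A pure-JDK sketch of the same iteration with the generics restored (hypothetical column names, no HBase types involved):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EntryLoop {
    // Mirrors the reporter's loop: one "column=value" cell per map entry.
    public static List<String> toCells(Map<String, String> columnValues) {
        List<String> cells = new ArrayList<String>();
        for (Map.Entry<String, String> pair : columnValues.entrySet()) {
            // With generics declared, getKey()/getValue() need no casts.
            cells.add(pair.getKey() + "=" + pair.getValue());
        }
        return cells;
    }

    public static void main(String[] args) {
        Map<String, String> cols = new LinkedHashMap<String, String>();
        cols.put("c1", "v1");
        cols.put("c2", "v2");
        // LinkedHashMap preserves insertion order, so the cells come out in put order.
        System.out.println(toCells(cols));
    }
}
```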


[jira] [Updated] (HBASE-13934) HBase Client Stuck at ConnectionManager$HConnectionImplementation.locateRegion

2015-06-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Tőkés updated HBASE-13934:
-
Summary: HBase Client Stuck at 
ConnectionManager$HConnectionImplementation.locateRegion  (was: HBase Client 
Stuck in at ConnectionManager$HConnectionImplementation.locateRegion)


[jira] [Updated] (HBASE-13934) HBase Client Stuck at ConnectionManager$HConnectionImplementation.locateRegion

2015-06-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Tőkés updated HBASE-13934:
-
Description: 
HBase Client get's stuck when I try to execute a PUT operation.

{code}
Thread [BenchmarkThread-0] (Suspended)  
owns: BufferedMutatorImpl  (id=43)  
Unsafe.park(boolean, long) line: not available [native method]  
LockSupport.park(Object) line: 186  
AbstractQueuedSynchronizer$ConditionObject.await() line: 2043   
ArrayBlockingQueue.take() line: 374  
BoundedCompletionService.take() line: 75 
ScannerCallableWithReplicas.call(int) line: 190 
ScannerCallableWithReplicas.call(int) line: 56  
RpcRetryingCaller.callWithoutRetries(RetryingCallable, int) line: 200 
ClientSmallReversedScanner.loadCache() line: 211
ClientSmallReversedScanner.next() line: 185 
ConnectionManager$HConnectionImplementation.locateRegionInMeta(TableName, 
byte[], boolean, boolean, int) line: 1200 
ConnectionManager$HConnectionImplementation.locateRegion(TableName, byte[], 
boolean, boolean, int) line: 1109   
AsyncProcess.submit(ExecutorService, TableName, List, boolean, 
Callback, boolean) line: 369   
AsyncProcess.submit(TableName, List, boolean, Callback, 
boolean) line: 320
BufferedMutatorImpl.backgroundFlushCommits(boolean) line: 206   
BufferedMutatorImpl.flush() line: 183   
HTable.flushCommits() line: 1436
HTable.put(Put) line: 1032  
{code}

Source code:

Connect:
{code}
this.config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", zookeeperHost);

Connection connection = ConnectionFactory.createConnection(config);
this.table = connection.getTable(TableName.valueOf(tableName));
{code}

Put:
{code}
final Put put = new Put(Bytes.toBytes(key));
for (Map.Entry pair : columnValues.entrySet()) {
final String column = pair.getKey();
final String value = pair.getValue();
put.addColumn(columnFamily, Bytes.toBytes(column), 
Bytes.toBytes(value));
}

try {
table.put(put);
} catch (IOException e) {
throw new ClientException("put error", e);
}
{code}

Client log:

{code}
17:00:58,193  INFO ZooKeeper:438 - Initiating client connection, 
connectString=nosql-x64-node-1.local:2181 sessionTimeout=9 
watcher=hconnection-0x3018fc1a0x0, quorum=nosql-x64-node-1.local:2181, 
baseZNode=/hbase
17:00:58,325  INFO ClientCnxn:975 - Opening socket connection to server 
192.168.56.201/192.168.56.201:2181. Will not attempt to authenticate using SASL 
(unknown error)
17:00:58,329  INFO ClientCnxn:852 - Socket connection established to 
192.168.56.201/192.168.56.201:2181, initiating session
17:00:58,346  INFO ClientCnxn:1235 - Session establishment complete on server 
192.168.56.201/192.168.56.201:2181, sessionid = 0x14e06dbd6450020, negotiated 
timeout = 4
{code}

Server's log:
{code}
2015-06-18 17:12:28,183 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
server.NIOServerCnxn: Closed socket connection for client /192.168.56.1:35002 
which had sessionid 0x14e06dbd6450020
2015-06-18 17:12:30,001 INFO  [SessionTracker] server.ZooKeeperServer: Expiring 
session 0x14e06dbd645001d, timeout of 4ms exceeded
2015-06-18 17:12:30,002 INFO  [ProcessThread(sid:0 cport:-1):] 
server.PrepRequestProcessor: Processed session termination for sessionid: 
0x14e06dbd645001d
2015-06-18 17:12:31,078 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
server.NIOServerCnxnFactory: Accepted socket connection from /192.168.56.1:35130
2015-06-18 17:12:31,080 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
server.ZooKeeperServer: Client attempting to establish new session at 
/192.168.56.1:35130
2015-06-18 17:12:31,092 INFO  [SyncThread:0] server.ZooKeeperServer: 
Established session 0x14e06dbd6450021 with negotiated timeout 4 for client 
/192.168.56.1:35130
{code}

Happens both with HBASE running in standalone and distributed mode.

Any idea what causing this?

HBase version: 1.0.1 (client + server)




  was:
HBase Client get's stuck when I try to execute a PUT operation.

{code}
Thread [BenchmarkThread-0] (Suspended)  
owns: BufferedMutatorImpl  (id=43)  
Unsafe.park(boolean, long) line: not available [native method]  
LockSupport.park(Object) line: 186  
AbstractQueuedSynchronizer$ConditionObject.await() line: 2043   
ArrayBlockingQueue.take() line: 374  
BoundedCompletionService.take() line: 75 
ScannerCallableWithReplicas.call(int) line: 190 
ScannerCallableWithReplicas.call(int) line: 56  
RpcRetryingCaller.callWithoutRetries(RetryingCallable, int) line: 200 
ClientSmallReversedScanner.loadCache() line: 211
ClientSmallReversedScanner.next() line: 185 
ConnectionManager$HConnectionImplementation.locateRegionInMeta(TableN

[jira] [Updated] (HBASE-13934) HBase Client Stuck at ConnectionManager$HConnectionImplementation.locateRegion

2015-06-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Tőkés updated HBASE-13934:
-
Affects Version/s: 1.0.1

> HBase Client Stuck at ConnectionManager$HConnectionImplementation.locateRegion
> --
>
> Key: HBASE-13934
> URL: https://issues.apache.org/jira/browse/HBASE-13934
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.1
>Reporter: Attila Tőkés
>Priority: Blocker
>
> HBase Client get's stuck when I try to execute a PUT operation.
> {code}
> Thread [BenchmarkThread-0] (Suspended)  
> owns: BufferedMutatorImpl  (id=43)  
> Unsafe.park(boolean, long) line: not available [native method]  
> LockSupport.park(Object) line: 186  
> AbstractQueuedSynchronizer$ConditionObject.await() line: 2043   
> ArrayBlockingQueue.take() line: 374  
> BoundedCompletionService.take() line: 75 
> ScannerCallableWithReplicas.call(int) line: 190 
> ScannerCallableWithReplicas.call(int) line: 56  
> RpcRetryingCaller.callWithoutRetries(RetryingCallable, int) line: 
> 200 
> ClientSmallReversedScanner.loadCache() line: 211
> ClientSmallReversedScanner.next() line: 185 
> ConnectionManager$HConnectionImplementation.locateRegionInMeta(TableName, 
> byte[], boolean, boolean, int) line: 1200 
> ConnectionManager$HConnectionImplementation.locateRegion(TableName, 
> byte[], boolean, boolean, int) line: 1109   
> AsyncProcess.submit(ExecutorService, TableName, List, boolean, 
> Callback, boolean) line: 369   
> AsyncProcess.submit(TableName, List, boolean, Callback, 
> boolean) line: 320
> BufferedMutatorImpl.backgroundFlushCommits(boolean) line: 206   
> BufferedMutatorImpl.flush() line: 183   
> HTable.flushCommits() line: 1436
> HTable.put(Put) line: 1032  
> {code}
> Source code:
> Connect:
> {code}
> this.config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", zookeeperHost);
> Connection connection = ConnectionFactory.createConnection(config);
> this.table = connection.getTable(TableName.valueOf(tableName));
> {code}
> Put:
> {code}
> final Put put = new Put(Bytes.toBytes(key));
> for (Map.Entry pair : columnValues.entrySet()) {
> final String column = pair.getKey();
> final String value = pair.getValue();
> put.addColumn(columnFamily, Bytes.toBytes(column), 
> Bytes.toBytes(value));
> }
> try {
> table.put(put);
> } catch (IOException e) {
> throw new ClientException("put error", e);
> }
> {code}
> Client log:
> {code}
> 17:00:58,193  INFO ZooKeeper:438 - Initiating client connection, 
> connectString=nosql-x64-node-1.local:2181 sessionTimeout=9 
> watcher=hconnection-0x3018fc1a0x0, quorum=nosql-x64-node-1.local:2181, 
> baseZNode=/hbase
> 17:00:58,325  INFO ClientCnxn:975 - Opening socket connection to server 
> 192.168.56.201/192.168.56.201:2181. Will not attempt to authenticate using 
> SASL (unknown error)
> 17:00:58,329  INFO ClientCnxn:852 - Socket connection established to 
> 192.168.56.201/192.168.56.201:2181, initiating session
> 17:00:58,346  INFO ClientCnxn:1235 - Session establishment complete on server 
> 192.168.56.201/192.168.56.201:2181, sessionid = 0x14e06dbd6450020, negotiated 
> timeout = 4
> {code}
> Server's log:
> {code}
> 2015-06-18 17:12:28,183 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
> server.NIOServerCnxn: Closed socket connection for client /192.168.56.1:35002 
> which had sessionid 0x14e06dbd6450020
> 2015-06-18 17:12:30,001 INFO  [SessionTracker] server.ZooKeeperServer: 
> Expiring session 0x14e06dbd645001d, timeout of 4ms exceeded
> 2015-06-18 17:12:30,002 INFO  [ProcessThread(sid:0 cport:-1):] 
> server.PrepRequestProcessor: Processed session termination for sessionid: 
> 0x14e06dbd645001d
> 2015-06-18 17:12:31,078 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
> server.NIOServerCnxnFactory: Accepted socket connection from 
> /192.168.56.1:35130
> 2015-06-18 17:12:31,080 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
> server.ZooKeeperServer: Client attempting to establish new session at 
> /192.168.56.1:35130
> 2015-06-18 17:12:31,092 INFO  [SyncThread:0] server.ZooKeeperServer: 
> Established session 0x14e06dbd6450021 with negotiated timeout 4 for 
> client /192.168.56.1:35130
> {code}
> Happens both with HBase running in standalone and in distributed mode.
> Any idea what's causing this?
> HBase version: 1.0.1 (client + server)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13934) HBase Client Stuck at ConnectionManager$HConnectionImplementation.locateRegion

2015-06-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Tőkés resolved HBASE-13934.
--
Resolution: Not A Problem

It seems that the client tried to connect to {{nosql-x64-node-1}}, even though 
{{nosql-x64-node-1.local}} was set.
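A short-name vs. FQDN mismatch like the one above can be made visible with a quick hostname-resolution probe before blaming the client. This is a minimal sketch: the hostnames are taken from the report, but the class itself is hypothetical and not part of HBase.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class QuorumHostCheck {
    // Print what each candidate quorum hostname resolves to, so a
    // short-name vs. FQDN mismatch (as in this issue) shows up immediately.
    public static void main(String[] args) {
        String[] hosts = {"nosql-x64-node-1", "nosql-x64-node-1.local"};
        for (String host : hosts) {
            try {
                InetAddress addr = InetAddress.getByName(host);
                System.out.println(host + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " -> unresolvable");
            }
        }
    }
}
```

If the short name and the `.local` name resolve to different addresses (or one does not resolve at all), the client will appear to hang while retrying region location against the wrong host.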


> HBase Client Stuck at ConnectionManager$HConnectionImplementation.locateRegion
> --
>
> Key: HBASE-13934
> URL: https://issues.apache.org/jira/browse/HBASE-13934
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.1
>Reporter: Attila Tőkés
>Priority: Blocker
>
> HBase client gets stuck when I try to execute a Put operation.
> {code}
> Thread [BenchmarkThread-0] (Suspended)  
> owns: BufferedMutatorImpl  (id=43)  
> Unsafe.park(boolean, long) line: not available [native method]  
> LockSupport.park(Object) line: 186  
> AbstractQueuedSynchronizer$ConditionObject.await() line: 2043   
> ArrayBlockingQueue.take() line: 374  
> BoundedCompletionService.take() line: 75 
> ScannerCallableWithReplicas.call(int) line: 190 
> ScannerCallableWithReplicas.call(int) line: 56  
> RpcRetryingCaller.callWithoutRetries(RetryingCallable, int) line: 
> 200 
> ClientSmallReversedScanner.loadCache() line: 211
> ClientSmallReversedScanner.next() line: 185 
> ConnectionManager$HConnectionImplementation.locateRegionInMeta(TableName, 
> byte[], boolean, boolean, int) line: 1200 
> ConnectionManager$HConnectionImplementation.locateRegion(TableName, 
> byte[], boolean, boolean, int) line: 1109   
> AsyncProcess.submit(ExecutorService, TableName, List, boolean, 
> Callback, boolean) line: 369   
> AsyncProcess.submit(TableName, List, boolean, Callback, 
> boolean) line: 320
> BufferedMutatorImpl.backgroundFlushCommits(boolean) line: 206   
> BufferedMutatorImpl.flush() line: 183   
> HTable.flushCommits() line: 1436
> HTable.put(Put) line: 1032  
> {code}
> Source code:
> Connect:
> {code}
> this.config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", zookeeperHost);
> Connection connection = ConnectionFactory.createConnection(config);
> this.table = connection.getTable(TableName.valueOf(tableName));
> {code}
> Put:
> {code}
> final Put put = new Put(Bytes.toBytes(key));
> for (Map.Entry pair : columnValues.entrySet()) {
> final String column = pair.getKey();
> final String value = pair.getValue();
> put.addColumn(columnFamily, Bytes.toBytes(column), 
> Bytes.toBytes(value));
> }
> try {
> table.put(put);
> } catch (IOException e) {
> throw new ClientException("put error", e);
> }
> {code}
> Client log:
> {code}
> 17:00:58,193  INFO ZooKeeper:438 - Initiating client connection, 
> connectString=nosql-x64-node-1.local:2181 sessionTimeout=9 
> watcher=hconnection-0x3018fc1a0x0, quorum=nosql-x64-node-1.local:2181, 
> baseZNode=/hbase
> 17:00:58,325  INFO ClientCnxn:975 - Opening socket connection to server 
> 192.168.56.201/192.168.56.201:2181. Will not attempt to authenticate using 
> SASL (unknown error)
> 17:00:58,329  INFO ClientCnxn:852 - Socket connection established to 
> 192.168.56.201/192.168.56.201:2181, initiating session
> 17:00:58,346  INFO ClientCnxn:1235 - Session establishment complete on server 
> 192.168.56.201/192.168.56.201:2181, sessionid = 0x14e06dbd6450020, negotiated 
> timeout = 4
> {code}
> Server's log:
> {code}
> 2015-06-18 17:12:28,183 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
> server.NIOServerCnxn: Closed socket connection for client /192.168.56.1:35002 
> which had sessionid 0x14e06dbd6450020
> 2015-06-18 17:12:30,001 INFO  [SessionTracker] server.ZooKeeperServer: 
> Expiring session 0x14e06dbd645001d, timeout of 4ms exceeded
> 2015-06-18 17:12:30,002 INFO  [ProcessThread(sid:0 cport:-1):] 
> server.PrepRequestProcessor: Processed session termination for sessionid: 
> 0x14e06dbd645001d
> 2015-06-18 17:12:31,078 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
> server.NIOServerCnxnFactory: Accepted socket connection from 
> /192.168.56.1:35130
> 2015-06-18 17:12:31,080 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] 
> server.ZooKeeperServer: Client attempting to establish new session at 
> /192.168.56.1:35130
> 2015-06-18 17:12:31,092 INFO  [SyncThread:0] server.ZooKeeperServer: 
> Established session 0x14e06dbd6450021 with negotiated timeout 4 for 
> client /192.168.56.1:35130
> {code}
> Happens both with HBase running in standalone and in distributed mode.
> Any idea what's causing this?
> HBase version: 1.0.1 (client + server)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14180) Change timeout - SocketTimeoutException because of callTimeout

2015-08-03 Thread JIRA
Adrià V. created HBASE-14180:


 Summary: Change timeout - SocketTimeoutException because of 
callTimeout
 Key: HBASE-14180
 URL: https://issues.apache.org/jira/browse/HBASE-14180
 Project: HBase
  Issue Type: Bug
  Components: hbase, regionserver, rpc, Zookeeper
Affects Versions: 1.1.1
 Environment: Hadoop with Ambari 2.1.0
HBase 1.1.1.2.3
HDFS 2.7.1.2.3
Zookeeper 3.4.6.2.3
Phoenix 4.4.0.2.3
Reporter: Adrià V.


HBase keeps throwing a timeout exception; I have tried every configuration I 
could think of to increase it.

Partial stacktrace:
{quote}
Caused by: java.io.IOException: Call to 
hdp-w-1.c.dks-hadoop.internal/10.240.2.235:16020 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, waitTime=60001, 
operationTimeout=6 expired.
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1242)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:213)
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, 
waitTime=60001, operationTimeout=6 expired.
at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1184)
... 13 more
{quote}

I've tried editing config files and also setting the config in Ambari with the 
following keys to increase the timeout, with no success:
- hbase.rpc.timeout
- dfs.socket.timeout
- dfs.client.socket-timeout
- zookeeper.session.timeout

Also the Phoenix properties, but I think it's mostly an HBase issue:
- phoenix.query.timeoutMs
- phoenix.query.keepAliveMs
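For reference, the client-side timeout keys can also be collected and applied programmatically rather than through Ambari. This is a minimal sketch assuming HBase 1.1-era property names; the millisecond values are illustrative, not recommendations, and in a real client they would be set on the `HBaseConfiguration` (or in `hbase-site.xml`) before the connection is created.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TimeoutSettings {
    // Candidate client-side timeout keys (HBase 1.1-era names), values in ms.
    // hbase.rpc.timeout bounds a single RPC; hbase.client.operation.timeout
    // bounds the whole operation across retries; the scanner period bounds
    // individual scan RPCs.
    public static Map<String, String> candidateTimeouts() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("hbase.rpc.timeout", "120000");
        m.put("hbase.client.operation.timeout", "120000");
        m.put("hbase.client.scanner.timeout.period", "120000");
        return m;
    }

    public static void main(String[] args) {
        // In a real client these would be applied via conf.set(key, value)
        // on the Configuration passed to ConnectionFactory.createConnection.
        candidateTimeouts().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

Note that raising `hbase.rpc.timeout` alone is often not enough: the `CallTimeoutException` in the stack trace reports the per-call timeout, but the operation-level and scanner-level settings gate the retries around it.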

Full stack trace: 
{quote}
Error: Encountered exception in sub plan [0] execution. (state=,code=0)
java.sql.SQLException: Encountered exception in sub plan [0] execution.
at 
org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:157)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:251)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:240)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1250)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:
Mon Aug 03 16:47:06 UTC 2015, null, java.net.SocketTimeoutException: 
callTimeout=6, callDuration=60303: row '' on table 'hive_post_topics' at 
region=hive_post_topics,,1438084107396.cdbdc246ff0b7dfed31d481e0bccd2b5., 
hostname=hdp-w-1.c.dks-hadoop.internal,16020,1438619912282, seqNum=45322

at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:542)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at 
org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
at 
org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
at 
org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:339)
at org.apache.phoenix.execute.HashJoinPlan$1.call(Has

[jira] [Created] (HBASE-12248) broken link in hbase shell help

2014-10-13 Thread JIRA
André Kelpe created HBASE-12248:
---

 Summary: broken link in hbase shell help
 Key: HBASE-12248
 URL: https://issues.apache.org/jira/browse/HBASE-12248
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.98.6.1, 0.99.0
Reporter: André Kelpe
Priority: Minor


The help in hbase shell ends with these sentences: 

"The HBase shell is the (J)Ruby IRB with the above HBase-specific commands 
added.
For more on the HBase Shell, see http://hbase.apache.org/docs/current/book.html";

The link to the book leads to a 404 and should be 
http://hbase.apache.org/book.html instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12248) broken link in hbase shell help

2014-10-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HBASE-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Kelpe updated HBASE-12248:

Attachment: HBASE-12248.patch

> broken link in hbase shell help
> ---
>
> Key: HBASE-12248
> URL: https://issues.apache.org/jira/browse/HBASE-12248
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.99.0, 0.98.6.1
>Reporter: André Kelpe
>Priority: Minor
> Attachments: HBASE-12248.patch
>
>
> The help in hbase shell ends with these sentences: 
> "The HBase shell is the (J)Ruby IRB with the above HBase-specific commands 
> added.
> For more on the HBase Shell, see 
> http://hbase.apache.org/docs/current/book.html";
> The link to the book leads to a 404 and should be 
> http://hbase.apache.org/book.html instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12248) broken link in hbase shell help

2014-10-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170216#comment-14170216
 ] 

André Kelpe commented on HBASE-12248:
-

I have attached a patch to fix it.

> broken link in hbase shell help
> ---
>
> Key: HBASE-12248
> URL: https://issues.apache.org/jira/browse/HBASE-12248
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.99.0, 0.98.6.1
>Reporter: André Kelpe
>Priority: Minor
> Attachments: HBASE-12248.patch
>
>
> The help in hbase shell ends with these sentences: 
> "The HBase shell is the (J)Ruby IRB with the above HBase-specific commands 
> added.
> For more on the HBase Shell, see 
> http://hbase.apache.org/docs/current/book.html";
> The link to the book leads to a 404 and should be 
> http://hbase.apache.org/book.html instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

