Hi, I found that CDH5 YARN always leaves one unavailable process behind after
each MapReduce run; the original YARN processes are still there, such as:
3472 ThriftServer
3134 NodeManager
3322 HRegionServer
4383 -- process information unavailable
4595 Jps
2978 DataNode
I found under the
Hi all,
I am trying to use HBase 0.96 with Hadoop 2.4.0, but I can't find its
jar in the Maven central repo. Where can I find the jar of HBase 0.96 in a
Maven repo?
Thanks
--
Talat UYARER
Websitesi: http://talat.uyarer.com
Twitter: http://twitter.com/talatuyarer
Linkedin:
Maybe this is what you are looking for, Talat?
http://mvnrepository.com/artifact/org.apache.hbase/hbase/0.96.0-hadoop2
On May 22, 2014 9:38 AM, Talat Uyarer ta...@uyarer.com wrote:
Hi all,
I am trying to use HBase 0.96 with Hadoop 2.4.0, but I can't find its
jar in the Maven central repo. Where
When I add this POM snippet, Maven doesn't fetch HBase's jar. As you can see
at
http://search.maven.org/#artifactdetails%7Corg.apache.hbase%7Chbase%7C0.96.2-hadoop2%7Cpom
there are only POM files, but no jars.
2014-05-22 10:47 GMT+03:00 Renato Marroquín Mogrovejo
renatoj.marroq...@gmail.com:
Dear Talat,
Have you tried to go up one level in the Central Repository? They still
have jars for each module (e.g. hbase-client if you're trying to use the
API): http://search.maven.org/#browse%7C359444332
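For example, a dependency on the hbase-client module might look like the sketch below (version 0.96.2-hadoop2 is shown for illustration; in 0.96+ the parent hbase artifact is packaged as a POM only, so you depend on the per-module artifacts instead):

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>0.96.2-hadoop2</version>
</dependency>
```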
All the best,
Dima
On Thu, May 22, 2014 at 12:55 AM, Talat Uyarer ta...@uyarer.com
Hi,
I am not able to checkout the code from github (using TortoiseGit in
Windows). Any clue why ? Any access permissions required ?
git.exe clone --recursive --progress -v
https://git-wip-us.apache.org/repos/asf/hbase.git
D:\HADOOP\hbase\GIT\apache.svn.latest.trunk\hbase
I'm pleased to announce that Apache Phoenix has graduated from the
incubator to become a top level project. Thanks so much for all your help
and support - we couldn't have done it without the fantastic HBase
community! We're looking forward to continued collaboration.
Regards,
The Apache Phoenix team
Congratulations.
On Thu, May 22, 2014 at 2:46 PM, James Taylor jamestay...@apache.orgwrote:
I'm pleased to announce that Apache Phoenix has graduated from the
incubator to become a top level project. Thanks so much for all your help
and support - we couldn't have done it without the
Oh, nice!!!
Congratulations to all the Phoenix team!!!
JM
Le 2014-05-22 17:46, James Taylor jamestay...@apache.org a écrit :
I'm pleased to announce that Apache Phoenix has graduated from the
incubator to become a top level project. Thanks so much for all your help
and support - we couldn't
It just says the value is read from the FileInfo struct; I do not see any
algorithm for calculating AVG_KEY_LEN.
On Thu, May 22, 2014 at 10:08 AM, Ted Yu yuzhih...@gmail.com wrote:
Please take a look at HFileReaderV2 ctor (line 159 in trunk):
avgKeyLen =
Great, congratulations
On Fri, May 23, 2014 at 5:47 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Oh, nice!!!
Congratulations to all the Phoenix team!!!
JM
Le 2014-05-22 17:46, James Taylor jamestay...@apache.org a écrit :
I'm pleased to announce that Apache Phoenix has
See the following in AbstractHFileWriter:
int avgKeyLen =
entryCount == 0 ? 0 : (int) (totalKeyLength / entryCount);
fileInfo.append(FileInfo.AVG_KEY_LEN, Bytes.toBytes(avgKeyLen), false);
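To make the point concrete, here is a minimal standalone sketch of that computation (the class and method names are hypothetical; only the ternary expression comes from AbstractHFileWriter). The writer simply keeps a running totalKeyLength and entryCount and takes their integer quotient when the file is finalized; there is no further algorithm behind AVG_KEY_LEN.

```java
// Hypothetical standalone sketch; only the ternary mirrors AbstractHFileWriter.
public class AvgKeyLenSketch {
    // Integer average of all key lengths written so far; 0 for an empty file.
    public static int avgKeyLen(long totalKeyLength, long entryCount) {
        return entryCount == 0 ? 0 : (int) (totalKeyLength / entryCount);
    }

    public static void main(String[] args) {
        System.out.println(avgKeyLen(0, 0));   // empty file -> 0
        System.out.println(avgKeyLen(100, 8)); // 100 / 8 = 12 (truncating division)
    }
}
```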
On Thu, May 22, 2014 at 5:43 PM, ch huang justlo...@gmail.com wrote:
It just says the
Can you share your hbase-env.sh and hbase-site.xml? And hardware specs of your
cluster?
Regards,
Dhaval
From: Flavio Pompermaier pomperma...@okkam.it
To: user@hbase.apache.org
Sent: Saturday, 17 May 2014 2:49 AM
Subject: Re: HBase cluster design
Could
Hi Jingych,
This is the HBase 0.96.2 code, not CDH5-specific. Do you have more details
on your OOME? Did you figure out why it occurred?
JM
2014-05-22 1:20 GMT-04:00 jingych jing...@neusoft.com:
Hello, everyone!
I found that the CDH5 HBase client swallowed the OutOfMemory exception.
It didn't
Hi Vinay,
But regarding my earlier concern about calling get for a delete request in
order to check the timestamp: can we not query HBase directly using the
scanner, avoiding the CP hooks?
If you configure a CP to be called, HBase will call it. There isn't really
any way to avoid that. You
Great! Congratulations!
On May 22, 2014 2:46:16 PM PDT, James Taylor jamestay...@apache.org wrote:
I'm pleased to announce that Apache Phoenix has graduated from the
incubator to become a top level project. Thanks so much for all your
help
and support - we couldn't have done it without the
Hi, JM everyone!
Thanks for your reply!
My Java client uses the HTable#put(Put) method with a 2 MB buffer to commit rows.
After the client has been running for a while, jvisualvm shows the HTable threads
increasing and the heap size growing too.
Here is the snapshot:
I researched the implementation
The snapshot didn't go through.
Can you put it on some website and give us the URL ?
Cheers
On May 22, 2014, at 7:38 PM, jingych jing...@neusoft.com wrote:
Hi, JM everyone!
Thanks for your reply!
My Java client uses the HTable#put(Put) method with a 2 MB buffer to commit rows.
While
Sorry, this is an image. I do not know where I can upload it.
Could you please give me some suggestions?
jingych
From: Ted Yu
Date: 2014-05-23 10:45
To: user@hbase.apache.org
CC: Jean-Marc Spaggiari; user
Subject: Re: CDH5 hbase client outofmemory
The snapshot didn't go through.
Can you put
There are many options.
https://imgur.com/ is one.
Cheers
On Thu, May 22, 2014 at 7:53 PM, jingych jing...@neusoft.com wrote:
Sorry, this is an image. I do not know where I can upload it.
Could you please give me some suggestions?
jingych
From: Ted Yu
Date: 2014-05-23 10:45
To:
postimage.org is also an option, etc.
2014-05-22 22:59 GMT-04:00 Ted Yu yuzhih...@gmail.com:
There are many options.
https://imgur.com/ is one.
Cheers
On Thu, May 22, 2014 at 7:53 PM, jingych jing...@neusoft.com wrote:
Sorry, this is an image. I do not know where I can upload it.
Thanks!
But we have a security policy; I can't access these websites %_%
I tried some, but none worked, so I give up ╮(╯▽╰)╭
The posted snapshot just monitors the heap size and threads.
It shows that:
The heap size is 512 MB.
While the client is running, for the first 10 minutes the heap size is stable between
Hi
I'm using HBase 0.96.
In the HBase web UI, I see:
Max Heap is 16.0 G
Memstore Size is 208.6 M
Cache Size is 138M
Cache Free is 6.2G
And Used Heap is 10.2 G.
Used Heap is large. I want to know what could be in the Used Heap.
Thanks.
w.r.t. your questions toward the bottom of your email:
CDH HBase is one distribution of Apache HBase. As you can see from the URL,
it is based on 0.96.1.1.
HDP is another distribution.
There are many projects under the Apache Software Foundation, such as YARN,
HDFS, HBase, Phoenix.
You can get more
On regionserver:60030/rs-status#regionMemstoreStats, you should be able to
see how much memstore each region uses.
You can also use ganglia, etc to view the metrics.
See http://hbase.apache.org/book.html#hbase_metrics (15.4.4.3 and 15.4.4.7)
Cheers
On Thu, May 22, 2014 at 8:39 PM, sunweiwei
Because internally, deleteColumn with LATEST_TIMESTAMP does a get to find
the exact latest version on which the delete applies. From the user's
perspective it may not be needed, but since a read happens on those blocks
where the latest version of the cell resides, we have to track the metric.
Even