Thanks for the tip. I was just wondering what people have done in the past
- do people typically reserve a separate disk for logging activity?
Thanks
Varun
On Wed, Dec 26, 2012 at 1:13 PM, Stack st...@duboce.net wrote:
On Mon, Dec 24, 2012 at 9:27 AM, Varun Sharma va...@pinterest.com wrote:
Hi hua,
ZooKeeper is used by HBase for two main purposes: one is managing every
region server's state, the other is tracking the -ROOT- table location,
which is updated by the HMaster. So most HBase operations keep in touch
with ZooKeeper, and the Thrift server is no exception.
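A minimal sketch of peeking at that state from Python with the kazoo
ZooKeeper client (the quorum hosts are hypothetical; /hbase is the default
chroot and the znode layout shown is the 0.94-era one):

from kazoo.client import KazooClient

# Hypothetical ZooKeeper quorum; adjust hosts to your cluster.
zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

# Each live region server registers an ephemeral znode under /hbase/rs.
print(zk.get_children("/hbase/rs"))

# The HMaster publishes the -ROOT- region location in this znode.
data, stat = zk.get("/hbase/root-region-server")
print(data)

zk.stop()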
2012/12/27 hua beatls bea...@gmail.com
Yes, as you say, when the number of rows to be returned grows, the
latency grows too. A seek within an HFile block is a somewhat expensive op
now (not much, but still). The new prefix-trie encoding will be a huge
bonus here; there the seeks will be flying.. [Ted, also I have not looked
at your JSON object, but the rest sounds good to me.]
You might want to implement a test class which creates a JSON object, puts it
in the table, retrieves it and compares it with the original one...
JM
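A minimal sketch of such a round-trip test in Python with happybase (the
Thrift host, table name, and column family are hypothetical):

import json

import happybase

# Hypothetical Thrift gateway and table; adjust to your setup.
connection = happybase.Connection("thrift-host", port=9090)
table = connection.table("test_table")

original = {"user": "varun", "boards": [1, 2, 3]}

# Store the serialized JSON, read it back, and compare with the original.
table.put("row-1", {"cf:json": json.dumps(original)})
stored = table.row("row-1")
restored = json.loads(stored["cf:json"])

assert restored == original, "round-trip mismatch"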
On 27 Dec 2012 01:07, varaprasad.bh...@polarisft.com wrote:
Hi,
For the
Hi guys,
Sadly, my HBase client language is Python; I am using happybase for now,
which is based on Thrift AFAIK. I know Thrift so far still does not
support filters or coprocessors (correct me if I am wrong here). Can
someone point me to any JIRA items where I can track the plan/progress, if there is one?
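In the meantime, happybase does expose row-range and prefix scans over
Thrift, so coarse filtering can happen server-side by key and the rest
client-side; a small sketch under that assumption (the host, table, and
column names are made up):

import happybase

connection = happybase.Connection("thrift-host")  # hypothetical host
table = connection.table("pins")

# Narrow the scan server-side by key prefix; finer filtering happens in Python.
for key, data in table.scan(row_prefix="user123:", columns=["cf:status"]):
    if data.get("cf:status") == "active":
        print(key, data)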
Varun,
this really depends on your log rotation and retention policy.
Logs are usually pretty big, but if you rotate them once a day (for example) and
remove old logs after, say, 1 week, you probably will not need a huge
amount of space for them…
You should estimate log size before making a decision.
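A back-of-the-envelope version of that estimate in Python (all figures are
made-up examples; measure your own daily volume first):

# Hypothetical figures for a daily-rotation, one-week-retention policy.
daily_log_gb = 2.0    # average log output per day
retention_days = 7    # rotate daily, delete after a week
headroom = 1.5        # safety factor for bursts

required_gb = daily_log_gb * retention_days * headroom
print("Reserve roughly %.1f GB for logs" % required_gb)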
Looks like there was a socket timeout:
java.net.SocketTimeoutException: 6 millis timeout while waiting for
channel to be ready for read. ch :
java.nio.channels.SocketChannel[connected local=/***:39752
remote=***/***:60020]
Have you collected/checked the GC log on the server referenced above?
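A minimal sketch for spotting long stop-the-world pauses in such a GC log,
assuming the JVM runs with -XX:+PrintGCApplicationStoppedTime (the log
path and threshold are hypothetical):

import re

# Matches lines like:
# "Total time for which application threads were stopped: 12.3456789 seconds"
PAUSE_RE = re.compile(r"stopped: ([\d.]+) seconds")
THRESHOLD = 10.0  # pauses this long can explain client-side socket timeouts

with open("/var/log/hbase/gc.log") as gc_log:  # hypothetical path
    for line in gc_log:
        match = PAUSE_RE.search(line)
        if match and float(match.group(1)) > THRESHOLD:
            print(line.rstrip())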
On 12/28/12 12:14 PM, Ted Yu yuzhih...@gmail.com wrote:
Looks like there was a socket timeout:
java.net.SocketTimeoutException: 6 millis timeout while waiting for
channel to be ready for read. ch :
java.nio.channels.SocketChannel[connected local=/***:39752
remote=***/***:60020]
Have you
I was talking about the server which was anonymized:
***/***:60020
Cheers
On Fri, Dec 28, 2012 at 10:41 AM, Baugher,Bryan bryan.baug...@cerner.com wrote:
On 12/28/12 12:14 PM, Ted Yu yuzhih...@gmail.com wrote:
Looks like there was a socket timeout:
java.net.SocketTimeoutException: 6
We've seen this at times too and would be interested to know what causes
it. Until you get a better answer, one thing we found that helps is to move the
region via the hbase shell's move command. For us, clients were able to scan the
region once it had moved.
On Fri, Dec 28, 2012 at 2:00 AM, satish verma
I think you can take a look at your row-key design and distribute your
data evenly across your cluster; as you mentioned, even after you
added more nodes, there was no improvement in performance. Maybe you
have one node that is a hot spot while the other nodes have no work to do
(a key-salting sketch follows below this message).
regards!
Yong
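A hedged sketch of one common remedy, salting the row key so that
sequential writes spread across regions (the bucket count and key shape
are made-up examples):

import hashlib

NUM_BUCKETS = 16  # hypothetical; roughly match your region count

def salted_row_key(user_id, timestamp):
    # Prefix the natural key with a hash-derived bucket so writes spread
    # across regions instead of hammering a single region server.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % NUM_BUCKETS
    return "%02d-%s-%d" % (bucket, user_id, timestamp)

# Example: all keys for one user land in one bucket, but different users
# scatter across buckets, so no single region takes all the load.
print(salted_row_key("user123", 1356648000))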
On Tue, Dec
I believe that is one of our region servers; I will have to wait till
tomorrow to check its GC logs.
On 12/28/12 12:45 PM, Ted Yu yuzhih...@gmail.com wrote:
I was talking about the server which was anonymized:
***/***:60020
Cheers
On Fri, Dec 28, 2012 at 10:41 AM, Baugher,Bryan