Hi
Are you using any coprocessors? Can you see how many store files are
created?
The number of blocks getting cached will give you an idea too.
Regards
Ram
On Tue, Oct 30, 2012 at 4:25 AM, Jeff Whiting je...@qualtrics.com wrote:
We have 6 region servers given 10G of memory for HBase. Each
Hi
Can you check whether your HTable instances are shared across different threads?
That could be the reason for your NullPointerException.
Regards
Ram
On Tue, Oct 30, 2012 at 7:22 AM, xkwang bruce bruce.xkwa...@gmail.com wrote:
Hi, 苏铖.
You may need to pre-split your HTable when the load is heavy or
/30/2012 6:21 PM, Jeff Whiting wrote:
We have no coprocessors. We are running replication from this cluster to
another one.
What is the best way to see how many store files we have? Or checking on
the block cache?
~Jeff
On 10/30/2012 12:43 AM, ramkrishna vasudevan wrote:
Hi
Are you
Can multiple region servers run on one real machine?
(I guess not though)
No. Every RS runs on a different physical machine.
max.file.size actually applies per region. Suppose you create a table and then
insert 20G of data; it will get split into further regions.
Yes all 60G of data
Can you try restarting the cluster, I mean the master and the RSs?
Also, if this persists, try clearing the ZK data and restarting.
Regards
Ram
On Thu, Nov 1, 2012 at 2:46 PM, Cheng Su scarcer...@gmail.com wrote:
Sorry, my mistake. Ignore about the max store size of a single CF please.
m(_ _)m
Nice...Thanks for your follow up.
Regards
Ram
On Fri, Nov 2, 2012 at 11:43 PM, Dan Brodsky danbrod...@gmail.com wrote:
Ram,
I wanted to follow up with you since you helped me with your below comment.
It turns out that the ZK configuration files somehow got changed (reverted
to their
There is no interface which says that the major compaction is completed.
But you can see that major compaction is in progress from the web UI.
Sorry if I am wrong here.
Regards
Ram
On Thu, Nov 8, 2012 at 11:38 AM, yun peng pengyunm...@gmail.com wrote:
Hi, All,
I want to measure the duration of
Sorry, I am not very sure if there is any link between the coprocessor and
the region not being online.
Please check if your META region is online.
Regards
ram
On Sat, Nov 10, 2012 at 8:37 PM, yonghu yongyong...@gmail.com wrote:
Dear All,
I used hbase 0.94.1 and implemented the test example of WAL trigger
Check your UI. Does it show any regions in transition?
Can you try doing kill -9 and restarting the region server?
Regards
Ram
On Thu, Nov 15, 2012 at 5:48 PM, yutoo yanio yutoo.ya...@gmail.com wrote:
Hi everybody,
in my cluster sometimes a region server does not respond and we get an exception
Hi Chris
So you mean that you have explicitly set the timestamp to 0 for the column
which you did not want to delete?
Regards
Ram
On Mon, Nov 19, 2012 at 1:44 AM, Chris Larsen clar...@euphoriaaudio.com wrote:
Hello, I was going nuts over an issue where I would try to delete a single
column
) but I was wondering
if a timestamp of 0 is legal and if it isn't, maybe HBase should
kick back errors if someone tries it.
-Original Message-
From: ramkrishna vasudevan [mailto:ramkrishna.s.vasude...@gmail.com]
Sent: Sunday, November 18, 2012 11:31 PM
To: user@hbase.apache.org
Hotspotting is bound to happen until the region starts splitting and gets
assigned to different region servers.
Regards
Ram
On Wed, Nov 21, 2012 at 12:49 PM, Ajay Bhosle
ajay.bho...@relianceada.com wrote:
Hi,
I am inserting some data in hbase which is getting hot spotted in a
particular server.
-with-sequential-keys/
HTH
Regards,
Mohammad Tariq
On Wed, Nov 21, 2012 at 1:49 PM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Hotspotting is bound to happen until the region starts splitting and gets
assigned to different region servers.
Regards
Ram
On Wed
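The standard mitigation discussed in hotspotting threads like this one is salting the row key with a stable hash bucket, so sequential writes spread across regions. A minimal sketch of the idea; the bucket count and key format here are illustrative, not from this thread:

```python
import hashlib

NUM_BUCKETS = 16  # illustrative; typically sized to the region/server count

def salted_key(row_key: str) -> str:
    """Prefix a sequential key with a stable hash bucket so consecutive
    writes land on different regions instead of one hot region."""
    bucket = int(hashlib.md5(row_key.encode()).hexdigest(), 16) % NUM_BUCKETS
    return f"{bucket:02d}-{row_key}"

print(salted_key("event-00000001"))
```

The trade-off is that a range scan over the original key order now needs one scan per bucket.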
Oops, nice case. But there is one alternate way...
There are some hooks provided while log splitting happens.
Use the preWALRestore and postWALRestore hooks. But you need to know that
the WALEdit that comes for the Put is the one which was already deleted.
With this you can achieve what you
Which version of HBase? Coprocessors are available only in versions 0.92 and
above.
Regards
Ram
On Wed, Nov 21, 2012 at 8:50 PM, Bing Jiang jiangbinglo...@gmail.com wrote:
we need to confirm that puts must be safe, but deletes must be quick and
low-latency.
On Nov 21, 2012 11:10 PM,
Kevin, yes, I agree with the number of WALs kept. Lowering it will allow frequent
flushes and that should lower the occurrence of this issue.
The more the flushes happen, the lesser the probability of this issue
occurring.
But still, if a RS crashes after doing the WAL-less deletes, then the puts
are bound
Sorry Bing, I am not very clear on what you are suggesting:
'One idea occurs to me why not check or restore wal when compaction
executes. If it does, hbase can drop some unused hlog'.
Could you be more clear? Are you trying to read the WAL while compaction
is going on?
Regards
Ram
On Thu, Nov 22, 2012
Hi Yun,
Are you trying to disable Minor compactions?
Regards
Ram
On Fri, Nov 23, 2012 at 5:20 AM, yun peng pengyunm...@gmail.com wrote:
Hi, I want to disable automatic compaction in HBase. Currently I used
following configurations in conf/hbase-site.xml
The problem is compaction does not
Setting up HBase on a local system is a frequently asked question on the mailing list :).
Regards
Ram
On Mon, Nov 26, 2012 at 10:01 PM, Alok Singh Mahor alokma...@gmail.com wrote:
thank you :)
On Mon, Nov 26, 2012 at 4:28 PM, Mohammad Tariq donta...@gmail.com
wrote:
You are welcome Alok :)
Yes,
Can you paste the master logs and RS logs? I am sure that there should have
been some errors in them. That is why it is not able to locate the META.
Regards
Ram
On Tue, Nov 27, 2012 at 7:51 PM, shyam kumar lakshyam.sh...@gmail.com wrote:
Hi,
Yes, I am able to see the table and table description in
As far as I see, altering the table with the new column family should be
easier:
- Disable the table.
- Issue a modify table command with the new column family.
- Run a compaction.
Now after this, when you start doing your puts, they should be in alignment
with the new schema defined for the table. You
, Nov 28, 2012 at 10:24 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
As far as I see, altering the table with the new column family should be
easier:
- Disable the table.
- Issue a modify table command with the new column family.
- Run a compaction.
Now after this, when you
if we change the column family, do we need to change all the Java
programs which use the old column family for a field?
regards,
Rams
On Wed, Nov 28, 2012 at 11:41 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
I am afraid it has to be changed...Because for your puts
I see .. Thanks Stack..
Regards
Ram
On Thu, Nov 29, 2012 at 1:19 AM, yonghu yongyong...@gmail.com wrote:
I like the illustration of Stack.
regards!
Yong
On Wed, Nov 28, 2012 at 6:56 PM, Stack st...@duboce.net wrote:
On Wed, Nov 28, 2012 at 6:40 AM, matan ma...@cloudaloe.org wrote:
Hi
Please check if this issue is similar to HBASE-5897. It is fixed in 0.92.2 as
I see from the fix versions.
Regards
Ram
On Fri, Nov 30, 2012 at 1:13 PM, yonghu yongyong...@gmail.com wrote:
Dear all,
I have two tables, named test and tracking. For every put in the test
table, I defined a
RS
threads doing.
Maybe we can get some clues from that.
-Anoop-
On Fri, Nov 30, 2012 at 2:04 PM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Hi
Please check if this issue is similar to HBASE-5897. It is fixed in 0.92.2 as
I see from the fix versions.
Regards
Hi Jean
In case of Puts and Scans, if the split is in progress it will retry.
Suppose, before the client gets to know about the split, the put
is directed with the parent region row key; internally HBase directs it to
the correct daughter region.
But I am not clear on what you mean by
Hi
Actually you can try using HTable.getRegionLocations(). But you need to
pass the table name for it.
Regards
Ram
On Tue, Dec 4, 2012 at 6:34 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
Hi,
I'm wondering, what's the best way to know which RegionServer a region
is hosted on.
Ok, what I think can be done is to write a custom filter like
PrefixFilter and use a comparator that compares longs.
In the case of SCVF, a comparator is passed to it. So we can implement a
comparator that compares longs and pass it to the constructor of SCVF.
Hope this helps.
Regards
Ram
On
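The subtlety with a long-comparing comparator is byte order: HBase compares raw bytes lexicographically, and plain big-endian two's complement sorts negative longs after positive ones. The comparator itself would be a small Java class, but the ordering idea it (or the value encoding) must respect can be sketched as follows; this is an illustrative sketch, not HBase code:

```python
import struct

def long_to_sortable_bytes(v: int) -> bytes:
    """Encode a signed 64-bit value so lexicographic byte order matches
    numeric order: big-endian two's complement with the sign bit flipped."""
    return struct.pack(">Q", (v ^ (1 << 63)) & ((1 << 64) - 1))

vals = [-10, -1, 0, 1, 1000]
encoded = [long_to_sortable_bytes(v) for v in vals]
assert sorted(encoded) == encoded  # byte order now equals numeric order
```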
HBaseAdmin has a flush() api that accepts the table name. This will ensure
that the data in memory is flushed.
Regards
Ram
On Tue, Dec 4, 2012 at 8:39 PM, Shengjie Min kelvin@gmail.com wrote:
According to HBase's design, memstore flush happens automatically behind
the scenes when it
Generally if the data is not used after some short duration people tend to
go with individual tables and then drop the table itself..
Regards
Ram
On Thu, Dec 6, 2012 at 10:05 AM, Anoop Sam John anoo...@huawei.com wrote:
Hi Manoj
If I read you correctly, I think you want to aggregate
Is block cache ON? Check out HBASE-5898.
Regards
Ram
On Thu, Dec 6, 2012 at 9:55 AM, Anoop Sam John anoo...@huawei.com wrote:
is the META table cached just like other tables
Yes Varun I think so.
-Anoop-
From: Varun Sharma [va...@pinterest.com]
at 10:04 PM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Is block cache ON? Check out HBASE-5898.
Regards
Ram
On Thu, Dec 6, 2012 at 9:55 AM, Anoop Sam John anoo...@huawei.com
wrote:
is the META table cached just like other tables
Yes Varun I think so
the issue repeatedly - it looks like something is probably wrong
with the locking mechanism when you have a higher number of IPC handlers
like 200.
On Thu, Dec 6, 2012 at 2:59 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Actually when we observed that our block cache
Nice explanation Anoop. :)
No prefix filters will be needed to query the secondary index table. As
Anoop told, the index table rowkey is laid out as:
Startkey + IndexName - static part
Value - main table rowkey value
Actualrowkey - actual rowkey.
So you just need to set a start row of Startkey + IndexName + Value...
This will give you the Actual
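The start-row construction described above amounts to a prefix scan: fix the static part plus the value, and compute a stop row just past that prefix. A language-agnostic sketch (the key layout and separator here are illustrative):

```python
def prefix_scan_range(prefix: bytes):
    """Start/stop rows covering every key that begins with `prefix`:
    the stop row is the prefix with its last non-0xFF byte incremented."""
    stop = bytearray(prefix)
    while stop and stop[-1] == 0xFF:
        stop.pop()
    if stop:
        stop[-1] += 1
    return prefix, bytes(stop)  # empty stop row means "end of table"

start, stop = prefix_scan_range(b"startkey|idx_name|value|")
assert start <= b"startkey|idx_name|value|mainrow123" < stop
```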
So you find that a scan with a filter and a count with the same filter are
giving you different results?
Regards
Ram
On Mon, Dec 24, 2012 at 8:33 PM, Dalia Sobhy dalia.mohso...@hotmail.com wrote:
Dear all,
I have 50,000 rows with diagnosis qualifier = cardiac, and another 50,000
rows with renal.
Okay, seeing the shell script and the code, I feel that while you use this
counter, the user's filter is not taken into account.
It adds a FirstKeyOnlyFilter and proceeds with the scan. :(
Regards
Ram
On Mon, Dec 24, 2012 at 10:11 PM, Dalia Sobhy dalia.mohso...@hotmail.com wrote:
yeah scan
Hi
You could have a custom filter implemented which is similar to
FirstKeyOnlyFilter.
Implement the filterKeyValue method such that it matches your keyvalue
(the specific qualifier that you are looking for).
Deploy it in your cluster. It should work.
Regards
Ram
On Mon, Dec 24, 2012 at
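The counting behaviour such a filter gives can be modelled like this: a row is counted once as soon as one cell matches the wanted family/qualifier/value, and the row's remaining cells are skipped. An illustrative model only, not the HBase Filter API:

```python
def count_matching_rows(rows, family, qualifier, value):
    """Model of a FirstKeyOnlyFilter-style custom filter: count a row on
    its first matching cell, then stop examining that row's other cells."""
    count = 0
    for row in rows:  # each row: list of (family, qualifier, value) cells
        for fam, qual, val in row:
            if (fam, qual, val) == (family, qualifier, value):
                count += 1
                break  # first match is enough; skip remaining cells
    return count

rows = [
    [("info", "diagnosis", "cardiac"), ("info", "age", "60")],
    [("info", "diagnosis", "renal")],
    [("info", "diagnosis", "cardiac")],
]
assert count_matching_rows(rows, "info", "diagnosis", "cardiac") == 2
```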
What is the exception you are getting?
When you got the number of regions as 3, how many region servers did you have?
Regards
Ram
On Tue, Dec 25, 2012 at 2:01 PM, Saurabh Dutta
saurabh.du...@impetus.co.in wrote:
Hi,
I'm trying to create regions using startkey, endkey and number of
regions.
I'm
Oh, this seems interesting.
Do you have logs before this which say for which region the split started?
BTW, are you using any custom split policy?
Regards
Ram
On Tue, Dec 25, 2012 at 7:43 PM, yuzhih...@gmail.com wrote:
Can you provide more log around the time this happened ?
Looks like the
...@spaggiari.org wrote:
By looking at the code, we have:
"Not splittable if we find a reference store file present in the store..."
Not really sure I understand what that means :(
2012/12/25, ramkrishna vasudevan ramkrishna.s.vasude...@gmail.com:
Oh, this seems interesting.
You have logs before
@Dalia
I think the aggregation client should work with what you have passed. What
I meant in the previous mail was with table.count() and now with
AggregationClient.
{code}
if (scan.getFilter() == null && qualifier == null)
  scan.setFilter(new FirstKeyOnlyFilter());
{code}
So as you have
Dalia,
I tried out this example:
{code}
private static final byte[] TEST_TABLE = Bytes.toBytes("TestTable");
private static final byte[] TEST_FAMILY = Bytes.toBytes("TestFamily");
private static final byte[] TEST_QUALIFIER =
Bytes.toBytes("TestQualifier");
private static final byte[] TEST_MULTI_CQ
There is also something called FuzzyRowFilter. It should help you out.
Regards
Ram
On Thu, Dec 27, 2012 at 4:13 PM, Mohammad Tariq donta...@gmail.com wrote:
Try RowFilter with a RegexStringComparator:
Filter filter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
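FuzzyRowFilter, mentioned above, matches row keys against a pattern plus a per-byte mask, where masked positions are wildcards. The matching idea can be sketched as follows (mask semantics simplified here: 1 = wildcard, 0 = must match; illustrative, not the Java API):

```python
def fuzzy_match(row: bytes, pattern: bytes, mask: bytes) -> bool:
    """Simplified FuzzyRowFilter check: bytes where the mask is 0 must
    equal the pattern; bytes where the mask is 1 are wildcards."""
    if len(row) < len(pattern):
        return False
    return all(m == 1 or r == p for r, p, m in zip(row, pattern, mask))

mask = b"\x01\x01\x01\x01\x00\x00\x00\x00\x00"  # first 4 bytes arbitrary
assert fuzzy_match(b"ab12_2012", b"????_2012", mask)
assert not fuzzy_match(b"ab12_2013", b"????_2012", mask)
```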
As per the design there is no get() operation at all. In case of an equals
query, nothing is cached in memory.
For Range may be we need to cache some intermediate result.
Regards
Ram
On Thu, Dec 27, 2012 at 9:24 PM, Anoop John anoop.hb...@gmail.com wrote:
how the massive number of get() is going
Oh...Oops..
Regards
Ram
On Wed, Jan 2, 2013 at 3:14 AM, Dalia Sobhy dalia.mohso...@hotmail.com wrote:
Thanks Ram,
Issue is resolved; I forgot to add
scan.addFilter(filterlist);
That's why it was not filtering!
Date: Wed, 26 Dec 2012 21:11:32 +0530
Subject: Re: Hbase Count Aggregate
Hi Dalia
The NameNode SafeMode exception should not be related to the number of region
servers.
Anyway, turn off safemode and try restarting your cluster. Maybe internally the
namenode is not able to locate some blocks.
That needs more investigation.
Regards
Ram
On Thu, Jan 3, 2013 at 8:13 AM, varun
As far as I can see, it's more related to using the coprocessor framework in
this solution, which helps us in a great way to avoid unnecessary RPC calls when
we go with region-level indexing.
Regards
Ram
On Wed, Jan 9, 2013 at 8:52 AM, Anoop Sam John anoo...@huawei.com wrote:
Totally agree with Lars.
Hi Jean
The region transition states are not exposed to the end user.
You can only know if the table is enabled, enabling, disabled or disabling.
But if you want to do in any of your testcases then YES it is possible.
Regards
Ram
On Thu, Jan 10, 2013 at 7:42 PM, Jean-Marc Spaggiari
Yes definitely you will get back the data.
Please read the HBase Book that explains things in detail.
http://hbase.apache.org/book.html.
Regards
Ram
On Thu, Jan 10, 2013 at 8:48 PM, Panshul Gupta panshul...@gmail.com wrote:
Hello,
I was wondering if it is possible that I have data stored
In Anoop's solution, the put basically happens directly on the index region
rather than doing a put through HTable.
Regards
Ram
On Sun, Jan 13, 2013 at 9:28 AM, Andrew Purtell andrew.purt...@gmail.com wrote:
Yes, especially if the cross region communication is in process.
On Jan 12, 2013, at
Hive is more for batch, and HBase is more for real-time data.
Regards
Ram
On Thu, Jan 17, 2013 at 10:30 PM, Anoop John anoop.hb...@gmail.com wrote:
In the case of Hive, data insertion means placing the file under the table path
in HDFS. HBase needs to read the data and convert it into its own format.
On Mon, Jan 21, 2013 at 1:46 PM, Eugeny Morozov
emoro...@griddynamics.com wrote:
I do not
understand why with FuzzyRowFilter it goes from one region to another until it
stops at the value. I suppose the scanning process would start at once on
all regions.
The scanning process does not start in parallel
Hi Jean
Before replying with what I know: region splits can be configured too.
Ok, now on how the split happens:
- You can explicitly ask the region to get split on a specific row key,
if you know that splitting on that row key will yield you almost equal
region sizes.
- Now when HBase tries
the default behaviour and split
based on the row number instead of the midkey, can I hook somewhere?
Or will I have to disable the default split (by setting the
maxfilesize to something like 20GB) and run a job to split the regions
manually?
Thanks,
JM
2013/1/22, ramkrishna vasudevan
Hi Varun
Generally, when the memstore size has been reached for the region, the writes are
not blocked except for a fraction of time till a new memstore is created,
so that new puts can go into the newly created memstore.
But if you have a heavy flow of puts happening and that is increasing the
size
This morning, I have some very big regions, still over 100MB, and
some very small ones. And the big regions are at least a hundred times bigger
than the small ones.
The regions that were bigger than 100 MB (much bigger): what was the data in
them? Were there any hefty rows in them? Check them.
Oops, which version of HBase is this?
If the problem persists, can you restart the cluster? Sounds bad, but you
may have to do that. :(
Regards
Ram
On Thu, Jan 24, 2013 at 3:46 PM, hua beatls bea...@gmail.com wrote:
Hi,
i have a table in 'transition' state, which couldn't be 'disable'
, with a different heading.
anyways, do as Ram sir has said.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Thu, Jan 24, 2013 at 3:54 PM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Oops, which version of HBase is this?
If the problem
@Toby
If you wish to go to a specific page, you need to set the start row that
comes as part of that page.
So what I feel is: implement a custom page filter, keep doing next(), and
display only those records that suit the page you clicked,
and send them back to the client. Anyway the
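The usual paging pattern is to remember the last row key of the previous page and restart the scan just after it, rather than re-reading from the table start. A sketch of the client-side logic (hypothetical helper, not an HBase API):

```python
import bisect

def next_page(sorted_keys, last_seen, page_size):
    """Return the next page of row keys after `last_seen`; in a real scan
    you would set the start row to last_seen plus a trailing zero byte."""
    start = 0 if last_seen is None else bisect.bisect_right(sorted_keys, last_seen)
    return sorted_keys[start:start + page_size]

keys = [f"row{i:03d}" for i in range(10)]
page1 = next_page(keys, None, 4)
page2 = next_page(keys, page1[-1], 4)
assert page2 == ["row004", "row005", "row006", "row007"]
```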
As a POC, just try to load the data into another table whose rowkey
is the original row's value.
Try to scan the index table first and then get the main table row key.
At first this should help; later you can make this better by using
coprocessors.
Regards
Ram
On Mon, Jan 28, 2013 at
Ok Ted, I agree it should be available in 0.96. I too was suggesting a
workaround in my mail, similar to what Lars said.
Thanks Ted.
Regards
Ram
On Wed, Jan 30, 2013 at 8:23 AM, Ted Yu yuzhih...@gmail.com wrote:
I have checked in the following into trunk:
HBASE-7712 Pass ScanType into
By "logs of hbase" do you mean the normal logging or the WAL logs? Sorry, I am
not getting your question here.
The WAL trigger is for the WAL logs.
Regards
Ram
On Sat, Feb 2, 2013 at 1:31 PM, yonghu yongyong...@gmail.com wrote:
Hello,
For some reasons, I need to analyze the log of hbase. However, the log
, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
By "logs of hbase" do you mean the normal logging or the WAL logs? Sorry, I am
not getting your question here.
The WAL trigger is for the WAL logs.
Regards
Ram
On Sat, Feb 2, 2013 at 1:31 PM, yonghu yongyong...@gmail.com wrote:
Hello
Hi
If the put is successful and stored in the memstore, then it should be a
successful put.
What type of exception did the server throw?
Regards
Ram
On Mon, Feb 4, 2013 at 12:50 PM, Mesika, Asaf asaf.mes...@gmail.com wrote:
Hi,
Can the following scenario occur?
Create a Put for (rk1, cf1, cq1,
information. By the way, can you give me an
example about how I can define the customized logcleaner class.
regards!
Yong
On Sat, Feb 2, 2013 at 1:54 PM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Ya, I think that is the property that controls the archiving of the logs.
So
Hearty congratulations Deva!!!
Regards
Ram
On Thu, Feb 7, 2013 at 10:53 AM, Azuryy Yu azury...@gmail.com wrote:
Congratulations, Devaraj!
On Thu, Feb 7, 2013 at 1:19 PM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
We've brought in one new Apache HBase Committer: Devaraj Das.
On behalf of
I would think that the idea suggested by Tianyang Chang is good. Though
we may have to disable the table.
Regards
Ram
On Fri, Feb 8, 2013 at 12:46 AM, Michael Segel michael_se...@hotmail.com wrote:
I think it's more akin to the typical 'truncate' command.
Actually, if someone wanted to
What is the version that you are using? You can run the tool called HBCK,
which should help you.
But before running it, what operation were you trying to perform? DISABLING
and then ENABLING a table, or was it just normal region assignment while
creating a table, or
region balancing?
Regards
Ram
On Fri,
Yes, the RIT states were a bit confusing for the end user. In the latest version
a lot of effort has gone into solving these problems. Hope for the best in the
future :)
Regards
Ram
On Fri, Feb 8, 2013 at 12:38 PM, Samir Ahmic ahmic.sa...@gmail.com wrote:
Hi, Kiran
Welcome to the beautiful world of
To your question regarding whether you can write a mapper that sends only the
columns that you need:
Yes, of course you can do it.
See the example in Import.java. It shows you how a simple copy table can
be implemented. Use a similar way, but before creating the new put for the
new table, just check
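The per-column check in the mapper can be sketched like this: when copying, keep only the wanted (family, qualifier) pairs before emitting the new put. The column whitelist and cell shape here are hypothetical:

```python
WANTED = {("f1", "colA"), ("f1", "colB")}  # hypothetical column whitelist

def filter_cells(cells):
    """CopyTable-style mapper step: drop every cell whose
    (family, qualifier) pair is not in the wanted set."""
    return [c for c in cells if (c[0], c[1]) in WANTED]

cells = [("f1", "colA", b"x"), ("f1", "colC", b"y"), ("f2", "colA", b"z")]
assert filter_cells(cells) == [("f1", "colA", b"x")]
```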
What do you see in the thread dump? Maybe HBASE-7336 deals with scans
hitting the same block of data. But I see from your mail that the scans are
independent of each other and they scan different data, but in the same
region.
Regards
Ram
On Sat, Feb 9, 2013 at 11:22 AM, James Taylor
Hi David,
Have you changed anything in the configuration related to compactions?
If there are more store files created and the compactions are not run
frequently, we end up with this problem. At least there will be a consistent
increase in the file handle count.
Could you run compactions
, 2013 at 4:58 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Hi David,
Have you changed anything in the configuration related to compactions?
If there are more store files created and the compactions are not run
frequently, we end up with this problem. At least
If the scan is happening on the same region, then going for a Scan would be a
better option.
Regards
Ram
On Mon, Feb 18, 2013 at 4:26 PM, Nicolas Liochon nkey...@gmail.com wrote:
i) Yes, or, at least, often yes.
ii) You're right. It's difficult to guess how much it would improve the
Ideally the ROOT table should be reassigned once the RS carrying ROOT goes
down. This should happen automatically.
Maybe check what your logs say; that would give us an insight.
Before that, if you can restart your master, it may solve this problem. Even
then, if it persists, try to delete the zk
The Import.java in the package org.apache.hadoop.hbase.mapreduce. This
comes along with the src code.
Have you tried the option of using the SingleColumnValueFilter? One thing
you need to note is that if you are going for a search on the entire table,
then all the regions have to be scanned, but
Hi JM
As per the fix versions it is fixed even in 0.90 version. So it is
available in the version that Pranav is currently using.
Regards
Ram
On Thu, Feb 21, 2013 at 8:58 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Pranav,
I don't think this feature is available on the
Congrats Sergey.
Regards
Ram
On Sat, Feb 23, 2013 at 7:17 AM, Marcos Ortiz mlor...@uci.cu wrote:
Congratulations, Sergey.
On 02/22/2013 04:39 PM, Ted Yu wrote:
Hi,
Sergey has 51 issues under his name:
*
https://issues.apache.org/jira/issues/?jql=project%20%
Thanks for the info, Dave. Others will benefit too.
Regards
Ram
On Mon, Feb 25, 2013 at 7:25 PM, Dave Latham lat...@davelink.net wrote:
We recently saw some of these warnings in a cluster we were setting up.
These warnings mean there are rows in the META table that are missing one
of
Can you try the same with the Java client?
Raise a bug, JM.
Regards
Ram
On Mon, Feb 25, 2013 at 8:54 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi,
When I run this in the shell:
scan 'entry' , {STARTROW = 'z', LIMIT = 10}
I get rows starting with z. That's fine.
But when I
This could be a case where the RPC server response got closed and we tried to
act on it after it was nullified. I remember some similar issues fixed in
this part; I don't remember whether it was in the 0.92 version.
On Sat, Mar 2, 2013 at 11:14 PM, Anoop John anoop.hb...@gmail.com wrote:
Is this really
Yes Yong is right
/**
 * Delete the specified version of the specified column.
 * @param family family name
 * @param qualifier column qualifier
 * @param timestamp version timestamp
 * @return this for invocation chaining
 */
@SuppressWarnings("unchecked")
public Delete
Just curious to know:
what type of hooks have been provided for the secondary index, mainly wrt
HBase?
Regards
Ram
On Fri, Mar 8, 2013 at 7:25 PM, Amresh Kumar Singh
amresh.si...@impetus.co.in wrote:
Hi All,
We are happy to announce the release of Kundera 2.4.
Kundera is a JPA 2.0 compliant,
I think it is your GC config. What is your heap size? What is the
data that you pump in, and how much is the block cache size?
Regards
Ram
On Fri, Mar 8, 2013 at 9:31 PM, Ted Yu yuzhih...@gmail.com wrote:
0.94 currently doesn't support hadoop 2.0
Can you deploy hadoop 1.1.1 instead ?
) Configure kundera.indexer.class property in persistence.xml
2) Configured kundera.indexer.class must implement Indexer interface.
-Vivek
From: ramkrishna vasudevan [ramkrishna.s.vasude...@gmail.com]
Sent: 08 March 2013 19:37
To: user@hbase.apache.org
As a note of caution: just don't have two empty qualifiers in the same CF, but
you can still have empty qualifiers in different CFs.
Regards
Ram
On Mon, Mar 11, 2013 at 10:20 AM, Anoop Sam John anoo...@huawei.com wrote:
can we have column name dob under column families F1 and F2?
Just fine.. Go ahead.. :)
Can you do one thing:
when you stop the services, do it this way:
- Stop the RS
- Then stop the master.
That is always better, I feel.
Regards
Ram
On Fri, Mar 15, 2013 at 6:49 AM, Time Less timelessn...@gmail.com wrote:
We have a 15-node HBase cluster with RS on same nodes as HDFS DN. We do a
HBase shipping a generic framework for different interfaces is needed for
ease of use for the users. +1 on the idea.
Getting the correct result out for float values and positive and negative
integers had to be taken care of by the users or by using some wrappers.
This will help to solve that problem.
The way that Anoop has suggested will make you issue a scan command from
the client, but the CP hook will tell you what the KVs in the memstore are,
based on that particular scan's current read point.
To scan the HFiles directly, are you using MapReduce? Or are you
directly reading the
I remember there was another user who got the same issue. He thought
the minor compaction was a major compaction and was saying that the setting
that we provide to disable major compaction was not working/not taking
effect.
Regards
Ram
On Thu, Mar 21, 2013 at 11:33 PM, Jean-Daniel Cryans
My question is this. If a compaction fails due to a regionserver loss
mid-compaction, does the regionserver that picks up the region continue
where the first left off? Or does it have to start from scratch?
- The answer to this is, it works from the beginning again.
Regards
Ram
On Mon, Mar 25,
Hi Pankaj
Is it possible for you to profile the RS when this happens? Maybe
the Thrift layer adds an overhead, or somewhere the code is
spending more time.
As you said, there may be a slight decrease in the performance of the puts,
because now more values have to go in, but should
What is the rate at which you are flushing? Frequent flushes will cause more
files, and compactions may happen frequently but take less time.
If the flush size is increased to a bigger value, then you will end up spending
more time in compaction, because the entire file has to be read and rewritten.
Interesting. I need to check this.
Maybe we should configure different names for the local output directory
for each job. By any chance, are both jobs writing to the same path?
Regards
Ram
On Wed, Mar 27, 2013 at 6:44 AM, GuoWei wei@wbkit.com wrote:
Dear JM,
It's correct.
The Hbase
Could you give us some more insight on this?
So you mean when you set the row key to 'azzzaaa', though this row does not
exist, the scanner returns some other row? Or is it giving you a row that
does not exist?
Or do you mean it is doing a full table scan?
Which version of HBase and what type of
Same question, same time :)
Regards
Ram
On Thu, Mar 28, 2013 at 9:53 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Could you give us some more insight on this?
So you mean when you set the row key to 'azzzaaa', though this row does
not exist, the scanner returns some
:23 PM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Same question, same time :)
Regards
Ram
On Thu, Mar 28, 2013 at 9:53 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Could you give us some more
HLogs are for recovery.
HLogs are mainly archived by HBase. And the current live HLogs are
periodically rolled.
hbase.regionserver.logroll.period
hbase.master.logcleaner.ttl
hbase.master.logcleaner.plugins
Regards
Ram
On Fri, Mar 29, 2013 at 12:16 PM, Rishabh Agrawal