Thanks Yifeng. Well-thought-out input :) and it works.
On Sun, Apr 29, 2012 at 1:43 PM, Yifeng Jiang uprushwo...@gmail.com wrote:
Hi Sambit,
Are you specifying a local file system path on the command line?
Before invoking importtsv, you first need to copy your TSV files to
HDFS.
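For example, something like the following (a sketch only; the paths, table name, and column mapping are hypothetical placeholders):

```shell
# 1. Copy the local TSV file into HDFS first
hadoop fs -mkdir /user/hadoop/tsv-input
hadoop fs -put /local/path/data.tsv /user/hadoop/tsv-input/

# 2. Then point importtsv at the HDFS path, not the local one
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1,cf:col2 \
  mytable /user/hadoop/tsv-input
```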
Hi,
We are still experiencing 40-60 minutes of task failures before our
HBaseStorage jobs run, but we think we've narrowed the problem down to a
specific ZooKeeper issue.
The HBaseStorage map task only works when it lands on a machine that
is actually running a ZooKeeper server as part of the
It means that the Java runtime can't find the
org/apache/hadoop/hbase/filter/FilterBase class. You have to add the
HBase jar to your classpath.
regards!
Yong
On Wed, May 2, 2012 at 12:12 PM, cldo datk...@gmail.com wrote:
I want to write a custom filter for HBase.
I created a jar file with Eclipse and copied it to the server
You have accidentally used ';' as a path separator; you should use ':'
instead (without the quotes).
try this:
export HBASE_CLASSPATH=/cldo/hadoop/conf:/cldo/customfilter.jar
On Wed, May 2, 2012 at 2:45 PM, yonghu yongyong...@gmail.com wrote:
It means that the Java runtime can't find
We had moved our machines from one location to another.
After bringing the system back up, when we started our Hadoop and HBase clusters,
we realized that the clocks were out of sync and the region servers would not start up.
After syncing the clocks, when we restarted the cluster, we are
Maybe after half an hour the timeout monitor will try to assign it. It is an
internal thread that the system uses.
But I still suspect the ZooKeeper data. This problem mainly happens if the
ZooKeeper node for META is still present.
Regards
Ram
-Original Message-
From: Srikanth P.
Is there a way to add multiple filter lists to a Scan object?
i.e.,
scan.setFilter(Filterlist1);
scan.setFilter(Filterlist2);
scan.setFilter(Filterlist3);
where FilterList1 and FilterList2 are FilterList objects, each containing a
combination of filters like ColumnPrefixFilter and ValueFilter.
Or in
Hi,
A FilterList is what you need.
You can call scan.setFilter() only once, but you can pass it a FilterList
which contains numerous filters (and other FilterLists)...
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FilterList.html
On Wednesday, May 2, 2012 at 13:22 +, Davis wrote:
FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ONE);
SingleColumnValueFilter filter1 = new SingleColumnValueFilter(
cf,
column,
CompareOp.EQUAL,
Bytes.toBytes("my value")
);
list.addFilter(filter1);
SingleColumnValueFilter filter2 = new
Guys, I understand this... but my question is that I want to add multiple FilterLists...
Something like this:
FilterList list1 = new FilterList(FilterList.Operator.MUST_PASS_ONE);
SingleColumnValueFilter filter1 = new SingleColumnValueFilter(
cf,
column,
CompareOp.EQUAL,
You can add a filter list to a filter list.
FilterList fl = new FilterList();
FilterList fl1 = new FilterList();
fl1.addFilter(fl);
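Putting the two answers together, here is a sketch of how several filter lists could be nested under a single top-level FilterList and set on the scan exactly once (the column family, qualifiers, and values are hypothetical placeholders):

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiFilterListScan {
    public static Scan buildScan() {
        // Each inner list ORs its own filters together
        FilterList list1 = new FilterList(FilterList.Operator.MUST_PASS_ONE);
        list1.addFilter(new SingleColumnValueFilter(
                Bytes.toBytes("cf"), Bytes.toBytes("col1"),
                CompareOp.EQUAL, Bytes.toBytes("value1")));

        FilterList list2 = new FilterList(FilterList.Operator.MUST_PASS_ONE);
        list2.addFilter(new SingleColumnValueFilter(
                Bytes.toBytes("cf"), Bytes.toBytes("col2"),
                CompareOp.EQUAL, Bytes.toBytes("value2")));

        // A FilterList is itself a Filter, so lists can nest:
        // the outer list ANDs the two inner lists together
        FilterList all = new FilterList(FilterList.Operator.MUST_PASS_ALL);
        all.addFilter(list1);
        all.addFilter(list2);

        // setFilter() is called exactly once, with the single top-level list
        Scan scan = new Scan();
        scan.setFilter(all);
        return scan;
    }
}
```

Because each successive scan.setFilter() call replaces the previous filter, nesting under one outer list is the only way to get all three lists applied.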
On Wed, May 2, 2012 at 7:27 PM, Davis davisabra...@gmail.com wrote:
Guys i understand this... But my question i want
Thanks for the suggestions.
I restarted the cluster again. Now things are fine.
I think the mistake I may have made earlier is that I did not restart the HDFS
nodes (name node + datanodes), but just the HBase nodes (master, zookeeper and
region servers).
Now, I restarted all, after cleaning the
FilterList listOfFilters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
FilterList listOfFilters1 = new FilterList(FilterList.Operator.MUST_PASS_ALL);
FilterList listOfFilters2 = new FilterList(FilterList.Operator.MUST_PASS_ALL);
SingleColumnValueFilter SingleFilter1 = new
I'm using the script from https://issues.apache.org/jira/browse/HBASE-1621 to
merge some regions on a test cluster running vanilla Apache Hadoop 1.0.2 and
HBase 0.92.1, but I'm not having any luck. After updating a few API calls the
script now runs to completion, but it breaks the tables I'm
Hello --
I've got a problem where the RegionServers try to connect to localhost for
the Master, because that's what's being reported to them by ZooKeeper.
Since they are not on the same machine, the requests fail:
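One common cause (an assumption here, not confirmed from your log): the master's /etc/hosts maps the machine's own hostname to the loopback address, so localhost is the address it registers in ZooKeeper. A sketch of the fix, with a hypothetical hostname and IP:

```text
# /etc/hosts on the master
# Broken: the hostname resolves to loopback
#   127.0.0.1   localhost master-host
# Fixed: keep loopback for localhost only; map the hostname to the LAN IP
127.0.0.1     localhost
192.168.1.10  master-host
```

After correcting the entry, restart the master so it re-registers its real address in ZooKeeper.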
2012-05-01 18:01:27,111 INFO
After looking at this again, the data is intact (I can count/scan all rows),
and the new regions are loaded on the different region servers, but the web UI
doesn't show any regions for the table and warnings appear in the log:
2012-05-02 14:51:43,847 WARN
I think the answer to this is no, but I am hoping someone with more
experience can confirm this… we are on hbase 0.90.4 (from cdh3u2). Some of our
storefiles have grown into the 3-4GB range (we have 100GB max region size).
Ignoring compactions, do large storefiles like this have a negative
No, it's a direct read using a block index which is in memory.
J-D
On Wed, May 2, 2012 at 2:29 PM, Paul Mackles pmack...@adobe.com wrote:
I think the answer to this is no, but I am hoping someone with more
experience can confirm this… we are on hbase 0.90.4 (from cdh3u2). Some of our
re: with lackluster performance for random reads
You want to be on CDH3u3 for sure if you want to boost random read
performance.
On 5/2/12 5:29 PM, Paul Mackles pmack...@adobe.com wrote:
I think the answer to this is no, but I am hoping someone with more
experience can confirm this… we are
Thanks for the tip Doug. Does that boost come largely from the HDFS
improvements?
On 5/2/12 7:52 PM, Doug Meil doug.m...@explorysmedical.com wrote:
re: with lackluster performance for random reads
You want to be on CDH3u3 for sure if you want to boost random read
performance.
On 5/2/12
OK, I've got an MR job whose output I am trying to import into HBase. It works
with small input: it loads within seconds and the command line returns.
When I add more input to the MapReduce job, the completebulkload hangs on
the command line, never returning.
When I run a large completebulkload it keeps trying to copy
On Wed, May 2, 2012 at 6:00 PM, Paul Mackles pmack...@adobe.com wrote:
Thanks for the tip Doug. Does that boost come largely from the HDFS
improvements?
Yeah, unless you install HBase 0.92.x (or, if you want more
improvement, install a 0.94.x RC).
St.Ack
On Wed, May 2, 2012 at 1:31 PM, Karl Kuntz kku...@tradebotsystems.com wrote:
After looking at this again, the data is intact (I can count/scan all rows),
and the new regions are loaded on the different region servers, but the web
UI doesn't show any regions for the table and warnings appear
Looks like it was a timeout issue; I saw the bonding log messages more than
once.
Does anyone know what the timeout setting name is for completebulkload?
Billy
Billy Pearson sa...@pearsonwholesale.com
wrote in message news:jnslmk$9ic$1...@dough.gmane.org...
ok I got a MR job I am trying