or trying to re-insert returns the above error, both through the API
(using HTable) and through the HBase shell.
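For reference, the failing inserts look more or less like this (a minimal
sketch of what our code does; table, row, and column names are
placeholders, and the exact HTable calls are written from memory):

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HTable;
  import org.apache.hadoop.io.Text;

  public class InsertTest {
    public static void main(String[] args) throws IOException {
      HTable table = new HTable(new HBaseConfiguration(), new Text("mytable"));
      // One of the rows that keeps failing (name is made up).
      long lockid = table.startUpdate(new Text("row123"));
      table.put(lockid, new Text("contents:A"), "some value".getBytes());
      table.commit(lockid); // the error comes back on the commit
    }
  }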
I tried a restart of Hadoop/HBase to no avail. How do I fix this problem?
Any help is appreciated.
Best regards,
Lars
Lars George, CTO
WorldLingo
to run HBase
TRUNK on Hadoop 0.15.x).
Further comments inline below:
Lars George wrote:
Hi Stack,
Yes, it happens every time I insert particular rows. Before, it would
fail every now and then, but now that all the good rows are inserted, I
am stuck with the ones that do not insert. And I am
future? Just asking.
Lars
stack wrote:
Lars George wrote:
Hi Stack,
Can and will do, but does that make the error go away, i.e.
automagically fix it? Or is it broken and nothing can be done about
it now?
Your current install is broken. We could try spending time getting it
back
Hi Stack,
Yes, I read that on Nabble today too, that is why I was asking actually.
What is the timeline for this? I will reply to Jim's message too.
Thanks for pointing that out.
Lars
stack wrote:
Regarding your having to do this again in the near future, hopefully
not...
recreatable only if we process them again, which is costly. So I am in
need of a migration path.
For me this is definitely a +1 for a migration tool.
Sorry to be a hassle like this. :\
Lars
Lars George, CTO
WorldLingo
Jim Kellerman wrote:
Do you have data stored in HBase that you cannot recreate
is the one that gets into HDFS.
Let us know how your project goes... :-)
Hi Taeho,
Fortunately for us, we don't have a need for storing millions of files in
HDFS just yet. We are adding only a few thousand files a day, so that gives
us a handful of days. And we've been using Hadoop for more than a year,
and its reliability has been superb.
Sounds great.
This is
... :-)
On 1/6/08, Lars George [EMAIL PROTECTED] wrote:
Ted,
In an absolute worst case scenario. I know this is beta and all, but I
am starting to use HBase in a production environment and need to limit
downtime (which is what this architecture promises) to a minimum - none
at all if I can.
All in all
Hi,
Maybe someone here can help me with a rather noob question. Where do I
have to put my custom jar to run it as a map/reduce job? Anywhere, as
long as I then specify it in the HADOOP_CLASSPATH variable in
hadoop-env.sh?
Also, since I am using the Hadoop API already from our server code, it
seems
Xerces etc. I guess it is better if I add everything non-Hadoop into the
job jar's lib directory?
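In other words, something like this layout (all names are made up),
since as far as I understand it, "hadoop jar" unpacks the job jar and
puts everything under its lib/ directory on the task classpath:

  myjob.jar
    com/example/MyAnalyzeJob.class
    lib/xercesImpl.jar
    lib/our-other-deps.jar

and then submit it with:

  bin/hadoop jar myjob.jar com.example.MyAnalyzeJob <input> <output>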
Thanks again for the help,
Lars
Arun C Murthy wrote:
On Mon, Jan 07, 2008 at 08:24:36AM -0800, Lars George wrote:
Hi,
Maybe someone here can help me with a rather noob question. Where do I
On 1/7/08 12:06 PM, Lars George [EMAIL PROTECTED] wrote:
Ted,
Means going the HADOOP_CLASSPATH route, i.e. creating a separate
directory for those shared jars and then setting it once in
hadoop-env.sh. I think this will work for me too; I am in the process of
setting up a separate CONF_DIR
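Concretely, something like this is what I have in mind (the paths are
just examples):

  # in conf/hadoop-env.sh
  export HADOOP_CLASSPATH=/opt/shared-jars/xercesImpl.jar:/opt/shared-jars/our-code.jar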
and the column contents:A (for example
row123.contents:A), analyze the data and then write out the result to
row123.contents:B - is that OK to do?
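In code, what I have in mind per row is roughly this (a sketch;
analyze() stands in for our own logic, and the HTable calls are from
memory):

  // Read contents:A for a row, analyze it, write the result to contents:B.
  void process(HTable table, Text row) throws IOException {
    byte[] input = table.get(row, new Text("contents:A"));
    byte[] result = analyze(input); // placeholder for our analysis step
    long lockid = table.startUpdate(row);
    table.put(lockid, new Text("contents:B"), result);
    table.commit(lockid);
  }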
Thanks,
Lars
---
Lars George, CTO
WorldLingo
Hi Stack,
First, if I have 40 servers with about 32 regions per server, what
should I set the number of mappers and reducers to?
Coarsely, make as many maps as you have total regions (assuming
TableInputFormat is in the mix; it splits on table regions) and make
the number of reducers equal to the
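(By that rule of thumb, 40 servers with about 32 regions each would come
to 40 * 32 = 1280 map tasks.)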
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:212)
    at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2043)
I am an absolute noobie at all of this. Could anyone help me understand
what went wrong, and what I have to do to get around it?
Thanks,
Lars
---
Lars George, CTO
WorldLingo