Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)
Do you know of a specific bug # in the JDK bug database that addresses this?
Cheers,
Stefan
From: Chris Collins
Date: Fri, 8 May 2009 20:34:21 -0700
To: core-user@hadoop.apache.org
Stefan, there was a nasty memory leak in 1.6.x before 1.6.0_10. It
manifested itself during major GC. We saw this on Linux and Solaris,
and it dramatically improved with an upgrade.
C
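Since the fix Chris mentions landed in 1.6.0_10, a quick sanity check is to parse the running VM's `java.version` property. This is a minimal sketch, not part of the thread; it assumes the classic Sun `1.6.0_<update>` version format of that era.

```java
// Sketch: check whether the running JVM is at or past 1.6.0_10, the
// build said to fix the major-GC memory leak discussed above.
public class JvmCheck {
    // Extract the update number from a version string like "1.6.0_10-b33".
    static int updateNumber(String version) {
        int underscore = version.indexOf('_');
        if (underscore < 0) return 0; // no update qualifier, e.g. "1.6.0"
        int end = underscore + 1;
        while (end < version.length() && Character.isDigit(version.charAt(end))) {
            end++;
        }
        return Integer.parseInt(version.substring(underscore + 1, end));
    }

    public static void main(String[] args) {
        String v = System.getProperty("java.version");
        boolean pastFix = !v.startsWith("1.6") || updateNumber(v) >= 10;
        System.out.println(v + (pastFix ? " (past the leaky builds)" : " (upgrade advised)"));
    }
}
```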
On May 8, 2009, at 6:12 PM, Stefan Will wrote:
Hi,
I just ran into something rather scary: One of my datanodes...
A couple of years back we did a lot of experimentation between Sun's
VM and JRockit. We had initially assumed that JRockit was going to
scream, since that's what the press were saying. In short, what we
discovered was that certain JDK library usage was a little bit faster
with JRockit, but
java version "1.6.0_07"
Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
Java HotSpot(TM) Server VM (build 10.0-b23, mixed mode)
I will try to stress test the memory.
-Sagar
Chris Collins wrote:
Was there anything mentioned as part of the tombstone message about
"problematic frame"? What Java are you using? There are a few
reasons for SIGBUS errors; one is illegal address alignment, but from
Java that's very unlikely. There were some issues with the native zip
library in older versions.
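The "problematic frame" Chris refers to lives in the `hs_err_pid*.log` file that HotSpot writes on a fatal signal; the marker line is "# Problematic frame:" and the frame itself follows on the next line. A small sketch (the sample lines are illustrative, not from Stefan's actual crash) of pulling it out:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: extract the native frame recorded after the "Problematic frame"
// marker in a HotSpot hs_err crash log. Sample content is illustrative.
public class TombstoneScan {
    static String problematicFrame(List<String> hsErrLines) {
        for (int i = 0; i < hsErrLines.size() - 1; i++) {
            if (hsErrLines.get(i).contains("Problematic frame")) {
                // The frame is on the following line, prefixed with "# ".
                return hsErrLines.get(i + 1).replaceFirst("^#\\s*", "").trim();
            }
        }
        return null; // no tombstone marker found
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
            "# A fatal error has been detected by the Java Runtime Environment:",
            "#  SIGBUS (0xa) at pc=0x00002aaaab12f3c0",
            "# Problematic frame:",
            "# C  [libzip.so+0x1f3c0]");
        System.out.println(problematicFrame(sample));
    }
}
```

A frame pointing into `libzip.so`, as in the sample, would match the native zip library issue mentioned above.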
Consider talking to Doug Cutting. He is playing with the idea of a
variant of JSON; I am sure he would love your help. Specifically, he
is looking at a coding scheme that is easy to read, does not duplicate
key names per record, and supports file splits.
C
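The core space saving in such a scheme can be sketched in a few lines: write the key names once as a header, then only the values per record, keeping one record per line so newlines remain usable as split points. This is a toy illustration of the idea, not Doug's actual format; all names in it are made up.

```java
import java.util.Arrays;
import java.util.List;

// Toy sketch of a keys-written-once record encoding: a single header line
// of field names, then one tab-separated value line per record.
public class HeaderEncoding {
    static String encode(List<String> keys, List<List<String>> records) {
        StringBuilder out = new StringBuilder();
        out.append(String.join("\t", keys)).append('\n'); // header, written once
        for (List<String> record : records) {
            out.append(String.join("\t", record)).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String encoded = encode(
            Arrays.asList("url", "score"),
            Arrays.asList(
                Arrays.asList("http://a.example", "0.9"),
                Arrays.asList("http://b.example", "0.4")));
        System.out.print(encoded);
    }
}
```

Unlike plain JSON, no record repeats the strings "url" and "score", yet each record still ends at a newline, which is what makes file splitting cheap.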
On Nov 1, 2008, at 8:20 PM, Zhou,
Sleepycat has a Java edition:
http://www.oracle.com/technology/products/berkeley-db/index.html
It has an "interesting" open source license. If you don't need to ship
it on an install disk, you're probably good to go with that too.
You could also consider Derby.
C
On Nov 1, 2008, at 7:49 PM, lam
Have you considered Amazon S3? I don't know how strict your security
requirements are. There are lots of companies using it for just
offsite data storage, and also with EC2.
C
On Jun 17, 2008, at 6:48 PM, Kenneth Miller wrote:
All,
I'm looking for a solution that would allow me to securely use...
On Jun 11, 2008, at 10:00 PM, [EMAIL PROTECTED] wrote:
This information can be found in
http://hadoop.apache.org/core/docs/current/hdfs_permissions_guide.html
Nicholas
----- Original Message -----
From: Chris Collins <[EMAIL PROTECTED]>
To: core-user@hadoop.apache.org
I am also interested in this option, since I will probably be
hacking at such a thing in the next few weeks.
I am also curious whether you can run MR jobs in-process rather than
launching each time. The scenario is when initialization takes just
way too long for a map-reduce shard to be
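For the in-process case, Hadoop of that era shipped a LocalJobRunner: setting the job tracker address to the literal string "local" made the job client run the whole job inside the submitting JVM instead of launching task processes. A minimal config fragment, assuming a 0.1x-style `mapred.job.tracker` property:

```xml
<!-- Sketch: run MR jobs in-process via LocalJobRunner rather than
     launching separate task JVMs each time. -->
<property>
  <name>mapred.job.tracker</name>
  <value>local</value>
</property>
```

This only helps the development scenario above; it does not distribute work, so it sidesteps rather than solves the per-launch initialization cost on a real cluster.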
Thanks Doug. Should this be added to the permissions doc or to the
FAQ? See you in Sonoma.
C
On Jun 11, 2008, at 9:15 PM, Doug Cutting wrote:
Chris Collins wrote:
You are referring to creating a directory in HDFS? Because if I am
user chris and HDFS only has user foo, then I can't...
I believe another emailer holds the answer, which was blindly dumb on my
part for not trying: adding a user in Unix and creating a
group that those users belong to.
Thanks
Chris
On Jun 11, 2008, at 5:36 PM, Allen Wittenauer wrote:
On 6/11/08 5:17 PM, "Chris Collins" <[EMAIL PROTECTED]> wrote:
The finer point to this is that in development you may be logged in as
user x and have a shared HDFS instance that a number of people are
using. In that mode it's not practical to sudo, as you have all your
development tools set up for user x. HDFS is set up with a single user;
what is the pro
more obvious thing).
Still, if anyone has an idea what happened to language ID and the Carrot2
stuff inside Nutch, that would be appreciated.
C
-----Original Message-----
From: chris collins [mailto:[EMAIL PROTECTED]
Sent: Sat 6/7/2008 10:54 AM
To: core-user@hadoop.apache.org
Subject: Couple of
Sorry in advance if these "challenges" are covered in a document somewhere.
I have set up Hadoop on a CentOS 64-bit Linux box. I have verified that it is
up and running only by seeing the Java processes running and confirming that I can
access it from the admin UI.
The Hadoop version is 0.17.0, but I also