)
at org.apache.hadoop.dfs.MiniDFSCluster.init(MiniDFSCluster.java:113)
--
B. Regards,
Edward J. Yoon
prerna.dyndns.org, port 22: connection refused
ssh failed for [EMAIL PROTECTED]
Also a warning comes that id_rsa_gsg-keypair is not accessible: No such
file or directory, though this file exists.
Thanks
Prerna
prerna.dyndns.org..
It says the connection to port 22 was refused.
Prerna
On Wed, Apr 16, 2008 at 9:17 PM, Edward J. Yoon [EMAIL PROTECTED] wrote:
I didn't try to run on cygwin, but you may need to install an ssh server.
http://pigtail.net/LRP/printsrv/cygwin-sshd.html
On Tue, Apr 15, 2008 at 11:54 PM, Edward J. Yoon [EMAIL PROTECTED] wrote:
Your ISP may be blocking access to critical ports behind their
routers. Can you access any ports on your router? You may want to try
setting up your router to forward some other port (80?) to your
server's port 22.
Thanks,
Edward.
On Fri
tmpfs 516924 0 516924 0% /dev/shm
Thanks.
what he's actually going to do with this data or when, but I found it useful.
Jeff
Edward J. Yoon wrote:
Hey Akshar!
Just FYI, See http://www.nabble.com/Django-experts-wanted-td17322054.html
-Edward
On Thu, May 22, 2008 at 6:24 AM, Akshar [EMAIL PROTECTED] wrote:
Interesting!!
BTW
#Example%3A+WordCount+v2.0
Arun
If you could share your experience with me, I would really appreciate it.
Thank you in advance,
/Taeho
--
Best regards,
Edward J. Yoon,
http://blog.udanax.org
Thanks for all advices. :)
Edward
On Fri, Jun 13, 2008 at 3:22 PM, lohit [EMAIL PROTECTED] wrote:
Check RandomWriter.java
look for reporter.setStatus("wrote record " + itemCount + ...
- Original Message
From: Edward J. Yoon [EMAIL PROTECTED]
To: core-user
Thanks for all the interest.
BTW, I can't handle too many people via private email, so please join this group.
http://groups.google.com/group/hrdfstore
Thanks, Edward
On Wed, Jul 2, 2008 at 3:06 PM, Edward J. Yoon [EMAIL PROTECTED] wrote:
Hello all,
The HRdfStore team is looking for a couple more
regards,
Edward J. Yoon,
http://blog.udanax.org
HQL will be integrated into the HRdfStore project.
See http://groups.google.com/group/hrdfstore
Thanks,
Edward J. Yoon
On 7/22/08, stack [EMAIL PROTECTED] wrote:
lucio Piccoli wrote:
Hi Tho Pham,
I have checked the HQL API, but the only reference I found was
org.apache.hadoop.hbase.hql
Thank you for all the interest.
BTW, please subscribe to the Hama developer mailing list instead of
sending mail to [EMAIL PROTECTED]
[EMAIL PROTECTED]
- Edward
On Thu, Jul 17, 2008 at 11:26 AM, Edward J. Yoon [EMAIL PROTECTED] wrote:
Hello all,
The Hama team which is trying to port typical
://jmvidal.cse.sc.edu
University of South Carolina http://www.multiagent.com
That's good. :)
Will this cause bigger problems later on? Or should I just ignore it?
I'm not sure, but I guess there is no problem.
Does anyone have some experience with that?
Regards, Edward J. Yoon
On Wed, Jul 23, 2008 at 11:05 PM, Jose Vidal [EMAIL PROTECTED] wrote:
Thanks! That worked.
Key1, Key3
Key1, Key4
Key2, Key3
Key2, Key4
Key3, Key4
It would be nice if someone could review my pseudo code of traditional
CF using cosine similarity.
http://wiki.apache.org/hama/TraditionalCollaborativeFiltering
Thanks.
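For reference, the cosine measure at the heart of that pseudo code can be sketched in plain Python (an illustrative stand-alone version, not the wiki's Hama/MapReduce formulation; the function name is my own):

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|); each row is a rating vector.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0  # treat an all-zero rating row as dissimilar
    return dot / (norm_u * norm_v)

# Two users with identical rating patterns score close to 1.0;
# orthogonal rating patterns score 0.0.
print(cosine_similarity([4, 0, 5], [4, 0, 5]))
print(cosine_similarity([1, 0], [0, 1]))
```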
--
Best regards, Edward J. Yoon
[EMAIL PROTECTED]
http://blog.udanax.org
Yes, but then, as the i grows, the task to workload ratio gets larger
and larger. Is that right?
-Edward
On Wed, Aug 13, 2008 at 9:23 PM, Amar Kamat [EMAIL PROTECTED] wrote:
Edward J. Yoon wrote:
Hi communities,
Do you have any idea how to get the pairs of all row key combinations
w/o
Is there another option? Or an efficient workload balancing algorithm
for this case? If so, please let me know.
Thanks, Ed
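One simple balancing idea, sketched here in Python with hypothetical names (nothing from Hama itself): enumerate the pair stream once and deal pairs out round-robin, so no task is stuck with the long early rows.

```python
from itertools import combinations

def balanced_pair_partitions(row_keys, num_tasks):
    # Splitting by "row i pairs with every j > i" gives the first rows far
    # more work; dealing the pair stream round-robin evens out the load.
    buckets = [[] for _ in range(num_tasks)]
    for n, pair in enumerate(combinations(row_keys, 2)):
        buckets[n % num_tasks].append(pair)
    return buckets

# Four keys yield 6 pairs; with 2 tasks, each bucket gets 3 pairs.
buckets = balanced_pair_partitions(["Key1", "Key2", "Key3", "Key4"], 2)
```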
On Wed, Aug 13, 2008 at 9:55 PM, Edward J. Yoon [EMAIL PROTECTED] wrote:
Yes, but then, as the i grows, the task to workload ratio gets larger
and larger. Is that right?
Thanks for the help in advance,
Regards,
thientd
that I know of are
DBSlayer,
http://code.nytimes.com/projects/dbslayer
and MySQL Proxy,
http://forge.mysql.com/wiki/MySQL_Proxy
I don't know of any formal comparisons between sharding traditional
database servers and distributed databases like HBase.
-Stuart
Is this the best way to be loading the database? What are some alternatives?
? What is the value of io.sort.factor?
Hadoop version?
--
View this message in context:
http://www.nabble.com/OutOfMemory-Error-tp19531174p19545298.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
Hi hwang.
See this video -
http://developer.yahoo.com/blogs/hadoop/2008/02/yahoo-worlds-largest-production-hadoop.html
On Wed, Sep 24, 2008 at 10:28 AM, 황인환 [EMAIL PROTECTED] wrote:
Hi.
I saw the below text in http
What kind of the real-time app?
On Wed, Sep 24, 2008 at 4:50 AM, Stas Oskin [EMAIL PROTECTED] wrote:
Hi.
Is it possible to use Hadoop for real-time app, in video processing field?
Regards.
want them to be more intelligent, read them more fairy tales. (Albert
Einstein)
Systems (I) Pvt. Ltd.
J-2, Block GP, Sector V, Salt Lake
Kolkata 700 091, India
Phone: +91 (0)33 23577531/32 x 107
http://www.connectivasystems.com
I think you can find that data at this page --
http://developer.yahoo.com/blogs/hadoop/2008/09/scaling_hadoop_to_4000_nodes_a.html
/Edward J. Yoon
On Mon, Oct 6, 2008 at 11:30 AM, 황인환 [EMAIL PROTECTED] wrote:
Hi.
I want to know the read and write throughput of HDFS.
Where could I get
while (values.hasNext()) {
  sum += (int) values.next().get();
}
FloatWritable badProb = new FloatWritable((float) sum / spamTotal);
output.collect(key, badProb);
}
}
Oh-ha, that's simple. :)
/Edward J. Yoon
On Tue, Oct 7, 2008 at 7:14 PM, Miles Osborne [EMAIL PROTECTED] wrote:
this is a well known problem. basically, you want to aggregate values
computed at some previous step.
--emit category,probability pairs and have the reducer simply sum-up
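Outside Hadoop, the pattern Miles describes, emit (category, value) pairs and have the reducer sum them, can be simulated in a few lines of Python (illustrative names, not the actual job):

```python
from collections import defaultdict

def reduce_sum(pairs):
    # Group mapper output by key and sum the values, as the reducer would.
    totals = defaultdict(int)
    for category, count in pairs:
        totals[category] += count
    return dict(totals)

mapped = [("spam", 2), ("ham", 1), ("spam", 3)]
totals = reduce_sum(mapped)  # {'spam': 5, 'ham': 1}
# Analogous to badProb above: a category's share of the total.
spam_prob = totals["spam"] / sum(totals.values())
```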
PROTECTED] wrote:
Edward J. Yoon wrote:
Hi all,
To reduce the effort of manual management for a planet-scale
mail service, I'm considering statistical spam filtering with the
SpamAssassin, Hadoop (distributed computing), and Hama (parallel matrix
computing) projects.
Please any
If we have a group blog of the hadoop user/dev group such as a Y!
developer network, we can easily share/introduce our experience and
outcomes from our research. So, I thought about a group blog, I guess
there are plenty of contributors.
What do you think about it?
Oh, great!! Now I know that. :)
On Thu, Oct 9, 2008 at 12:39 AM, Steve Loughran [EMAIL PROTECTED] wrote:
Edward J. Yoon wrote:
If we have a group blog of the hadoop user/dev group such as a Y!
developer network, we can easily share/introduce our experience and
outcomes from our research
But isn't a wiki a better tool to catch and
shape collective knowledge?
Lukas
On Wed, Oct 8, 2008 at 5:39 PM, Steve Loughran [EMAIL PROTECTED] wrote:
Edward J. Yoon wrote:
If we have a group blog of the hadoop user/dev group such as a Y!
developer network, we can easily share/introduce our
: java.io.IOException: error=12, Cannot
allocate memory
at java.lang.UNIXProcess.init(UNIXProcess.java:148)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
... 10 more
name) is deprecated.
Path.getParent() returns the parent of a path.
On Tue, Oct 14, 2008 at 7:30 PM, Tarandeep Singh [EMAIL PROTECTED] wrote:
Hi,
How can I get absolute path: /user/taran/logfiles/log.txt
from Path- new Path( logfiles/log.txt);
Thanks,
Taran
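If the goal is the fully qualified path rather than the parent, Hadoop's Path/FileSystem classes of that era have a makeQualified method that resolves a relative path against the working directory (from memory, so double-check the javadoc). The resolution rule itself is just this, sketched in Python with a made-up function name:

```python
from pathlib import PurePosixPath

def make_qualified(working_dir, path):
    # A relative path is resolved against the working directory;
    # an absolute path is returned unchanged.
    p = PurePosixPath(path)
    if p.is_absolute():
        return str(p)
    return str(PurePosixPath(working_dir) / p)

make_qualified("/user/taran", "logfiles/log.txt")  # "/user/taran/logfiles/log.txt"
```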
Anybody knows?
/Edward
On Fri, Oct 10, 2008 at 4:52 PM, Edward J. Yoon [EMAIL PROTECTED] wrote:
Hi,
To get the number of reduce_output_records, I wrote code like:
long rows = rJob.getCounters().findCounter(
    "org.apache.hadoop.mapred.Task$Counter", 8, "REDUCE_OUTPUT_RECORDS").getCounter();
Thanks.
--
View this message in context:
http://www.nabble.com/Career-Opportunity-in-Hadoop-tp20016797p20016797.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
minutes to execute, but the system scales for very large datasets and
result sets because it doesn't try to resolve queries in memory. We're
currently testing with more than 150MM triples and have been happy with
the results.
-Colin
Edward J. Yoon wrote:
Hi all,
This RDF proposal is a good
for whole internet graph
processing (see http://www.youtube.com/watch?v=BT-piFBP4fE). So, if Yahoo!
needs scaling algorithm for really large tasks, what do they use if not
Hadoop?
Regards,
Lukas
--
http://blog.lukas-vlcek.com/
Hi,
I'd like to find the counter of REDUCE_OUTPUT_RECORDS after the job is done.
BTW, org.apache.hadoop.mapred.Task.Counter is not visible, and
findCounter(String, int, String) is deprecated.
What is the best code?
--
Best regards, Edward J. Yoon @ NHN, corp.
[EMAIL PROTECTED]
http://blog.udanax.org
Message-
From: [EMAIL PROTECTED] On Behalf Of Edward J. Yoon
Sent: Thursday, October 09, 2008 2:07 AM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException: error=12,
Cannot allocate memory
Thanks Alexander!!
On Thu, Oct 9, 2008 at 4:49 PM, Alexander
to a relational database might be more application dependent but still
possible.
--
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org
You can kill jobs using the job command.
./bin/hadoop job -kill job-id
/Edward
On Tue, Jan 13, 2009 at 11:10 AM, Samuel Guo guosi...@gmail.com wrote:
Hi all,
Is there any method that I can use to stop or suspend a running job in
Hadoop?
Regards,
Samuel
I don't know of its current status.
- Andy
From: Amandeep Khurana
Subject: RDF store over HDFS/HBase
Has anyone explored using HDFS/HBase as the underlying
storage for an RDF store?
Does anyone have an input formatter for bzip2?
Hi,
I wanted to read the data in EUC-KR format using UTF-8, so I set a up
a JVM parameter -Dfile.encoding=EUC-KR in the HADOOP_OPTS. But, it did
not work. Is there any other method than coding my own input format?
My typo: I was using TextInputFormat (UTF-8).
On Thu, Apr 16, 2009 at 4:18 PM, Edward J. Yoon edwardy...@apache.org wrote:
Hi,
I wanted to read the data in EUC-KR format using UTF-8, so I set a up
a JVM parameter -Dfile.encoding=EUC-KR in the HADOOP_OPTS. But, it did
not work. Is there any other
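For what it's worth, -Dfile.encoding changes the JVM default charset, but Hadoop's Text/TextInputFormat decodes bytes as UTF-8 regardless, which would explain the flag having no effect (my reading of the situation, not a confirmed diagnosis). A custom record reader would transcode explicitly, and the core of that is one line, shown here in Python:

```python
def euckr_to_utf8(raw_bytes):
    # Decode EUC-KR encoded bytes, then re-encode the text as UTF-8.
    return raw_bytes.decode("euc-kr").encode("utf-8")

euckr_to_utf8("한글".encode("euc-kr"))  # UTF-8 bytes for the same text
```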
. The image processing code is
written in Matlab. If I invoke that code from a shell script and then use
that shell script within Hadoop streaming, will that work? Has anyone done
something along these lines?
Many thanks,
--ST.
use Hadoop
framework to process the data in parallel fashion. One Matlab instance
handling few hundred images (as a mapper) and have hundreds of such
instances and then combine (reducer) the o/p of each instance.
On Tue, Apr 21, 2009 at 5:06 PM, Edward J. Yoon edwardy...@apache.orgwrote:
Hi
jobs with 3 nodes: 1 server and 2 slaves
Please help me!
Thanks.
Best,
Nguyen.
How do you add input paths?
On Wed, Apr 22, 2009 at 5:09 PM, nguyenhuynh.mr
nguyenhuynh...@gmail.com wrote:
Edward J. Yoon wrote:
Hi,
In that case, the atomic unit of split is a file. So, you need to
increase the number of files, or use TextInputFormat as below
, args[1]);
c.setMapperClass(InnerMap.class);
c.setNumReduceTasks(0);
c.setOutputFormat(NullOutputFormat.class);
return c;
}
On Thu, Apr 23, 2009 at 6:19 PM, nguyenhuynh.mr
nguyenhuynh...@gmail.com wrote:
Edward J. Yoon wrote:
How do you to add input paths?
On Wed, Apr 22
cluster is restarted. But otherwise,
how can these be cleaned up without having to restart the cluster?
Conf parameter keep.failed.task.files is set to false in our case.
Many Thanks
Sandhya
.
System.out.println(line);
}
}
If I run this code nothing shows up. But if I execute the command (hadoop jar
GraphClean args) from the command line it works fine. I am using Hadoop
0.19.0.
Thanks,
Razen
- robert
should be asking on
hive-u...@hadoop.apache.org
.
-- Owen
--
View this message in context:
http://www.nabble.com/How-to-Rename---Create-DB-Table-in-Hadoop--tp23629956p23637131.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
--
Best Regards, Edward J. Yoon
have one replication but
the output from reduce always has replication 3. Can anyone please tell me
why this is so?
Thanks Regards
Aseem Puri
of
scientific application running on large clusters using Hadoop.
Best Regards
Guillaume
do you think?
What do you think about another new computation framework on HDFS?
On Mon, Jun 22, 2009 at 3:50 PM, Edward J. Yoon edwardy...@apache.org wrote:
http://googleresearch.blogspot.com/2009/06/large-scale-graph-computing-at-google.html
-- It sounds like Pregel is a computing framework based
. Yoon wrote:
What do you think about another new computation framework on HDFS?
On Mon, Jun 22, 2009 at 3:50 PM, Edward J. Yoon edwardy...@apache.org
wrote:
http://googleresearch.blogspot.com/2009/06/large-scale-graph-computing-at-google.html
-- It sounds like Pregel seems, a computing
Hi,
I always get the 'could not lock file' error when editing/creating
pages - Page could not get locked. Missing 'current' file?
My ID is 'udanax'. Can someone help me?