Chukwa documentation

2009-02-03 Thread xavier.quintuna
Hi Everybody,

I don't know if there is a mailing list for Chukwa, so I apologize in
advance if this is not the right place to ask my questions.

I have the following questions and comments:

Configuring the collector and the agent was simple.
However, there are other features that are not documented at all, such as:
- Torque (do I have to install Torque first? Yes? No? And why?)
- the database (do I have to have a DB?)
- queueinfo.properties (what is it, and what kind of information does it provide?)
- and more things that I need to dig into the code to understand.
Could somebody update the Chukwa documentation?


I hope I can get some help

Xavier


Fuse-j-hadoopfs

2008-07-15 Thread xavier.quintuna
Hi everybody, 

I was wondering whether the people maintaining fuse-j-hadoopfs are
working on supporting the HDFS permissions?

Another thing, for the people working on fuse_dfs: would it be possible
to clean up the fuse_dfs_wrapper.sh script and the Makefiles?
There are a lot of Facebook-specific things in there, so it would be
nice if the scripts were cleaner and more generic.
 
Thanks for the answer,

Xavier
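
For reference, this is roughly what "HDFS permissions" look like from the
Java API side, which is what a FUSE layer would have to map onto POSIX
owner/group/mode attributes. This is only a sketch; the path, owner and
group below are made-up placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class HdfsPermissionSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/user/xavier/some_file");          // placeholder path

        // Attributes a FUSE layer would surface through stat(): owner, group, mode.
        FileStatus st = fs.getFileStatus(p);
        System.out.println(st.getOwner() + ":" + st.getGroup() + " " + st.getPermission());

        // The chmod/chown equivalents.
        fs.setPermission(p, new FsPermission((short) 0644));  // rw-r--r--
        fs.setOwner(p, "xavier", "users");                    // placeholder owner/group
    }
}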


Is it possible to access the HDFS using webservices?

2008-06-30 Thread xavier.quintuna
Hi everybody, 

I'm trying to access HDFS using web services. The idea is that a web
service client can access HDFS using SOAP or REST, and it has to
support all the HDFS shell commands.

Is there any work going on around this?

I really appreciate any feedback,

Xavier
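
For what it's worth, one rough shape such a bridge could take is a small
REST front end that forwards requests to the HDFS Java API. This is
purely a sketch, not an existing Hadoop feature: the /ls endpoint, the
port and the query format are invented here, and it uses the JDK's
built-in com.sun.net.httpserver server. SOAP or the remaining shell
commands would follow the same pattern.

import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical REST bridge: GET /ls?path=/user/xavier returns a plain-text listing.
public class HdfsRestBridge {
    public static void main(String[] args) throws Exception {
        final FileSystem fs = FileSystem.get(new Configuration());
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/ls", new HttpHandler() {
            public void handle(HttpExchange ex) throws java.io.IOException {
                // Query string is assumed to be just "path=<hdfs path>".
                String path = ex.getRequestURI().getQuery().replaceFirst("^path=", "");
                StringBuilder body = new StringBuilder();
                FileStatus[] entries = fs.listStatus(new Path(path));
                if (entries != null) {
                    for (FileStatus st : entries) {
                        body.append(st.getPath().toString()).append("\n");
                    }
                }
                byte[] bytes = body.toString().getBytes();
                ex.sendResponseHeaders(200, bytes.length);
                OutputStream out = ex.getResponseBody();
                out.write(bytes);
                out.close();
            }
        });
        server.start();  // other shell commands (put, get, rm, ...) would get their own contexts
    }
}

A client would then do something like GET http://localhost:8080/ls?path=/user/xavier
and receive the directory listing as plain text.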




Fuse-j-hadoopfs

2008-04-08 Thread xavier.quintuna
Hi everybody,

I have a question about fuse-j-hadoopfs. Does it handle the Hadoop
permissions?

I'm using Hadoop 0.16.3.

Thanks

X


HDFS Vs GlusterFs

2008-03-14 Thread xavier.quintuna
Hi Everyone,

I have a question to the mailing list related to storage:

What are the advantages and disadvantages of HDFS compared to
GlusterFS?

Is Hadoop able to run over InfiniBand?

I appreciate any comments

Xavier



RE: How to compile fuse-dfs

2008-03-11 Thread xavier.quintuna
Yes, all of those commands work fine, but I need to copy a file to HDFS.


Thanks again.

Xavier

-Original Message-
From: Pete Wyckoff [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 11, 2008 11:26 AM
To: core-user@hadoop.apache.org
Subject: Re: How to compile fuse-dfs


But, to be clear, you can do  mv, rm, mkdir, rmdir.


On 3/11/08 10:24 AM, "[EMAIL PROTECTED]"
<[EMAIL PROTECTED]> wrote:

> Thanks Pete. I'll be waiting for 0.17 then



RE: How to compile fuse-dfs

2008-03-11 Thread xavier.quintuna

Thanks Pete. I'll be waiting for 0.17 then

Xavier
 

-Original Message-
From: Pete Wyckoff [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 11, 2008 11:16 AM
To: QUINTUNA Xavier RD-ILAB-SSF; core-user@hadoop.apache.org
Subject: Re: How to compile fuse-dfs


Oh sorry xavier - you can't write to DFS - although you shouldn't be
getting an exception. It should return an IO error but create an empty
file.

Fuse_dfs relies on appends working in DFS, and since this didn't make it
into 0.16, we'll have to wait for 0.17 for this to work.

I will look at this error though.

-- pete


On 3/11/08 10:03 AM, "[EMAIL PROTECTED]"
<[EMAIL PROTECTED]> wrote:

> dfs,138549488,'FLV',4096) Exception in thread "Thread-7"
> java.nio.BufferOverflowException
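
To make concrete what "appends working in DFS" means on the Java side:
later Hadoop releases expose a FileSystem.append call, which is roughly
what a writable FUSE mount needs underneath. A minimal sketch (the file
name is a placeholder, and this does not compile against the 0.16 line
discussed here):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // append() requires the file to already exist; the path is a placeholder.
        FSDataOutputStream out = fs.append(new Path("/user/xavier/existing.log"));
        out.write("one more record\n".getBytes());
        out.close();
    }
}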



RE: How to compile fuse-dfs

2008-03-11 Thread xavier.quintuna
Hi Pete, 

I was able to compile fuse_dfs.c. Thanks. But now I have another
question for you.
I'm able to read a file, but I'm not able to copy a file to HDFS. I
wonder if you have solved this problem, and how?
In my logs I have this message:

hdfsWrite(dfs,138549488,'FLV',4096) Exception in thread "Thread-7" java.nio.BufferOverflowException
        at java.nio.Buffer.nextPutIndex(Buffer.java:425)
        at java.nio.HeapByteBuffer.putInt(HeapByteBuffer.java:347)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$Packet.writeInt(DFSClient.java:1537)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:2128)
        at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:141)
        at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:100)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:86)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:41)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
Call to org.apache.hadoop.fs.FSDataOutputStream::write failed!
./fuse_dfs[14561]: ERROR: fuse problem - could not write all the bytes for /user/xavier/movie/le_plat.flv -1!=4096 fuse_dfs.c:702
08/03/11 10:57:50 WARN fs.DFSClient: DataStreamer Exception: java.io.IOException: BlockSize 0 is smaller than data size. Offset of packet in block 0
Aborting file /user/xavier/movie/le_plat.flv
---
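
As a side note, the same copy can be tried outside FUSE with a few lines
against the Java FileSystem API (a rough sketch; the local file name is
a placeholder and the HDFS path is the one from the log above). If this
works while the mount does not, it points at the fuse_dfs layer rather
than at the cluster itself.

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCopyIn {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());  // uses the Hadoop config on the classpath

        InputStream in = new FileInputStream("le_plat.flv");  // local source (placeholder)
        FSDataOutputStream out = fs.create(new Path("/user/xavier/movie/le_plat.flv"));

        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);  // goes through the same FSOutputSummer path as the trace above
        }
        out.close();
        in.close();
    }
}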

I really appreciate your help

Xavier


-Original Message-
From: Pete Wyckoff [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 10, 2008 7:43 PM
To: core-user@hadoop.apache.org
Subject: Re: How to compile fuse-dfs


Hi Xavier,

If you run ./bootstrap.sh, does it not create a Makefile for you? There
is a bug in the Makefile that hardcodes it to amd64. I will look at
this.

What kernel are you using and what HW?

--pete


On 3/10/08 2:23 PM, "[EMAIL PROTECTED]"
<[EMAIL PROTECTED]> wrote:

> Hi everybody,
> 
> I'm trying to compile fuse-dfs but I'm having problems. I don't have a
> lot of experience with C++.
> I would like to know:
> Is there a clear README file with instructions to compile and install
> fuse-dfs?
> Do I need to replace fuse_dfs.c with the one in
> fuse-dfs/src/fuse_dfs.c?
> Do I need to set different flags if I'm using an i386 or x86_64 machine?
> Which ones, and where?
> Which makefile do I need to use to compile the code?
> 
> 
> 
> Thanks
> 
> Xavier
> 
> 
> 



How to compile fuse-dfs

2008-03-10 Thread xavier.quintuna
Hi everybody,

I'm trying to compile fuse-dfs but I'm having problems. I don't have a
lot of experience with C++.
I would like to know:
Is there a clear README file with instructions to compile and install
fuse-dfs?
Do I need to replace fuse_dfs.c with the one in
fuse-dfs/src/fuse_dfs.c?
Do I need to set different flags if I'm using an i386 or x86_64 machine?
Which ones, and where?
Which makefile do I need to use to compile the code?



Thanks 

Xavier





RE: Hadoop summit / workshop at Yahoo!

2008-02-22 Thread xavier.quintuna
I agree, I would love to be part of this, but the rooms are full.
Xavier

-Original Message-
From: Stefan Groschupf [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 22, 2008 11:04 AM
To: core-user@hadoop.apache.org
Subject: Re: Hadoop summit / workshop at Yahoo!

Puhh, 2 days and it is full?
Does Yahoo have no bigger rooms than one for just 100 people?



On Feb 20, 2008, at 12:10 PM, Ajay Anand wrote:

> The registration page for the Hadoop summit is now up:
> http://developer.yahoo.com/hadoop/summit/
>
> Space is limited, so please sign up early if you are interested in 
> attending.
>
> About the summit:
> Yahoo! is hosting the first summit on Apache Hadoop on March 25th in 
> Sunnyvale. The summit is sponsored by the Computing Community 
> Consortium
> (CCC) and brings together leaders from the Hadoop developer and user 
> communities. The speakers will cover topics in the areas of extensions
> being developed for Hadoop, case studies of applications being built
> and deployed on Hadoop, and a discussion on future directions for the 
> platform.
>
> Agenda:
> 8:30-8:55 Breakfast
> 8:55-9:00 Welcome to Yahoo! & Logistics - Ajay Anand, Yahoo!
> 9:00-9:30 Hadoop Overview - Doug Cutting / Eric Baldeschwieler, Yahoo!
> 9:30-10:00 Pig - Chris Olston, Yahoo!
> 10:00-10:30 JAQL - Kevin Beyer, IBM
> 10:30-10:45 Break
> 10:45-11:15 DryadLINQ - Michael Isard, Microsoft
> 11:15-11:45 Monitoring Hadoop using X-Trace - Andy Konwinski and Matei Zaharia, UC Berkeley
> 11:45-12:15 Zookeeper - Ben Reed, Yahoo!
> 12:15-1:15 Lunch
> 1:15-1:45 Hbase - Michael Stack, Powerset
> 1:45-2:15 Hbase App - Bryan Duxbury, Rapleaf
> 2:15-2:45 Hive - Joydeep Sen Sarma, Facebook
> 2:45-3:00 Break
> 3:00-3:20 Building Ground Models of Southern California - Steve Schossler, David O'Hallaron, Intel / CMU
> 3:20-3:40 Online search for engineering design content - Mike Haley, Autodesk
> 3:40-4:00 Yahoo - Webmap - Arnab Bhattacharjee, Yahoo!
> 4:00-4:30 Natural language Processing - Jimmy Lin, U of Maryland / 
> Christophe Bisciglia, Google
> 4:30-4:45 Break
> 4:45-5:30 Panel on future directions
> 5:30-7:00 Happy hour
>
> Look forward to seeing you there!
> Ajay
>
> -Original Message-
> From: Bradford Stephens [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, February 20, 2008 9:17 AM
> To: core-user@hadoop.apache.org
> Subject: Re: Hadoop summit / workshop at Yahoo!
>
> Hrm yes, I'd like to make a visit as well :)
>
> On Feb 20, 2008 8:05 AM, C G <[EMAIL PROTECTED]> wrote:
>>  Hey All:
>>
>>  Is this going forward?  I'd like to make plans to attend and the
> sooner I can get plane tickets the happier the bean counters will be 
> :-).
>>
>>  Thx,
>>  C G
>>
>>> Ajay Anand wrote:

 Yahoo plans to host a summit / workshop on Apache Hadoop at our
 Sunnyvale campus on March 25th. Given the interest we are seeing from
 developers in a broad range of organizations, this seems like a good
 time to get together and brief each other on the progress that is
 being made.



 We would like to cover topics in the areas of extensions being
 developed for Hadoop, innovative applications being built and
 deployed on Hadoop, and future extensions to the platform. Some of
 the speakers who have already committed to present are from
 organizations such as IBM, Intel, Carnegie Mellon University, UC
 Berkeley, Facebook and Yahoo!, and we are actively recruiting other
 leaders in the space.



 If you have an innovative application you would like to talk about,
 please let us know. Although there are limitations on the amount of
 time we have, we would love to hear from you. You can contact me at
 [EMAIL PROTECTED]



 Thanks and looking forward to hearing about your cool apps,

 Ajay






~~~
101tec Inc.
Menlo Park, California, USA
http://www.101tec.com




RE: How to split the hdfs in different subgroups

2008-02-22 Thread xavier.quintuna
I read the docs about rack awareness, but my issue is how the client can
pick specific datanodes, located in a specific rack, to write the block
to. The idea is that the client is able to write the block to two
separate groups of datanodes in the same HDFS. For instance:
bin/hadoop dfs -put   -location 

Xavier



-Original Message-
From: Raghu Angadi [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 22, 2008 10:59 AM
To: core-user@hadoop.apache.org
Subject: Re: How to split the hdfs in different subgroups


You could probably treat these two groups as different "racks". You can
read about rack awareness in
http://hadoop.apache.org/core/docs/r0.16.0/hdfs_user_guide.html and
follow the links from there for more information regarding how to
configure it, etc.

Raghu.

[EMAIL PROTECTED] wrote:
> Hi There,
> 
> I have an HDFS cluster and I want to split it into two groups. Each
> group has a set of datanodes. I want my client (the hdfs shell) to be
> able to write to only one group. One group is in one rack and the
> other group is in the other rack. Replication between racks is
> allowed, but the client has to read and write from one specific group.
> Is it possible?
> 
> I appreciate any help
> 
> Xavier
> 
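
To sketch the kind of configuration Raghu is pointing at (the class
name, rack labels and host-naming rule below are invented for
illustration, and the exact property names differ between Hadoop
versions): a custom topology mapper can assign each datanode host to one
of two "racks", which gives HDFS the two groups for replica placement.
Note that rack awareness steers where replicas go; it does not by itself
restrict which group a given client is allowed to write to.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.net.DNSToSwitchMapping;

// Invented example: map datanode hosts to one of two "racks" so HDFS sees two groups.
public class TwoGroupTopology implements DNSToSwitchMapping {

    public List<String> resolve(List<String> names) {
        List<String> racks = new ArrayList<String>(names.size());
        for (String host : names) {
            // Assumed naming convention: hosts of the first group start with "nodeA".
            racks.add(host.startsWith("nodeA") ? "/rack-a" : "/rack-b");
        }
        return racks;
    }

    // These exist on the interface in newer Hadoop releases; harmless extras on older ones.
    public void reloadCachedMappings() { }
    public void reloadCachedMappings(List<String> names) { }
}

In later releases such a class is wired in through the
topology.node.switch.mapping.impl property; on the 0.16 line rack
mapping is script-based instead, so check the rack awareness page Raghu
linked for the knob that matches your version.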



How to split the hdfs in different subgroups

2008-02-21 Thread xavier.quintuna
Hi There,

I have an HDFS cluster and I want to split it into two groups. Each
group has a set of datanodes. I want my client (the hdfs shell) to be
able to write to only one group. One group is in one rack and the other
group is in the other rack. Replication between racks is allowed, but
the client has to read and write from one specific group.
Is it possible?

I appreciate any help

Xavier