Re: Performance / cluster scaling question

2008-04-01 Thread André Martin

Hi Chris & Hadoopers,
we changed our system architecture so that most of the data is now streamed 
directly from the spider/crawler nodes instead of creating temporary files 
on the DFS - it now performs much better and the exceptions are gone :-) 
This seems to be a good decision for a relatively small cluster like ours 
(8 datanodes), where block deletion cannot keep up with the creation of new 
temp files because of the max-100-blocks-per-3-seconds deletion "restriction".
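For illustration, a minimal sketch of the "stream directly" approach, assuming a crawler thread that already holds a Hadoop Configuration; the class and method names here are made up for the example, not taken from our actual code:

import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: the crawler streams its data straight into the final
// DFS location, so no short-lived temporary file is ever created or deleted.
public class DirectDfsWriter {

    private final FileSystem fs;

    public DirectDfsWriter(Configuration conf) throws IOException {
        // Connects to the DFS configured via fs.default.name
        this.fs = FileSystem.get(conf);
    }

    public void store(String dfsPath, InputStream crawledData) throws IOException {
        FSDataOutputStream out = fs.create(new Path(dfsPath));
        try {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = crawledData.read(buf)) != -1) {
                out.write(buf, 0, n);   // stream directly; nothing to clean up later
            }
        } finally {
            out.close();
        }
    }
}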


Cu on the 'net,
 Bye - bye,

<<<<< André <<<< >>>> èrbnA >>>>>

Chris K Wensel wrote:
If it's any consolation, I'm seeing similar behavior on 0.16.0 running on 
EC2 when I push the number of nodes in the cluster past 40.


On Mar 24, 2008, at 6:31 AM, André Martin wrote:

Thanks for the clarification, dhruba :-)
Anyway, what can cause those other exceptions, such as "Could not get 
block locations" and "DataXceiver: java.io.EOFException"? Can anyone 
give me a little more insight into those exceptions?
And does anyone have a similar workload (frequent writes and deletions 
of small files), and what could cause the performance degradation 
(see first post)? I think HDFS should be able to handle two million 
or more files/blocks...
Also, I observed that some of my datanodes do not "heartbeat" to the 
namenode for several seconds (up to 400 :-() from time to time - when 
I check those specific datanodes with "top", I see a "du" command 
that seems to be stuck?!?

Thanks and Happy Easter :-)

Cu on the 'net,
  Bye - bye,

 <<<<< André <<<< >>>> èrbnA >>>>>

dhruba Borthakur wrote:

The namenode lazily instructs a Datanode to delete blocks. As a 
response to every heartbeat from a Datanode, the Namenode instructs 
it to delete a maximum of 100 blocks. Typically, the heartbeat 
periodicity is 3 seconds. The heartbeat thread in the Datanode 
deletes the block files synchronously before it can send the next 
heartbeat. That's the reason a small number (like 100) was chosen.


If you have 8 datanodes, your system will probably delete about 800 
blocks every 3 seconds.


Thanks,
dhruba

-Original Message-
From: André Martin [mailto:[EMAIL PROTECTED]
Sent: Friday, March 21, 2008 3:06 PM

To: core-user@hadoop.apache.org
Subject: Re: Performance / cluster scaling question

After waiting a few hours (without any load), the block count and "DFS 
Used" space seem to go down...
My question is: is the hardware simply too weak/slow to send the block 
deletion requests to the datanodes in a timely manner, or is it simply 
those "crappy" HDDs causing the delay? I noticed that it can take up to 
40 minutes to delete ~400,000 files at once manually using "rm -r"...
Actually, my main concern is why the performance, i.e. the throughput, 
goes down - any ideas?




Re: Performance / cluster scaling question

2008-03-28 Thread Doug Cutting

Doug Cutting wrote:
Seems like we should force things onto the same availability zone by 
default, now that this is available.  Patch, anyone?


It's already there!  I just hadn't noticed.

https://issues.apache.org/jira/browse/HADOOP-2410

Sorry for missing this, Chris!

Doug


Re: Performance / cluster scaling question

2008-03-28 Thread Doug Cutting

Chris K Wensel wrote:
FYI, just ran a 50 node cluster using one of the new kernels for Fedora 
with all nodes forced onto the same 'availability zone', and there were 
no timeouts or failed writes.


Seems like we should force things onto the same availability zone by 
default, now that this is available.  Patch, anyone?


Doug


Re: Performance / cluster scaling question

2008-03-27 Thread Chris K Wensel
FYI, just ran a 50 node cluster using one of the new kernels for 
Fedora with all nodes forced onto the same 'availability zone', and 
there were no timeouts or failed writes.


On Mar 27, 2008, at 4:16 PM, Chris K Wensel wrote:
If it's any consolation, I'm seeing similar behavior on 0.16.0 running 
on EC2 when I push the number of nodes in the cluster past 40.


On Mar 24, 2008, at 6:31 AM, André Martin wrote:

Thanks for the clarification, dhruba :-)
Anyway, what can cause those other exceptions, such as "Could not 
get block locations" and "DataXceiver: java.io.EOFException"? Can 
anyone give me a little more insight into those exceptions?
And does anyone have a similar workload (frequent writes and 
deletions of small files), and what could cause the performance 
degradation (see first post)? I think HDFS should be able to 
handle two million or more files/blocks...
Also, I observed that some of my datanodes do not "heartbeat" to 
the namenode for several seconds (up to 400 :-() from time to time 
- when I check those specific datanodes with "top", I see a "du" 
command that seems to be stuck?!?

Thanks and Happy Easter :-)

Cu on the 'net,
 Bye - bye,

<<<<< André <<<< >>>> èrbnA >>>>>

dhruba Borthakur wrote:

The namenode lazily instructs a Datanode to delete blocks. As a  
response to every heartbeat from a Datanode, the Namenode  
instructs it to delete a maximum of 100 blocks. Typically, the  
heartbeat periodicity is 3 seconds. The heartbeat thread in the  
Datanode deletes the block files synchronously before it can send  
the next heartbeat. That's the reason a small number (like 100)  
was chosen.


If you have 8 datanodes, your system will probably delete about  
800 blocks every 3 seconds.


Thanks,
dhruba

-Original Message-----
From: André Martin [mailto:[EMAIL PROTECTED]
Sent: Friday, March 21, 2008 3:06 PM

To: core-user@hadoop.apache.org
Subject: Re: Performance / cluster scaling question

After waiting a few hours (without any load), the block count and 
"DFS Used" space seem to go down...
My question is: is the hardware simply too weak/slow to send the 
block deletion requests to the datanodes in a timely manner, or is 
it simply those "crappy" HDDs causing the delay? I noticed that it 
can take up to 40 minutes to delete ~400,000 files at once manually 
using "rm -r"...
Actually, my main concern is why the performance, i.e. the 
throughput, goes down - any ideas?




Chris K Wensel
[EMAIL PROTECTED]
http://chris.wensel.net/





Chris K Wensel
[EMAIL PROTECTED]
http://chris.wensel.net/
http://www.cascading.org/






Re: Performance / cluster scaling question

2008-03-27 Thread Chris K Wensel
If it's any consolation, I'm seeing similar behavior on 0.16.0 running 
on EC2 when I push the number of nodes in the cluster past 40.


On Mar 24, 2008, at 6:31 AM, André Martin wrote:

Thanks for the clarification, dhruba :-)
Anyway, what can cause those other exceptions, such as "Could not 
get block locations" and "DataXceiver: java.io.EOFException"? Can 
anyone give me a little more insight into those exceptions?
And does anyone have a similar workload (frequent writes and 
deletions of small files), and what could cause the performance 
degradation (see first post)? I think HDFS should be able to handle 
two million or more files/blocks...
Also, I observed that some of my datanodes do not "heartbeat" to the 
namenode for several seconds (up to 400 :-() from time to time - 
when I check those specific datanodes with "top", I see a "du" 
command that seems to be stuck?!?

Thanks and Happy Easter :-)

Cu on the 'net,
  Bye - bye,

 <<<<< André <<<< >>>> èrbnA >>>>>

dhruba Borthakur wrote:

The namenode lazily instructs a Datanode to delete blocks. As a  
response to every heartbeat from a Datanode, the Namenode instructs  
it to delete a maximum of 100 blocks. Typically, the heartbeat  
periodicity is 3 seconds. The heartbeat thread in the Datanode  
deletes the block files synchronously before it can send the next  
heartbeat. That's the reason a small number (like 100) was chosen.


If you have 8 datanodes, your system will probably delete about 800  
blocks every 3 seconds.


Thanks,
dhruba

-Original Message-
From: André Martin [mailto:[EMAIL PROTECTED]
Sent: Friday, March 21, 2008 3:06 PM

To: core-user@hadoop.apache.org
Subject: Re: Performance / cluster scaling question

After waiting a few hours (without any load), the block count and 
"DFS Used" space seem to go down...
My question is: is the hardware simply too weak/slow to send the 
block deletion requests to the datanodes in a timely manner, or is 
it simply those "crappy" HDDs causing the delay? I noticed that it 
can take up to 40 minutes to delete ~400,000 files at once manually 
using "rm -r"...
Actually, my main concern is why the performance, i.e. the 
throughput, goes down - any ideas?




Chris K Wensel
[EMAIL PROTECTED]
http://chris.wensel.net/





Re: Performance / cluster scaling question

2008-03-24 Thread André Martin

Thanks for the clarification, dhruba :-)
Anyway, what can cause those other exceptions, such as "Could not get 
block locations" and "DataXceiver: java.io.EOFException"? Can anyone 
give me a little more insight into those exceptions?
And does anyone have a similar workload (frequent writes and deletions of 
small files), and what could cause the performance degradation (see 
first post)? I think HDFS should be able to handle two million or more 
files/blocks...
Also, I observed that some of my datanodes do not "heartbeat" to the 
namenode for several seconds (up to 400 :-() from time to time - when I 
check those specific datanodes with "top", I see a "du" command 
that seems to be stuck?!?

Thanks and Happy Easter :-)

Cu on the 'net,
   Bye - bye,

  <<<<< André <<<< >>>> èrbnA >>>>>

dhruba Borthakur wrote:


The namenode lazily instructs a Datanode to delete blocks. As a response to 
every heartbeat from a Datanode, the Namenode instructs it to delete a maximum 
of 100 blocks. Typically, the heartbeat periodicity is 3 seconds. The heartbeat 
thread in the Datanode deletes the block files synchronously before it can send 
the next heartbeat. That's the reason a small number (like 100) was chosen.

If you have 8 datanodes, your system will probably delete about 800 blocks 
every 3 seconds.

Thanks,
dhruba

-Original Message-
From: André Martin [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 21, 2008 3:06 PM

To: core-user@hadoop.apache.org
Subject: Re: Performance / cluster scaling question

After waiting a few hours (without any load), the block count 
and "DFS Used" space seem to go down...
My question is: is the hardware simply too weak/slow to send the block 
deletion requests to the datanodes in a timely manner, or is it simply 
those "crappy" HDDs causing the delay? I noticed that it can take up to 
40 minutes to delete ~400,000 files at once manually using "rm -r"...
Actually, my main concern is why the performance, i.e. the throughput, 
goes down - any ideas?




RE: Performance / cluster scaling question

2008-03-21 Thread dhruba Borthakur
The namenode lazily instructs a Datanode to delete blocks. As a response to 
every heartbeat from a Datanode, the Namenode instructs it to delete a maximum 
of 100 blocks. Typically, the heartbeat periodicity is 3 seconds. The heartbeat 
thread in the Datanode deletes the block files synchronously before it can send 
the next heartbeat. That's the reason a small number (like 100) was chosen.

If you have 8 datanodes, your system will probably delete about 800 blocks 
every 3 seconds.
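To put rough numbers on that ceiling - a back-of-envelope sketch only, using the figures quoted in this thread (100 blocks per datanode per 3-second heartbeat, 8 datanodes, and the ~400,000 files / 40 minutes "rm -r" observation quoted below); nothing here is measured, it is just arithmetic:

// Illustrative arithmetic only; all inputs are figures quoted in this thread.
public class DeletionBackOfEnvelope {
    public static void main(String[] args) {
        int datanodes = 8;
        int blocksPerHeartbeat = 100;      // per datanode, per heartbeat
        double heartbeatSeconds = 3.0;

        // Cluster-wide ceiling for deleting invalidated block replicas:
        double replicasPerSecond = datanodes * blocksPerHeartbeat / heartbeatSeconds;
        System.out.printf("DFS deletion ceiling: ~%.0f block replicas/s%n", replicasPerSecond); // ~267

        // For comparison, the local "rm -r" of ~400,000 files in ~40 minutes:
        double localFilesPerSecond = 400000.0 / (40 * 60);
        System.out.printf("local rm -r rate: ~%.0f files/s%n", localFilesPerSecond);            // ~167

        // At ~267 replicas/s, working off a backlog of a few hundred thousand
        // invalidated block replicas takes on the order of tens of minutes,
        // which matches the slow decline of the "DFS Used" numbers.
    }
}

(The "rm -r" comparison is only an order-of-magnitude sanity check; the two operations are not doing the same work.)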

Thanks,
dhruba

-Original Message-
From: André Martin [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 21, 2008 3:06 PM
To: core-user@hadoop.apache.org
Subject: Re: Performance / cluster scaling question

After waiting a few hours (without any load), the block count 
and "DFS Used" space seem to go down...
My question is: is the hardware simply too weak/slow to send the block 
deletion requests to the datanodes in a timely manner, or is it simply 
those "crappy" HDDs causing the delay? I noticed that it can take up to 
40 minutes to delete ~400,000 files at once manually using "rm -r"...
Actually, my main concern is why the performance, i.e. the throughput, 
goes down - any ideas?



Re: Performance / cluster scaling question

2008-03-21 Thread André Martin
After waiting a few hours (without any load), the block count 
and "DFS Used" space seem to go down...
My question is: is the hardware simply too weak/slow to send the block 
deletion requests to the datanodes in a timely manner, or is it simply 
those "crappy" HDDs causing the delay? I noticed that it can take up to 
40 minutes to delete ~400,000 files at once manually using "rm -r"...
Actually, my main concern is why the performance, i.e. the throughput, 
goes down - any ideas?




Re: Performance / cluster scaling question

2008-03-21 Thread Ted Dunning

The delay may be in reporting the deleted blocks as free on the web
interface as much as in actually marking them as deleted.


On 3/21/08 2:48 PM, "André Martin" <[EMAIL PROTECTED]> wrote:

> Right, I totally forgot about the replication factor... However
> sometimes I even noticed ratios of 5:1 for block numbers to files...
> Is the delay for block deletion/reclaiming an intended behavior?
> 
> Jeff Eastman wrote:
>> That makes the math come out a lot closer (3*423763=1271289). I've also
>> noticed there is some delay in reclaiming unused blocks, so what you are
>> seeing in terms of block allocations does not surprise me.
>> 
>>   
>>> -Original Message-
>>> From: André Martin [mailto:[EMAIL PROTECTED]
>>> Sent: Friday, March 21, 2008 2:36 PM
>>> To: core-user@hadoop.apache.org
>>> Subject: Re: Performance / cluster scaling question
>>> 
>>> 3 - the default one...
>>> 
>>> Jeff Eastman wrote:
>>> 
>>>> What's your replication factor?
>>>> Jeff
>>>> 
>>>> 
>>>>   
>>>>> -Original Message-
>>>>> From: André Martin [mailto:[EMAIL PROTECTED]
>>>>> Sent: Friday, March 21, 2008 2:25 PM
>>>>> To: core-user@hadoop.apache.org
>>>>> Subject: Performance / cluster scaling question
>>>>> 
>>>>> Hi everyone,
>>>>> I'm running a distributed system that consists of 50 spiders/crawlers and 8
>>>>> server nodes, with a Hadoop DFS cluster of 8 datanodes and a namenode...
>>>>> Each spider has 5 job processing / data crawling threads and puts
>>>>> crawled data as one complete file onto the DFS - additionally, splits are
>>>>> created for each server node and put onto the DFS as files as well. So
>>>>> basically there are 50*5*9 = ~2250 concurrent writes across 8 datanodes.
>>>>> The splits are read by the server nodes and deleted afterwards, so those
>>>>> (split) files exist for only a few seconds to minutes...
>>>>> Since 99% of the files are smaller than 64 MB (the default block size), I
>>>>> expected the number of files to be roughly equal to the number of blocks.
>>>>> After running the system for 24 hours, the namenode WebUI shows 423763
>>>>> files and directories and 1480735 blocks. It looks like the system cannot
>>>>> catch up with deleting all the invalidated blocks - at least that's my
>>>>> assumption?!?
>>>>> Also, I noticed that the overall performance of the cluster goes down
>>>>> (see attached image).
>>>>> There are a bunch of "Could not get block locations. Aborting..."
>>>>> exceptions, and they seem to appear more frequently towards the end of
>>>>> the experiment.
>>>>>
>>>>>> java.io.IOException: Could not get block locations. Aborting...
>>>>>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
>>>>>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
>>>>>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)
>>>>>
>>>>> So, is the cluster simply saturated by such frequent creation and
>>>>> deletion of files, or is the network the actual bottleneck? The workload
>>>>> does not change at all during the whole experiment.
>>>>> On the cluster side I see lots of the following exceptions:
>>>>>> 2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
>>>>>> PacketRespond

RE: Performance / cluster scaling question

2008-03-21 Thread Jeff Eastman
I wouldn't call it a design feature so much as a consequence of background
processing in the NameNode to clean up the recently-closed files and reclaim
their blocks.

Jeff

> -Original Message-
> From: André Martin [mailto:[EMAIL PROTECTED]
> Sent: Friday, March 21, 2008 2:48 PM
> To: core-user@hadoop.apache.org
> Subject: Re: Performance / cluster scaling question
> 
> Right, I totally forgot about the replication factor... However
> sometimes I even noticed ratios of 5:1 for block numbers to files...
> Is the delay for block deletion/reclaiming an intended behavior?
> 
> Jeff Eastman wrote:
> > That makes the math come out a lot closer (3*423763=1271289). I've also
> > noticed there is some delay in reclaiming unused blocks, so what you are
> > seeing in terms of block allocations does not surprise me.
> >
> >
> >> -Original Message-
> >> From: André Martin [mailto:[EMAIL PROTECTED]
> >> Sent: Friday, March 21, 2008 2:36 PM
> >> To: core-user@hadoop.apache.org
> >> Subject: Re: Performance / cluster scaling question
> >>
> >> 3 - the default one...
> >>
> >> Jeff Eastman wrote:
> >>
> >>> What's your replication factor?
> >>> Jeff
> >>>
> >>>
> >>>
> >>>> -Original Message-
> >>>> From: André Martin [mailto:[EMAIL PROTECTED]
> >>>> Sent: Friday, March 21, 2008 2:25 PM
> >>>> To: core-user@hadoop.apache.org
> >>>> Subject: Performance / cluster scaling question
> >>>>
> >>>> Hi everyone,
> >>>> I'm running a distributed system that consists of 50 spiders/crawlers and 8
> >>>> server nodes, with a Hadoop DFS cluster of 8 datanodes and a namenode...
> >>>> Each spider has 5 job processing / data crawling threads and puts
> >>>> crawled data as one complete file onto the DFS - additionally, splits are
> >>>> created for each server node and put onto the DFS as files as well. So
> >>>> basically there are 50*5*9 = ~2250 concurrent writes across 8 datanodes.
> >>>> The splits are read by the server nodes and deleted afterwards, so those
> >>>> (split) files exist for only a few seconds to minutes...
> >>>> Since 99% of the files are smaller than 64 MB (the default block size), I
> >>>> expected the number of files to be roughly equal to the number of blocks.
> >>>> After running the system for 24 hours, the namenode WebUI shows 423763
> >>>> files and directories and 1480735 blocks. It looks like the system cannot
> >>>> catch up with deleting all the invalidated blocks - at least that's my
> >>>> assumption?!?
> >>>> Also, I noticed that the overall performance of the cluster goes down
> >>>> (see attached image).
> >>>> There are a bunch of "Could not get block locations. Aborting..."
> >>>> exceptions, and they seem to appear more frequently towards the end of
> >>>> the experiment.
> >>>>
> >>>>> java.io.IOException: Could not get block locations. Aborting...
> >>>>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
> >>>>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
> >>>>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)
> >>>> So, is the cluster simply saturated by such frequent creation and
> >>>> deletion of files, or is the network the actual bottleneck? The workload
> >>>> does not change at all during the whole experiment.
> >>>> On the cluster side I see lots of the following exceptions:

Re: Performance / cluster scaling question

2008-03-21 Thread André Martin
Right, I totally forgot about the replication factor... However 
sometimes I even noticed ratios of 5:1 for block numbers to files...

Is the delay for block deletion/reclaiming an intended behavior?

Jeff Eastman wrote:

That makes the math come out a lot closer (3*423763=1271289). I've also
noticed there is some delay in reclaiming unused blocks, so what you are
seeing in terms of block allocations does not surprise me.

  

-Original Message-
From: André Martin [mailto:[EMAIL PROTECTED]
Sent: Friday, March 21, 2008 2:36 PM
To: core-user@hadoop.apache.org
Subject: Re: Performance / cluster scaling question

3 - the default one...

Jeff Eastman wrote:

What's your replication factor?
Jeff

-Original Message-
From: André Martin [mailto:[EMAIL PROTECTED]
Sent: Friday, March 21, 2008 2:25 PM
To: core-user@hadoop.apache.org
Subject: Performance / cluster scaling question

Hi everyone,
I'm running a distributed system that consists of 50 spiders/crawlers and 8
server nodes, with a Hadoop DFS cluster of 8 datanodes and a namenode...
Each spider has 5 job processing / data crawling threads and puts crawled
data as one complete file onto the DFS - additionally, splits are created
for each server node and put onto the DFS as files as well. So basically
there are 50*5*9 = ~2250 concurrent writes across 8 datanodes.
The splits are read by the server nodes and deleted afterwards, so those
(split) files exist for only a few seconds to minutes...
Since 99% of the files are smaller than 64 MB (the default block size), I
expected the number of files to be roughly equal to the number of blocks.
After running the system for 24 hours, the namenode WebUI shows 423763
files and directories and 1480735 blocks. It looks like the system cannot
catch up with deleting all the invalidated blocks - at least that's my
assumption?!?
Also, I noticed that the overall performance of the cluster goes down
(see attached image).
There are a bunch of "Could not get block locations. Aborting..."
exceptions, and they seem to appear more frequently towards the end of
the experiment.

java.io.IOException: Could not get block locations. Aborting...
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)

So, is the cluster simply saturated by such frequent creation and deletion
of files, or is the network the actual bottleneck? The workload does not
change at all during the whole experiment.
On the cluster side I see lots of the following exceptions:

2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
PacketResponder 1 for block blk_6757062148746339382 terminating
2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
writeBlock blk_6757062148746339382 received exception java.io.EOFException
2008-03-21 20:28:05,411 ERROR org.apache.hadoop.dfs.DataNode:
141.xxx.xxx.xxx:50010:DataXceiver: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2263)
at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:938)
at java.lang.Thread.run(Unknown Source)
2008-03-21 19:26:46,535 INFO org.apache.hadoop.dfs.DataNode:
writeBlock blk_-7369396710977076579 received exception
java.net.SocketException: Connection reset
2008-03-21 19:26:46,535 ERROR org.apache.hadoop.dfs.DataNode:
141.xxx.xxx.xxx:50010:DataXceiver: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.io.BufferedInputStream.fill(Unknown Source)
at java.io.BufferedInputStream.read(Unknown Source)
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2263)
at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:938)
at java.lang.Thread.run(Unknown Source)

I'm running Hadoop 0.16.1 - has anyone had the same or a similar experience?
How can the performance degradation be avoided? More datanodes? Why does
block deletion not keep up with file deletion?
Thanks in advance for your insights, ideas & suggestions :-)

Cu on the 'net,
Bye - bye,

RE: Performance / cluster scaling question

2008-03-21 Thread Jeff Eastman
That makes the math come out a lot closer (3*423763=1271289). I've also
noticed there is some delay in reclaiming unused blocks, so what you are
seeing in terms of block allocations does not surprise me.
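For concreteness, a small sketch of that arithmetic using the namenode WebUI figures quoted below; reading the gap as blocks still awaiting deletion is an assumption, not a measurement:

// Illustrative arithmetic only, using figures quoted in this thread.
public class BlockCountCheck {
    public static void main(String[] args) {
        long filesAndDirs = 423763L;     // namenode WebUI after 24 hours
        long reportedBlocks = 1480735L;  // namenode WebUI after 24 hours
        int replication = 3;

        long expected = replication * filesAndDirs;     // 1,271,289
        long gap = reportedBlocks - expected;           // ~209,446

        // The gap is roughly consistent with invalidated blocks that the
        // datanodes have not yet deleted (see the 100-blocks-per-heartbeat
        // throttle discussed elsewhere in this thread).
        System.out.println("expected: " + expected + ", reported: " + reportedBlocks
                + ", gap: " + gap);
    }
}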

> -Original Message-
> From: André Martin [mailto:[EMAIL PROTECTED]
> Sent: Friday, March 21, 2008 2:36 PM
> To: core-user@hadoop.apache.org
> Subject: Re: Performance / cluster scaling question
> 
> 3 - the default one...
> 
> Jeff Eastman wrote:
> > What's your replication factor?
> > Jeff
> >
> >
> >> -Original Message-
> >> From: André Martin [mailto:[EMAIL PROTECTED]
> >> Sent: Friday, March 21, 2008 2:25 PM
> >> To: core-user@hadoop.apache.org
> >> Subject: Performance / cluster scaling question
> >>
> >> Hi everyone,
> >> I'm running a distributed system that consists of 50 spiders/crawlers and 8
> >> server nodes, with a Hadoop DFS cluster of 8 datanodes and a namenode...
> >> Each spider has 5 job processing / data crawling threads and puts
> >> crawled data as one complete file onto the DFS - additionally, splits are
> >> created for each server node and put onto the DFS as files as well. So
> >> basically there are 50*5*9 = ~2250 concurrent writes across 8 datanodes.
> >> The splits are read by the server nodes and deleted afterwards, so those
> >> (split) files exist for only a few seconds to minutes...
> >> Since 99% of the files are smaller than 64 MB (the default block size), I
> >> expected the number of files to be roughly equal to the number of blocks.
> >> After running the system for 24 hours, the namenode WebUI shows 423763
> >> files and directories and 1480735 blocks. It looks like the system cannot
> >> catch up with deleting all the invalidated blocks - at least that's my
> >> assumption?!?
> >> Also, I noticed that the overall performance of the cluster goes down
> >> (see attached image).
> >> There are a bunch of "Could not get block locations. Aborting..."
> >> exceptions, and they seem to appear more frequently towards the end of
> >> the experiment.
> >>
> >>> java.io.IOException: Could not get block locations. Aborting...
> >>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
> >>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
> >>> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)
> >> So, is the cluster simply saturated by such frequent creation and
> >> deletion of files, or is the network the actual bottleneck? The workload
> >> does not change at all during the whole experiment.
> >> On the cluster side I see lots of the following exceptions:
> >>> 2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
> >>> PacketResponder 1 for block blk_6757062148746339382 terminating
> >>> 2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
> >>> writeBlock blk_6757062148746339382 received exception java.io.EOFException
> >>> 2008-03-21 20:28:05,411 ERROR org.apache.hadoop.dfs.DataNode:
> >>> 141.xxx.xxx.xxx:50010:DataXceiver: java.io.EOFException
> >>> at java.io.DataInputStream.readInt(Unknown Source)
> >>> at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2263)
> >>> at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
> >>> at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:938)
> >>> at java.lang.Thread.run(Unknown Source)
> >>> 2008-03-21 19:26:46,535 INFO org.apache.hadoop.dfs.DataNode:
> >>> writeBlock blk_-7369396710977076579 received exception
> >>> java.net.SocketException: Connection reset
> >>> 2008-03-21 19:26:46,535 ERROR org.apache.hadoop.dfs.DataNode:
> >>> 141.xxx.xxx.xxx:50010:DataXceiver: java.net.SocketException:
> >>> Connection reset
> >>> at java.net.SocketInputStream.read(Unknown Source)

Re: Performance / cluster scaling question

2008-03-21 Thread André Martin

3 - the default one...

Jeff Eastman wrote:
What's your replication factor? 
Jeff


  

-Original Message-
From: André Martin [mailto:[EMAIL PROTECTED]
Sent: Friday, March 21, 2008 2:25 PM
To: core-user@hadoop.apache.org
Subject: Performance / cluster scaling question

Hi everyone,
I'm running a distributed system that consists of 50 spiders/crawlers and 8
server nodes, with a Hadoop DFS cluster of 8 datanodes and a namenode...
Each spider has 5 job processing / data crawling threads and puts crawled
data as one complete file onto the DFS - additionally, splits are created
for each server node and put onto the DFS as files as well. So basically
there are 50*5*9 = ~2250 concurrent writes across 8 datanodes.
The splits are read by the server nodes and deleted afterwards, so those
(split) files exist for only a few seconds to minutes...
Since 99% of the files are smaller than 64 MB (the default block size), I
expected the number of files to be roughly equal to the number of blocks.
After running the system for 24 hours, the namenode WebUI shows 423763
files and directories and 1480735 blocks. It looks like the system cannot
catch up with deleting all the invalidated blocks - at least that's my
assumption?!?
Also, I noticed that the overall performance of the cluster goes down
(see attached image).
There are a bunch of "Could not get block locations. Aborting..."
exceptions, and they seem to appear more frequently towards the end of
the experiment.

java.io.IOException: Could not get block locations. Aborting...
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)

So, is the cluster simply saturated by such frequent creation and deletion
of files, or is the network the actual bottleneck? The workload does not
change at all during the whole experiment.
On the cluster side I see lots of the following exceptions:

2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
PacketResponder 1 for block blk_6757062148746339382 terminating
2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
writeBlock blk_6757062148746339382 received exception java.io.EOFException
2008-03-21 20:28:05,411 ERROR org.apache.hadoop.dfs.DataNode:
141.xxx.xxx.xxx:50010:DataXceiver: java.io.EOFException
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2263)
at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:938)
at java.lang.Thread.run(Unknown Source)
2008-03-21 19:26:46,535 INFO org.apache.hadoop.dfs.DataNode:
writeBlock blk_-7369396710977076579 received exception
java.net.SocketException: Connection reset
2008-03-21 19:26:46,535 ERROR org.apache.hadoop.dfs.DataNode:
141.xxx.xxx.xxx:50010:DataXceiver: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.io.BufferedInputStream.fill(Unknown Source)
at java.io.BufferedInputStream.read(Unknown Source)
at java.io.DataInputStream.readInt(Unknown Source)
at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2263)
at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:938)
at java.lang.Thread.run(Unknown Source)

I'm running Hadoop 0.16.1 - has anyone had the same or a similar experience?
How can the performance degradation be avoided? More datanodes? Why does
block deletion not keep up with file deletion?
Thanks in advance for your insights, ideas & suggestions :-)

Cu on the 'net,
Bye - bye,

   < André   èrbnA >
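For a rough sense of the write-side load described above: the 50*5*9 multiplication is from the post itself, while the 3-datanode write pipeline per block (replication factor 3) and the even spread over 8 datanodes are back-of-envelope assumptions added here, not measurements:

// Illustrative arithmetic only; the pipeline and spread assumptions are editorial.
public class WriteLoadEstimate {
    public static void main(String[] args) {
        int spiders = 50;
        int threadsPerSpider = 5;
        int filesPerJob = 9;             // 1 complete file + splits for 8 server nodes
        int replication = 3;             // assume a 3-datanode write pipeline per block
        int datanodes = 8;

        int concurrentWrites = spiders * threadsPerSpider * filesPerJob;                    // 2250
        double pipelinesPerDatanode = (double) concurrentWrites * replication / datanodes;  // ~844

        // Several hundred concurrent block-write pipelines per datanode would help
        // explain the DataXceiver EOFException / connection-reset noise above.
        System.out.println("concurrent writes: " + concurrentWrites
                + ", ~pipelines per datanode: " + Math.round(pipelinesPerDatanode));
    }
}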



RE: Performance / cluster scaling question

2008-03-21 Thread Jeff Eastman
What's your replication factor? 
Jeff

> -Original Message-
> From: André Martin [mailto:[EMAIL PROTECTED]
> Sent: Friday, March 21, 2008 2:25 PM
> To: core-user@hadoop.apache.org
> Subject: Performance / cluster scaling question
> 
> Hi everyone,
> I'm running a distributed system that consists of 50 spiders/crawlers and 8
> server nodes, with a Hadoop DFS cluster of 8 datanodes and a namenode...
> Each spider has 5 job processing / data crawling threads and puts crawled
> data as one complete file onto the DFS - additionally, splits are created
> for each server node and put onto the DFS as files as well. So basically
> there are 50*5*9 = ~2250 concurrent writes across 8 datanodes.
> The splits are read by the server nodes and deleted afterwards, so those
> (split) files exist for only a few seconds to minutes...
> Since 99% of the files are smaller than 64 MB (the default block size), I
> expected the number of files to be roughly equal to the number of blocks.
> After running the system for 24 hours, the namenode WebUI shows 423763
> files and directories and 1480735 blocks. It looks like the system cannot
> catch up with deleting all the invalidated blocks - at least that's my
> assumption?!?
> Also, I noticed that the overall performance of the cluster goes down
> (see attached image).
> There are a bunch of "Could not get block locations. Aborting..."
> exceptions, and they seem to appear more frequently towards the end of
> the experiment.
> > java.io.IOException: Could not get block locations. Aborting...
> > at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
> > at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
> > at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)
> So, is the cluster simply saturated by such frequent creation and deletion
> of files, or is the network the actual bottleneck? The workload does not
> change at all during the whole experiment.
> On the cluster side I see lots of the following exceptions:
> > 2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
> > PacketResponder 1 for block blk_6757062148746339382 terminating
> > 2008-03-21 20:28:05,411 INFO org.apache.hadoop.dfs.DataNode:
> > writeBlock blk_6757062148746339382 received exception java.io.EOFException
> > 2008-03-21 20:28:05,411 ERROR org.apache.hadoop.dfs.DataNode:
> > 141.xxx.xxx.xxx:50010:DataXceiver: java.io.EOFException
> > at java.io.DataInputStream.readInt(Unknown Source)
> > at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2263)
> > at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
> > at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:938)
> > at java.lang.Thread.run(Unknown Source)
> > 2008-03-21 19:26:46,535 INFO org.apache.hadoop.dfs.DataNode:
> > writeBlock blk_-7369396710977076579 received exception
> > java.net.SocketException: Connection reset
> > 2008-03-21 19:26:46,535 ERROR org.apache.hadoop.dfs.DataNode:
> > 141.xxx.xxx.xxx:50010:DataXceiver: java.net.SocketException:
> > Connection reset
> > at java.net.SocketInputStream.read(Unknown Source)
> > at java.io.BufferedInputStream.fill(Unknown Source)
> > at java.io.BufferedInputStream.read(Unknown Source)
> > at java.io.DataInputStream.readInt(Unknown Source)
> > at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2263)
> > at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
> > at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:938)
> > at java.lang.Thread.run(Unknown Source)
> I'm running Hadoop 0.16.1 - has anyone had the same or a similar experience?
> How can the performance degradation be avoided? More datanodes? Why does
> block deletion not keep up with file deletion?
> Thanks in advance for your insights, ideas & suggestions :-)
> 
> Cu on the 'net,
> Bye - bye,
> 
>   < André   èrbnA >





Re: Performance / cluster scaling question

2008-03-21 Thread André Martin
Attached image can be found here: 
http://www.andremartin.de/Performance-degradation.png