Tanton Gibbs wrote:

>> a machine with 4 cpu and 3gb of memory is definitely hard wired to crunch
>> numbers and boost i/o performance. not accounting for machine load, there
>> is very little reason your Perl script will run slower on a powerful
>> machine.
> 
> I've found this to be not true most of the time.  The larger machines
> usually come with slower CPUs for one, but they also come with SANs which
> are optimized for heavy load, random access IO.  Since he was doing
> sequential IO, the odds are that the larger computer ran slower.  This has
> been verified by many different companies.  Small linux boxes with fast
> hard drives will almost always outperform the larger, fiber SANs.  In fact,
> google found this to be true and stocks its servers with traditional hard
> drives instead of using a SAN storage device like what is probably
> attached to your larger computer.
> 
> Tanton

i believe your statement is a bit too far reaching. i don't work for google, so 
i don't know exactly why they would stock their file system (flat file based, 
you mean? or rdbms?) with traditional hard drives. if they believe that this 
way their disk i/o will be faster, and that the performance gain outweighs the 
factors of a secure i/o system, i think they have put themselves in a dangerous 
situation.

as i said before, there are a lot of reasons why a Perl script runs slower 
on a more powerful machine. however, i do believe a machine like the one the 
original poster mentioned should run faster, for a number of reasons:

1. disk i/o (any kind, random access i/o or sequential i/o) depends on how 
fast, how often and how far the r/w head has to move. a server designed for 
i/o will usually have drives with high rotational speed and fast head 
positioning. a traditional hard drive can almost never match this speed. 
however, depending on how often and how far the r/w head has to travel, a 
traditional hard drive can actually outperform the high speed drive for a 
small file: if the file is small, the high speed mechanism never gets a 
chance to pay off, and the access time is dominated by seek latency instead 
of transfer rate. i believe that one of the primary reasons why many users 
think i/o is slower on a high end server compared to their low end computer 
is simply that the file is small. on IBM's website, i believe there is an 
excellent article explaining all of this.
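for what it's worth, the sequential-vs-random effect in point 1 is easy to 
observe yourself. here is a rough python sketch of my own (the file size, 
block size and iteration count are arbitrary choices, and the os page cache 
will flatter both numbers, so treat the output as a rough indication only):

```python
import os
import random
import tempfile
import time

SIZE = 16 * 1024 * 1024   # scratch file size: 16 MB (arbitrary choice)
BLOCK = 4096              # bytes read per simulated "seek"

# write a throwaway scratch file to read back
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(SIZE))

# sequential read: one pass, front to back -- read-ahead works in our favor
t0 = time.perf_counter()
with open(path, "rb") as f:
    while f.read(1024 * 1024):
        pass
seq = time.perf_counter() - t0

# random reads: jump to an arbitrary offset before every block,
# so each read pays positioning cost instead of streaming
t0 = time.perf_counter()
with open(path, "rb") as f:
    for _ in range(2048):
        f.seek(random.randrange(0, SIZE - BLOCK))
        f.read(BLOCK)
rnd = time.perf_counter() - t0

os.remove(path)
print(f"sequential: {seq:.3f}s  random: {rnd:.3f}s")
```

on a real spinning disk (with the cache cold) the gap between the two numbers 
is usually dramatic; on a SAN tuned for random access it narrows.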

2. data path, internal buffers, and disk/controller/bus speed are also much 
more efficient in a high end server. again, a traditional hard drive can 
almost never match the performance of this design.

a script that runs in 3 minutes on a pc but 6 minutes on a server has lost 
half its throughput, a 50% performance degradation. i would be happy to see 
the op run the test several times to confirm this.
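to take the guesswork out of a single run, the op could time the script 
several times on each machine and compare medians. a minimal python sketch 
(the stand-in workload below is mine; the op would substitute the actual 
perl invocation, e.g. ["perl", "script.pl"]):

```python
import statistics
import subprocess
import sys
import time

# stand-in workload so the sketch is self-contained; replace with the
# real command under test, e.g. ["perl", "script.pl"]
cmd = [sys.executable, "-c", "sum(range(100000))"]

runs = []
for i in range(5):
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    runs.append(time.perf_counter() - t0)
    print(f"run {i + 1}: {runs[-1]:.3f}s")

# the median is less sensitive to one noisy run than a single timing
print(f"median: {statistics.median(runs):.3f}s  "
      f"spread: {max(runs) - min(runs):.3f}s")
```

comparing the medians (and the spread) from both machines would settle 
whether the 3-vs-6-minute difference is real or just load noise.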

david
