I've just installed a new server for a client and all was going well
until the client started using his existing data, which we had copied to
a group share ... it's a slug. This is particularly noticeable when doing
a backup; the copy is bad, but it is even worse during the compare phase.
Originally I thought there was a network or hardware problem, but through
a process of elimination (including swapping the server) I conclude it's
Samba, or possibly the way the user has structured his data.
Having a closer look at the data, it appears the user has some
directories containing more than 1300 small files.
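If anyone wants to check how their own data is laid out, a rough sketch
along these lines will flag the offending folders (the share path and
threshold here are made up, not taken from my setup):

    import os

    SHARE_ROOT = "/home/groups/share"  # hypothetical path to the group share
    THRESHOLD = 1000                   # flag folders with this many files or more

    # Walk the share and report any directory holding an unusually
    # large number of files -- the likely Samba hot spots.
    for dirpath, dirnames, filenames in os.walk(SHARE_ROOT):
        if len(filenames) >= THRESHOLD:
            print("%6d  %s" % (len(filenames), dirpath))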
Results from this afternoon were 24.2 MB/minute copy and 5.6 MB/minute
compare to a file-based storage set. The test configuration consisted of
the server (Qube2) and the backup workstation (Pentium III 550) connected
to a 100baseT switch ... both machines were running 100Mbps full duplex.
I don't consider these figures anything to boast about.
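For perspective, assuming those figures are megabytes:

    24.2 MB/min / 60 = ~0.40 MB/s = ~3.2 Mbit/s  (roughly 3% of a 100Mbps link)
     5.6 MB/min / 60 = ~0.09 MB/s = ~0.75 Mbit/s (under 1% of the link)

The wire is nowhere near the bottleneck, so the time must be going
somewhere else ... per-file overhead would be my guess.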
Has anyone had similar experiences where the hierarchy of the data has
such a dramatic effect on performance?
I have used this configuration on other sites without such problems ...
the only significant difference is the client's data. I guess the best
solution is to suggest that the client structure his data a little better
than stuffing everything in a single folder.
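If it comes to that, the reshuffle itself is easy enough to script. A
rough sketch (the path is invented, and it assumes one flat folder of
plain files) that fans things out into per-letter subfolders:

    import os
    import shutil

    FLAT_DIR = "/home/groups/share/everything"  # hypothetical flat folder

    # Move each file into a subfolder named after its first character
    # so no single directory ends up with thousands of entries.
    for name in os.listdir(FLAT_DIR):
        src = os.path.join(FLAT_DIR, name)
        if not os.path.isfile(src):
            continue
        bucket = os.path.join(FLAT_DIR, name[0].upper())
        os.makedirs(bucket, exist_ok=True)
        shutil.move(src, os.path.join(bucket, name))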
I would be interested to know why the performance of the compare phase
is so bad. Restore is equally bad, but I guess that is to be expected, as
I gather a compare is a restore without the actual "write to target".
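That would explain the symmetry. My guess at the logic (not the backup
software's actual code) is something like:

    import filecmp

    # A compare has to pull every byte back across the share to check
    # it against the archive copy -- all the read traffic of a restore,
    # just without the local write at the end.
    def compare(archive_path, share_path):
        # shallow=False forces a full content read of both files
        return filecmp.cmp(archive_path, share_path, shallow=False)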
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Information Alchemy Pty Ltd
ACN 089 239 305
Malcolm McLeary Mobile: 0412 636 086
Managing Director Email: [EMAIL PROTECTED]