On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert 
<[email protected]<mailto:[email protected]>> wrote:
Hello all,


I'm trying to upload 200GB of 200KB files (about one million objects) to Swift. 
I'm using 4 clients (each hosted on a different machine) with 10 threads each, 
uploading files with the official python-swiftclient. Each thread uploads to a 
separate container.
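
For concreteness, each thread is doing the moral equivalent of the sketch below 
(a minimal illustration, not our exact code; the auth endpoint, credentials and 
paths are placeholders, and it assumes the stock swiftclient.client.Connection 
API):

    import os
    from concurrent.futures import ThreadPoolExecutor  # 'futures' backport on Python 2
    from swiftclient.client import Connection

    AUTH_URL = 'http://proxy:8080/auth/v1.0'  # placeholder auth endpoint
    USER, KEY = 'account:user', 'secret'      # placeholder credentials
    SRC_DIR = '/data/files'                   # placeholder source directory
    CONTAINER = 'container-1'                 # one container per thread in our setup

    # Make sure the container exists before the workers start.
    Connection(authurl=AUTH_URL, user=USER, key=KEY).put_container(CONTAINER)

    def upload(path):
        # One Connection per upload so threads never share a socket.
        conn = Connection(authurl=AUTH_URL, user=USER, key=KEY)
        with open(path, 'rb') as f:
            conn.put_object(CONTAINER, os.path.basename(path), contents=f)

    paths = [os.path.join(SRC_DIR, name) for name in os.listdir(SRC_DIR)]
    with ThreadPoolExecutor(max_workers=10) as pool:
        list(pool.map(upload, paths))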

I have 5 storage nodes and 1 proxy node. The ring is configured with a 
replication factor of 3. Each node has a quad-core i3 processor, 4GB of RAM and 
a gigabit network interface.

Is there any way I can speed this process up? At the moment it takes about 20 
seconds per file, sometimes more.


It is very likely the system is starved for I/O.
As a temporary workaround you can stop the object-replicator and object-auditor 
during the import, so fewer daemons compete for I/O.
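For example, on each storage node (assuming the stock swift-init control 
script; adjust to however your daemons are managed):

    swift-init object-replicator stop
    swift-init object-auditor stop

Remember to start them again after the import so replication and auditing can 
catch up.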

Some general troubleshooting tips:
Use iotop to identify the processes consuming I/O.
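For example:

    iotop -o -b -n 3

-o limits the output to processes actually doing I/O, and -b/-n take a few 
batch-mode samples that are easy to paste into a reply.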

Assuming you use XFS:
Make sure the filesystem was created with the appropriate inode size, as 
described in the docs
(e.g. mkfs.xfs -i size=1024)
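
If I remember correctly, the deployment guide also suggests mount options along 
the lines of noatime,nodiratime,nobarrier,logbufs=8 for the object disks, e.g. 
in /etc/fstab (device and mount point below are placeholders):

    /dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0

Skipping atime updates alone saves a write per object read; double-check 
against the current docs before relying on the rest.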

Also, with lots of files you need quite a bit of memory to keep the inodes 
cached.
Use the XFS runtime stats to get some indication of how the cache is doing:
http://xfs.org/index.php/Runtime_Stats
xs_dir_lookup and xs_ig_missed give some indication of how many I/Os are spent 
on inode lookups.
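The raw counters live in /proc/fs/xfs/stat:

    cat /proc/fs/xfs/stat

xs_dir_lookup is a field on the "dir" line and xs_ig_missed on the "ig" line; 
the page above documents the exact field order. An xs_ig_missed count that 
keeps climbing relative to xs_ig_attempts means inode lookups keep going to 
disk.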

You can look at slabtop to see how much memory is used by the inode cache.
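For example:

    slabtop -o -s c

prints one snapshot sorted by cache size; the xfs_inode slab is the one to 
watch here.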

Cheers,
Robert
