Andrew (Anything) wrote:
> I've been testing using bonnie++ -n 50:1024:0:10 -s 0. Is this a bad way
> to test? Some raw results follow later in case you want them.
>
> Obviously ocfs2 should be slower than ext3. But I guess I expected a
> single-node ocfs2 setup to only be doing internal stuff with the kernel
> and dlm at really fast cpu speeds, with its only bottleneck being writes
> to the disk. For it to be so slow it must be doing heaps of disk stuff
> instead?
>
> I had tried a few dd tests, however oflag=direct seems to cause an
> instant kernel panic, and I don't know if I can trust dd's results
> without direct io.
>
> Andy..
>
> ext3, noatime
> bonnie++ -d /mnt/temp/ -n 50:1024:0:10 -s 0
> Version 1.03e       ------Sequential Create------ --------Random Create--------
> wombat              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files:max  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>        50:1024:0/10 10838  43 +++++ +++ 20386  50  7147  28 +++++ +++ 17248  44
>
> ocfs2, -T mail max-features, noatime, data=writeback
> bonnie++ -d /mnt/temp/ -n 50:1024:0:10 -s 0
> Version 1.03e       ------Sequential Create------ --------Random Create--------
> wombat              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files:max  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>        50:1024:0/10  1429  53 10849  32  1224   8  1354  51   205   2   292   4
>
> I ran both a few times just in case.
Yes, it is doing heaps more disk I/O than ext3, simply because ext3's inode is 128 bytes whereas ocfs2's inode is a full block. Choosing a smaller block size will therefore improve create performance. However, that is not recommended, because smaller block sizes negatively affect read/write performance.

Have you tried running bonnie++ on multiple nodes concurrently? The create performance will scale up to the limit of your I/O subsystem.
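For example, here is a rough sketch of both ideas; the device (/dev/sdb1), mount point (/mnt/temp) and hostnames (node1, node2) are placeholders, not details taken from your setup:

  # Smaller block size speeds up creates at the cost of read/write throughput
  # (hence not recommended); the default block size is 4K.
  mkfs.ocfs2 -T mail -b 2K -L shared01 /dev/sdb1

  # Concurrent create load from two nodes against the same ocfs2 volume,
  # each node writing into its own directory (-u is needed if run as root):
  ssh node1 'mkdir -p /mnt/temp/n1 && bonnie++ -d /mnt/temp/n1 -n 50:1024:0:10 -s 0 -u root' &
  ssh node2 'mkdir -p /mnt/temp/n2 && bonnie++ -d /mnt/temp/n2 -n 50:1024:0:10 -s 0 -u root' &
  wait

Comparing the per-node create numbers from a run like that against your single-node run should show whether the bottleneck is the filesystem or the I/O subsystem.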