On Wed, 5 Nov 2014 05:34:04 PM Eneko Lacunza wrote:
> > Overall, I seemed to get similar I/O to what I was getting with
> > gluster when I implemented an SSD cache for it (EXT4 with SSD
> > journal). However, ceph seemed to cope better with high loads. In one
> > of my stress tests - starting 7 VMs simultaneously - gluster seemed to
> > fail, with some of the VMs reporting I/O errors and crashing.
> > 
> > Whereas with ceph, they were very slow but all started normally.
> > 
> 
> Thanks for sharing; I haven't used glusterfs, but knowing about those I/O 
> errors is interesting.

More feedback - after testing ceph extensively, and after feedback from the ceph 
user list, I concluded that my use case was not a good fit for ceph. Too small, 
basically: only two OSDs, which hurt performance and made it complicated to manage.

I revisited gluster, this time formatting the filesystem per the recommendations 
(which were difficult to find). This seemed to resolve the I/O problems - I haven't 
been able to recreate them no matter what the load.
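
For anyone else hunting for those recommendations: as far as I can tell they boil 
down to using XFS bricks with a 512-byte inode size, so gluster's extended 
attributes fit in the inode. Treat the commands below as a rough sketch - the 
device and mount point are just placeholders from my setup:

    mkfs.xfs -i size=512 /dev/sdb1                      # brick filesystem, larger inodes for xattrs
    mkdir -p /export/brick1
    mount -o noatime,inode64 /dev/sdb1 /export/brick1   # typical XFS mount options for a brick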

-- 
Lindsay
