On 29/10/14 05:43, Lindsay Mathieson wrote:
Sorry to keep coming back to this :( but we're adding a 3rd node to
our cluster, which brings ceph back into the picture ...
Is there a particular reason that ceph is preferred to glusterfs?
Better performance? More fault tolerance?
Hi Angel,
On 29/10/14 09:25, Angel Docampo wrote:
Bonded interfaces on linux are active-backup, so you have a 1Gb
connection on the storage side. Consider upgrading to faster
ethernet/fibre channel/infiniband.
I don't think this is the case. You can configure many modes on bonded
interfaces.
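For example, on a Debian-based Proxmox node the bond mode is set in /etc/network/interfaces. A minimal sketch using 802.3ad (LACP) instead of active-backup; interface names and addresses are placeholders, not from the thread:

```shell
# /etc/network/interfaces (sketch; eth0/eth1 and the address are placeholders)
auto bond0
iface bond0 inet static
    address 192.168.10.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad            # LACP; needs switch support
    bond-miimon 100              # link monitoring interval in ms
    bond-xmit-hash-policy layer3+4

# After ifup, verify which mode is actually active:
#   cat /proc/net/bonding/bond0
```

Note that 802.3ad balances flows across links but a single TCP stream still tops out at one link's speed.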
On Wed, 29 Oct 2014 09:06:44 AM Eneko Lacunza wrote:
haven't deployed glusterfs myself. I think you can put
CTs/ISOs/backups on glusterfs but not in ceph-rbd
Yes, I think it's images only, not a problem for this exercise
(maybe yes on cephfs).
Is that done via a manual mount with proxmox treating it as a directory?
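For reference, the difference shows up in /etc/pve/storage.cfg: a GlusterFS storage can carry several content types, while an RBD storage carries disk images only. A sketch; the hosts, volume and pool names (and the exact content list) are assumptions for illustration:

```
# /etc/pve/storage.cfg (sketch; names and addresses are placeholders)
glusterfs: gluster1
        server 10.0.0.1
        server2 10.0.0.2
        volume datastore
        content images,iso,vztmpl,backup

rbd: ceph1
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool rbd
        username admin
        content images
```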
On Wed, 29 Oct 2014 09:25:08 AM Angel Docampo wrote:
Ceph provides block storage while Gluster doesn't, but the latter is far
easier to set up. As block storage, Ceph is faster than Gluster, but my
whole proxmox virtual environment runs perfectly on gluster.
Limiting factor will be the network.
One big advantage glusterfs has here is the intermediate filesystem. If
things go totally pear-shaped you can just pull one of the replica drives
and copy the files off it. Difficult if not impossible to do that with ceph.
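In practice that recovery can be as simple as mounting the brick read-only on any Linux box and copying the image files out, since a gluster brick holds plain files plus a .glusterfs metadata directory. A sketch; the device and paths are placeholders:

```shell
# Sketch: recover VM images from a pulled gluster replica drive.
# /dev/sdb1 and the brick/backup paths are placeholders.
mkdir -p /mnt/brick
mount -o ro /dev/sdb1 /mnt/brick

# The brick contains ordinary files; skip gluster's internal metadata dir.
rsync -a --exclude='.glusterfs' /mnt/brick/images/ /backup/images/
```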
One big disadvantage of glusterfs is its behavior after node failure.
Hi,
On 29/10/14 10:39, Lindsay Mathieson wrote:
(maybe yes on cephfs).
Is that done via a manual mount with proxmox treating it as a directory?
I think so, but never tried cephfs myself.
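If someone wants to try it, the manual route would be mounting cephfs with the kernel client and then registering the mountpoint as a plain directory storage. A sketch; the monitor address, secret file, storage name and content types are assumptions:

```shell
# Sketch: mount cephfs and expose it to proxmox as a directory storage.
# Monitor address, keyring path and mountpoint are placeholders.
mkdir -p /mnt/cephfs
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Then register it as a directory storage:
pvesm add dir cephfs-dir --path /mnt/cephfs --content iso,backup
```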
Cheers
Eneko
--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
On Wed, 29 Oct 2014 09:45:39 AM Dietmar Maurer wrote:
One big disadvantage of glusterfs is its behavior after node failure. It
seems glusterfs re-reads and compares ALL data when the other node comes
up again. This produces a lot of overhead and is very slow.
That is a big issue. I guess it's an effect of the file-level replication.
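For what it's worth, the resync (self-heal) can at least be inspected and triggered from the standard gluster CLI instead of waiting blindly; the volume name below is a placeholder:

```shell
# Sketch: inspect gluster's self-heal after a failed node comes back.
# "datastore" is a placeholder volume name.
gluster volume heal datastore info    # list entries still pending heal
gluster volume heal datastore         # heal the entries in the pending index
gluster volume heal datastore full    # force a full crawl and compare
```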
Hi Angel,
On 29/10/14 10:50, Angel Docampo wrote:
On 29/10/14 09:25, Angel Docampo wrote:
Bonded interfaces on linux are active-backup, so you have a 1Gb
connection on the storage side. Consider upgrading to faster
ethernet/fibre channel/infiniband.
I don't think this is the case. You can configure many modes on bonded
interfaces.
Hi all,
It's been a while and I have been doing tests in the background.
I found some issues were related to faulty drives, but after removing
them the migration issues continued. I had the same issues with PVE 3.2,
nfs, rbd and local storage.
I also noticed that our office production cluster