020 11:28:17 AM GMT+02:00, "Goorkate, B.J."
> >There is just one unsynced entry which comes back every time: the
> >dom_md/ids file.
> >When I add/delete a VM it's gone for a short while, but then it comes back.
-upgraded node, just to be
sure. Luckily we have the hardware for it.
We're migrating from 4.2.8 (I think) to 4.3.
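As a rough sketch (the volume name "vmstore" below is made up), the pending
entries can be inspected from any of the nodes with:

    # list unsynced entries per brick
    gluster volume heal vmstore info
    # compact per-brick counts (recent gluster releases)
    gluster volume heal vmstore info summary
    # check explicitly for split-brain
    gluster volume heal vmstore info split-brain

As far as I know the dom_md/ids file holds the sanlock lockspace and is
continuously rewritten by every active host, so it showing up transiently in
heal info is not necessarily a problem.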
On Thu, Feb 06, 2020 at 12:43:48PM +0200, Strahil Nikolov wrote:
> On February 6, 2020 9:42:13 AM GMT+02:00, "Goorkate, B.J."
>On Tuesday, February 4, 2020, 16:35:27 GMT+2, Goorkate, B.J.
Thanks for your time so far!
The packages seem fine on all 3 nodes.
Only /var/lib/glusterd/glusterd.info is modified, and on the not-yet-upgraded
nodes some files are missing.
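A rough way to compare the gluster state of the three nodes (run on each node;
none of this is specific to this setup):

    # installed gluster packages
    rpm -qa | grep -i gluster | sort
    # cluster-wide operating version and the highest version the node supports
    gluster volume get all cluster.op-version
    gluster volume get all cluster.max-op-version
    # per-node UUID and op-version, as mentioned above
    cat /var/lib/glusterd/glusterd.info
    # only once all three nodes are upgraded, bump the cluster op-version:
    # gluster volume set all cluster.op-version <max-op-version>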
I couldn't find a way to ensure that the full heal really started, let alone
whether it finished.
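In case it helps, a sketch of one way to start a full heal and then see whether
the crawl actually runs (the volume name is again made up):

    # trigger a full self-heal crawl
    gluster volume heal vmstore full
    # show crawl statistics, including start/end times of recent crawls
    gluster volume heal vmstore statistics
    # pending-heal counters per brick; these should go down over time
    gluster volume heal vmstore statistics heal-count
    # the self-heal daemon logs its activity here on every node
    less /var/log/glusterfs/glustershd.log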
On Mon, Jan 27, 2020 at 08:11:14PM +0200, Strahil Nikolov wrote:
> On January 27, 2020 4:17:26 PM GMT+02:00, "Goorkate, B.J."
> >Hi all,
I'm in the process of upgrading oVirt-nodes from 4.2 to 4.3.
After upgrading the first of 3 oVirt/gluster nodes, there have been between
600 and 1200 unsynced entries for a week now on the upgraded node and one
not-yet-upgraded node. The third node (also not-yet-upgraded) says it's OK (no
unsynced entries).
at 5:22 PM, Goorkate, B.J. <b.j.goork...@umcutrecht.nl> wrote:
> Hi all,
When creating a gluster brick in oVirt, I have to fill in the parameters of the
RAID volume the brick is on (that's how I understand it anyway):
RAID-type, number of disks and stripe size.
What is the reason for that? Is the gluster brick optimised for this?
tried to find
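As far as I understand it, those values are only used to align the brick's
LVM/XFS layout with the underlying RAID stripe, roughly along these lines (the
numbers are a made-up example for a RAID 6 array with 10 data disks and a
256 KiB stripe size; the real flow also uses thin pools, but the alignment idea
is the same):

    # data alignment = stripe size * number of data disks (256K * 10 = 2560K)
    pvcreate --dataalignment 2560K /dev/sdb
    vgcreate gluster_vg /dev/sdb
    lvcreate -L 2T -n gluster_lv gluster_vg
    # su = per-disk stripe size, sw = number of data disks
    mkfs.xfs -f -i size=512 -d su=256k,sw=10 /dev/gluster_vg/gluster_lv

So the brick isn't tuned for gluster as such; the parameters just keep the
physical volume and the XFS stripe unit/width aligned with the array.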
Thanks for the answer!
Adding nodes in multiples of 3 makes sense. The disadvantage is having multiple
storage domains when you do that. Or is it possible to combine them?
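One way to avoid a second storage domain is to grow the existing volume
instead: bricks can be added in multiples of the replica count, which turns it
into a distributed-replicate volume behind the same storage domain. A rough
sketch (hostnames, volume name and brick paths are made up):

    gluster volume add-brick vmstore replica 3 \
        node4:/gluster_bricks/vmstore/brick \
        node5:/gluster_bricks/vmstore/brick \
        node6:/gluster_bricks/vmstore/brick
    # spread existing files over the new replica set
    gluster volume rebalance vmstore start
    gluster volume rebalance vmstore status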
On Fri, Jan 20, 2017 at 11:27:28AM +0530, knarra wrote:
> On 01/19/2017 09:15 PM, Goorkate, B.J. wrote:
I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster
storage domain for the virtual machines.
Is there a way to use storage in the nodes which are not members of the
replica-3 volume? Or do I need another node and make a second replica-3 gluster
storage domain?
> ...deploying. This should get you the 4 hosts online, but the
> engine will run only on the first 3.
> Right. You can add the 4th node to the cluster but not have any bricks on the
> volume, in which case VMs will run on this node but will access data from
> the other nodes over the network.
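In other words, the 4th host only has to be able to mount the existing volume
over the network; it needs no bricks of its own. A minimal sketch of such a
client-side mount (hostnames and volume name are hypothetical; oVirt/vdsm
normally performs this mount itself when the host is activated):

    mkdir -p /mnt/vmstore-test
    mount -t glusterfs \
        -o backup-volfile-servers=node2:node3 \
        node1:/vmstore /mnt/vmstore-test
    # backup-volfile-servers lets the mount come up even if node1 is down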
I've tried to find a way to add a 4th oVirt-node to my existing
3-node setup with replica-3 gluster storage, but found no usable solution.
From what I read, it's not wise to create a replica-4 gluster
storage, because of bandwidth overhead.
Is there a safe way to do this and
> All VMs will have to be down to do this.
> On 08/31/2016 08:39 AM, Goorkate, B.J. wrote:
> > Hi all,
I recently moved my gluster back end to a dedicated gluster network.
Now I would like to change the mount point of my master Storage Domain
to point to that network too. Is that possible?
I tried to detach the Storage Domain from the datacenter in order to
re-add or import it with the new mount path.
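One workaround that is sometimes used instead of detaching the domain: leave
the address in the storage domain's mount path as it is, and make that hostname
resolve to the gluster-network IP on every hypervisor, for example via
/etc/hosts (names and addresses below are made up):

    # /etc/hosts on each oVirt node
    10.200.0.11   gluster1.example.com
    10.200.0.12   gluster2.example.com
    10.200.0.13   gluster3.example.com

That way the clients start using the dedicated network without the storage
domain definition having to change.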