Hello,
how can I delete a sharded disk that was broken during snapshot
deletion (the node crashed)?
I deleted the main VM, but the disk still persists and the gluster volume
has some file inconsistencies.
How can I find the shard files attached to that disk and delete them all?
regards
Paf1
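One possible approach (a sketch only; the GFID and brick path below are placeholders you must replace with your own values) is to look under the hidden .shard directory on each brick, since shards are stored there as files named <base-file-GFID>.<shard-index>:

```shell
# Sketch only -- GFID and BRICK are assumptions; verify before deleting anything.
GFID="0f6e2b34-aaaa-bbbb-cccc-000000000000"   # GFID of the deleted disk image
BRICK="/STORAGES/g1r5p1/GFS"                  # path of one brick of the volume

# Shards of a file are stored on the bricks as <GFID>.<index> under .shard:
ls -lh "$BRICK/.shard/$GFID".*

# After verifying the list on each brick, remove them (destructive!):
# rm -f "$BRICK/.shard/$GFID".*
```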
Hello,
can anybody help with gluster performance?
I'm running gluster replica 3 arbiter 1 (2+1) with the following versions:
OS Version: RHEL - 7 - 3.1611.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 514.6.1.el7.x86_64
KVM Version: 2.6.0 - 28.el7_3.3.1
LIBVIRT
Hello,
how can I migrate a VM between two different clusters with a gluster FS?
(3.5 vs. 4.0)
They have different oVirt management engines.
regards
Paf1
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Hello,
how can I share an arbiter node between two or three gluster clusters?
I've got two clusters (CentOS 7.2) with the gluster (3.8) filesystem, and
I'd like to share an arbiter node between them to save server nodes.
Example:
gluster volume create SDAP1 replica 3 arbiter 1
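For illustration (all host names and brick paths below are made up), one arbiter host can carry the arbiter bricks of several volumes, since arbiter bricks store only file metadata and need little disk:

```shell
# Sketch: two volumes from different replica pairs sharing one arbiter host.
gluster volume create SDAP1 replica 3 arbiter 1 \
    node1:/bricks/sdap1 node2:/bricks/sdap1 arb:/bricks/arbiter/sdap1
gluster volume create SDAP2 replica 3 arbiter 1 \
    node3:/bricks/sdap2 node4:/bricks/sdap2 arb:/bricks/arbiter/sdap2
```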
Hello Andreas,
we are testing a stripe 2 replica 3 arbiter 1 gluster volume. It's running, but it is not
supported by oVirt 3.5.6.2 (we can't create the "storage"; it fails with an "unsupported" message).
This runs on a 5-node gluster; node 5 is dedicated to arbiter bricks only and has no
"big disks".
Example:
gluster
OK,
we will extend replica 2 to replica 3 (arbiter) ASAP.
If the "ids" file is deleted directly on a brick (without touching it via
the mount), healing of this file doesn't work.
regs.Pa.
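When a file is removed directly on a brick (bypassing the mount), self-heal generally does not notice it until the file is looked up again. A sketch of forcing that lookup (the volume name and mount path are assumptions):

```shell
# Trigger a lookup of the file from the glusterfs mount so self-heal can act;
# the mount path, domain UUID, and volume name are placeholders.
stat /rhev/data-center/mnt/glusterSD/localhost:_VOL/<domain-uuid>/dom_md/ids
gluster volume heal VOL info    # check whether the entry still appears
```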
On 3.3.2016 12:19, Nir Soffer wrote:
On Thu, Mar 3, 2016 at 11:23 AM, p...@email.cz wrote:
This is replica 2 only, with the following settings:
Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: fixed
cluster.server-quorum-type:
4ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= the "shadow file" is missing in the ".glusterfs" directory.
How can I fix it? Online, if possible!
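The ".glusterfs" entry is a hard link named after the file's GFID. A sketch of recreating it (the brick path and file path are assumptions; verify everything on your own setup before running the final link command):

```shell
# Sketch: recreate the missing GFID hard link under .glusterfs on the brick.
# BRICK and F are placeholders for your brick and the affected file.
BRICK="/STORAGES/g1r5p1/GFS"
F="$BRICK/<domain-uuid>/dom_md/ids"

# Read the file's GFID xattr and format it as a UUID (8-4-4-4-12):
GFID=$(getfattr -n trusted.gfid -e hex --only-values "$F" \
  | sed 's/^0x//;s/\(........\)\(....\)\(....\)\(....\)\(............\)/\1-\2-\3-\4-\5/')

# The hard link lives at .glusterfs/<first 2 hex>/<next 2 hex>/<full gfid>:
ln "$F" "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
```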
On 2.3.2016 08:16, Ravishankar N wrote:
On 03/02/2016 12:02 PM, Sahina Bose wrote:
On 03/02/2016 03:45 AM, Nir Soffer wrote:
On Tue, Mar 1, 2016 at 10:51
Yes, we have had "ids" split-brains, plus some other VM files.
The split-brains were fixed by healing from the preferred (source) brick,
eg: " # gluster volume heal 1KVM12-P1 split-brain source-brick
16.0.0.161:/STORAGES/g1r5p1/GFS "
Pavel
Okay, so what I understand from the output above is you have
On 03/02/2016 12:02 PM, Sahina Bose wrote:
On 03/02/2016 03:45 AM, Nir Soffer wrote:
On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz wrote:
>
> HI,
> requested output:
>
> # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
>
>
/rhev/data-center
Again:
is it safe to do it online? (YES / NO)
regs.
Pavel
On 1.3.2016 18:38, Nir Soffer wrote:
On Tue, Mar 1, 2016 at 5:07 PM, p...@email.cz wrote:
Hello, can anybody explain this error no. 13 (opening a file) in sanlock.log?
The size of the "ids" file is zero (0).
2016-02-28 03:25:46+0100 269626 [1951]: open error -13
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
2016-02-28 03:25:46+0100
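Error -13 is EACCES (permission denied). When the "ids" file itself is empty, one known approach is to reinitialize it with sanlock; a sketch, assuming (as oVirt does) that the lockspace name equals the storage-domain UUID, and to be run only while the storage domain is not in use:

```shell
# Sketch -- run only while the storage domain is inactive; the lockspace
# name (domain UUID) and path are taken from the log line above.
sanlock direct init -s 7f52b697-c199-4f58-89aa-102d44327124:0:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0
```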
Please let me know which build of gluster you are using.
For more info please read:
http://www.gluster.org/pipermail/gluster-users.old/2015-June/022377.html
Thanks
kasturi
On 11/27/2015 10:52 AM, Sahina Bose wrote:
[+ gluster-users]
On 11/26/2015 0
Hello,
would you recommend a set of functional parameters for a replica 2 volume?
The old ones were (for gluster version 3.5.2):
storage.owner-uid 36
storage.owner-gid 36
performance.io-cache off
performance.read-ahead off
network.remote-dio enable
cluster.eager-lock enable
performance.stat-prefetch
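For newer gluster versions there is a predefined "virt" option group that applies the commonly recommended VM-store settings in one step; a sketch (the volume name VOL is a placeholder):

```shell
# Sketch: apply the virt option group plus oVirt ownership (VOL is a placeholder).
gluster volume set VOL group virt
gluster volume set VOL storage.owner-uid 36
gluster volume set VOL storage.owner-gid 36
gluster volume info VOL        # review the resulting option set
```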
-management:
nfs has disconnected from glusterd.
regs.
Pa.
On 11.8.2015 06:21, Atin Mukherjee wrote:
On 08/11/2015 02:45 AM, p...@email.cz wrote:
After rebooting / resetting the server we have no new records at all in the
gluster logs (/var/log/glusterfs), and glusterd doesn't start automatically.
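On CentOS 7 a quick first check is whether the systemd unit is enabled at all, and what systemd logged for it at boot; for example:

```shell
# Check and enable automatic start of glusterd under systemd:
systemctl is-enabled glusterd || systemctl enable glusterd
systemctl status glusterd
journalctl -u glusterd -b      # why it failed (or never started) this boot
```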
Hello,
can anybody explain the advantages / disadvantages of Ganesha NFS?
Would you recommend going this way?
(4-node glusterFS)
regs.
Pavel
Hello everybody,
I've had a long-standing issue with my glusterd daemon.
I'm using CentOS 7.1 with the latest updates and gluster packages of version
3.7.3-1 on my hypervisor servers.
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64
glusterfs-geo-replication-3.7.3-1.el7.x86_64
Hi,
:-) Of course, the gluster daemon is started automatically by systemd.
Pa.
On 10.8.2015 19:43, SATHEESARAN wrote:
On 08/10/2015 08:07 PM, p...@email.cz wrote:
Hello everybody,
I've had a long-standing issue with my glusterd daemon.
I'm using CentOS 7.1 with the latest updates and gluster packages from
state.
Pa.
On 10.8.2015 18:21, Niels de Vos wrote:
On Mon, Aug 10, 2015 at 04:37:19PM +0200, p...@email.cz wrote:
Hello everybody,
I've had a long-standing issue with my glusterd daemon.
I'm using CentOS 7.1 with the latest updates and gluster packages of version
3.7.3-1 on my hypervisor servers.
glusterfs
Hello ,
a split-brain happened a few hours ago; how would you determine which copy
is the newest?
# gluster volume heal 1KVM12_P3 info
Brick 1kvm1:/STORAGES/g1r5p3/GFS/
/7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids - Is in split-brain
Number of entries: 1
Brick 1kvm2:/STORAGES/g1r5p3/GFS/
Hello,
we have a big problem with a metadata IO error and a swap issue,
with:
OS =RHEL - 7 - 1.1503.el7.centos.2.8
kernel = 3.10.0 - 229.4.2.el7.x86_64
KVM Version = 2.1.2 - 23.el7_1.3.1
LIBVIRT = libvirt-1.2.8-16.el7_1.3
gluster =
Hello,
please, how can I eliminate this split-brain on:
- centos 7.1
- glusterfs-3.7.1-1.el7.x86_64
# gluster volume heal R2 info
Brick cl1:/R2/R2/
/__DIRECT_IO_TEST__ - Is in split-brain
Number of entries: 1
Brick cl3:/R2/R2/
/__DIRECT_IO_TEST__ - Is in split-brain
Number of entries: 1
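With glusterfs 3.7 a split-brain can be resolved per file from the CLI by choosing a source brick; for example (the choice of cl1 as the good copy is an assumption — pick whichever brick holds the copy you trust):

```shell
# Pick one brick's copy as the good one and heal the file from it:
gluster volume heal R2 split-brain source-brick cl1:/R2/R2 /__DIRECT_IO_TEST__
```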