*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284
On Thu, May 19, 2016 at 7:25 PM, Kevin Lemonnier
wrote:
> The I/O errors are happening after, not during the heal.
> As described, I just rebooted a node, waited for the heal to finish,
>
The I/O errors are happening after, not during the heal.
As described, I just rebooted a node, waited for the heal to finish,
rebooted another, waited for the heal to finish then rebooted the third.
From that point on, the VM shows a lot of I/O errors whenever I use the
disk heavily.
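For anyone following the same procedure, a minimal sketch of how to confirm a heal has actually finished before rebooting the next node (the volume name VMS is hypothetical):
# List entries still pending heal on each brick;
# an empty list everywhere means the heal is complete.
gluster volume heal VMS info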
Hi,
A disperse volume was configured on servers with limited network
bandwidth. Some of the read operations failed with error
[2016-05-16 18:38:36.035559] E [MSGID: 122034]
[ec-common.c:461:ec_child_select] 0-SDSStoragePool-disperse-2: Insufficient
available childs for this request
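For context, a disperse volume with redundancy r needs at least (brick count - r) bricks reachable for every operation; the ec_child_select error above means fewer than that responded, which slow links can easily cause. A hypothetical 4+2 layout for illustration:
# 6 bricks, redundancy 2: any 2 bricks may be unreachable,
# but reads and writes still need at least 4 healthy ones.
gluster volume create ec-vol disperse 6 redundancy 2 server{1..6}:/bricks/b1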
I am slightly confused: you say you have image file corruption, but then you
say that qemu-img check reports no corruption. If what you mean is that you
see I/O errors during a heal, this is likely due to I/O starvation, which is
a well-known issue.
There is work happening to
On 19/05/2016 17:33, Jesper Led Lauridsen TS Infra server wrote:
If you want to locate it, this is how you do it :
1) ls -i
/bricks/brick1/glu_rhevtst_dr2_data_01/.glusterfs/32/5c/325ccd9f-a7f1-4ad0-bfc8-6d4b73930b9f
on glustoretst02.net.dr.dk
2) use find on the brick with the inode number you get to locate the real file
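Spelled out as commands, assuming the file still has its second hard link on that brick:
# Step 1: the .glusterfs entry is a hard link to the real file; get its inode.
ls -i /bricks/brick1/glu_rhevtst_dr2_data_01/.glusterfs/32/5c/325ccd9f-a7f1-4ad0-bfc8-6d4b73930b9f
# Step 2: locate the other hard link (the named file) by that inode number.
find /bricks/brick1/glu_rhevtst_dr2_data_01/ -inum <INODE_FROM_STEP_1>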
I tried posting this to the ovirt-users list but got no response, so I'll try
here too.
I just set up a new ovirt cluster with gluster & nfs data domains.
VMs on the NFS domain startup with no issues.
VMs on the gluster domains complain of "Permission denied" on startup.
2016-05-17 14:14:51,959
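A commonly suggested check for "Permission denied" on gluster-backed oVirt domains: oVirt runs qemu as vdsm:kvm (uid/gid 36), so the volume usually needs matching ownership options. A hedged sketch, with data01 as a hypothetical volume name:
# Make files on the volume owned by oVirt's vdsm:kvm (36:36).
gluster volume set data01 storage.owner-uid 36
gluster volume set data01 storage.owner-gid 36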
>
> From: Anuradha Talur [ata...@redhat.com]
> Sent: 19 May 2016 14:59
> To: Jesper Led Lauridsen TS Infra server
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] heal info report a gfid
>
> - Original Message -
> > From: "Jesper
On 19/05/2016 12:06, Anuradha Talur wrote:
- Original Message -
From: "Atin Mukherjee"
To: "CHEVALIER Pierre" , gluster-users@gluster.org
Cc: "Pranith Kumar Karampuri" , "Ravishankar N"
,
On 18 May 2016 19:31, "Kevin Lemonnier" wrote:
> Seems like a non issue, you are planning in using replica right ?
Yes, but what about a gluster bug?
Replica protects against hardware failure, but software can fail too.
What if the sharding algorithm were
On 19/05/2016 12:17 AM, Lindsay Mathieson wrote:
One thought: since the VMs are active while the brick is
removed/re-added, could the shards written while the brick is being
added be the reverse-healing shards?
I tested by:
- removing brick 3
- erasing brick 3
- closing
- Original Message -
> From: "CHEVALIER Pierre"
> To: "Anuradha Talur"
> Cc: gluster-users@gluster.org, "Pranith Kumar Karampuri"
> , "Ravishankar N"
> , "Atin Mukherjee"
>
- Original Message -
> From: "Jesper Led Lauridsen TS Infra server"
> To: gluster-users@gluster.org
> Sent: Thursday, May 19, 2016 2:49:33 PM
> Subject: [Gluster-users] heal info report a gfid
>
> Hi,
>
> I have a replicated volume where "gluster volume heal info" reports
It still does not work.
I need to copy /var/lib/glusterd/geo-replication/secret.* to /root/.ssh/id_rsa to
make passwordless ssh work.
I generated the /var/lib/glusterd/geo-replication/secret.pem file on every master
node.
I am not sure whether this is right.
[root@sh02svr5956 ~]# gluster volume
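One way to test whether the geo-replication key works by itself, without copying it over /root/.ssh/id_rsa (the slave hostname below is hypothetical):
# gsyncd passes this key to ssh explicitly, so test the same way.
ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slavehost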
- Original Message -
> From: "Atin Mukherjee"
> To: "CHEVALIER Pierre" , gluster-users@gluster.org
> Cc: "Pranith Kumar Karampuri" , "Ravishankar N"
> , "Anuradha Talur"
>
>
Hi,
I have a replicated volume where "gluster volume heal info" reports a
GFID only on one of the bricks.
The GFID refers to this file, but I can't locate the file on the brick
on glustertst01 or on a mounted volume
File =
Hi,
Could you just try 'create force' once to fix those status file errors?
e.g., 'gluster volume geo-rep :: create push-pem force'
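With names filled in purely for illustration (mastervol and slavehost::slavevol are hypothetical), the full command would look like:
gluster volume geo-replication mastervol slavehost::slavevol create push-pem force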
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "vyyy杨雨阳"
> To: "Saravanakumar Arumugam" ,
>
I have checked all the nodes, both masters and slaves; the software is the
same.
I am puzzled why half of the masters work and half are faulty.
[admin@SVR6996HW2285 ~]$ rpm -qa |grep gluster
glusterfs-api-3.6.3-1.el6.x86_64
glusterfs-fuse-3.6.3-1.el6.x86_64
Hi,
+geo-rep team.
Can you check the gluster version you are using?
# For example:
rpm -qa | grep gluster
I hope you have the same gluster version installed everywhere.
Please double-check and share the output.
Thanks,
Saravana
On 05/19/2016 01:37 PM, vyyy杨雨阳 wrote:
Hi, Saravana
I have changed
OMG, I was trying to "tar xf gcc-6.1.0.tar.bz2" as an "example source
tarball" to demonstrate the bug, but the untar alone has already taken me an hour...
On 5/19/2016 3:01 PM, Chen Chen wrote:
Hi everyone,
I observed "make" failed with "No targets. Stop." on Gluster NFS.
Every sequencing "make" command
Hi, Saravana
I have changed the log level to DEBUG, then started geo-replication with the
log-file option; the log file is attached.
gluster volume geo-replication filews
glusterfs01.sh3.ctripcorp.com::filews_slave start --log-file=geo.log
I have checked /root/.ssh/authorized_keys in
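As an aside, the session state can also be checked directly with the same master/slave pair as in the start command above (a sketch of the standard status query):
# Shows each brick's session state, e.g. Active, Passive or faulty.
gluster volume geo-replication filews glusterfs01.sh3.ctripcorp.com::filews_slave status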
Hi everyone,
I observed "make" failed with "No targets. Stop." on Gluster NFS.
Every sequencing "make" command will progress through one or more
subdirectories, and finally stop randomly. When I move the BUILDDIR to a
local disk, make success without error.
I suppose some performance
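One way to test an attribute-caching theory (a diagnostic sketch only; server, volume and mount point are hypothetical):
# noac disables NFS attribute caching so make always sees fresh mtimes;
# if the random stops disappear, stale attribute caching is the likely cause.
mount -t nfs -o vers=3,noac server:/volume /mnt/build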
On Thu, May 19, 2016 at 11:42 AM, Raghavendra Talur wrote:
>
>
> On Thu, May 19, 2016 at 11:39 AM, Kaushal M wrote:
>>
>> On Thu, May 19, 2016 at 11:35 AM, Kaushal M wrote:
>> > On Thu, May 19, 2016 at 11:29 AM, Raghavendra Talur
On Thu, May 19, 2016 at 11:39 AM, Kaushal M wrote:
> On Thu, May 19, 2016 at 11:35 AM, Kaushal M wrote:
> > On Thu, May 19, 2016 at 11:29 AM, Raghavendra Talur
> wrote:
> >>
> >>
> >> On Thu, May 19, 2016 at 11:13 AM, Kaushal M
On Thu, May 19, 2016 at 11:35 AM, Kaushal M wrote:
> On Thu, May 19, 2016 at 11:29 AM, Raghavendra Talur wrote:
>>
>>
>> On Thu, May 19, 2016 at 11:13 AM, Kaushal M wrote:
>>>
>>> I'm in favour of a stable release every 2 months and
On Thu, May 19, 2016 at 11:29 AM, Raghavendra Talur wrote:
>
>
> On Thu, May 19, 2016 at 11:13 AM, Kaushal M wrote:
>>
>> I'm in favour of a stable release every 2 months and an LTS once a
>> year (option 2).
>
>
> +1
>
>>
>>
>> As Oleksander already
On Thu, May 19, 2016 at 11:13 AM, Kaushal M wrote:
> I'm in favour of a stable release every 2 months and an LTS once a
> year (option 2).
>
+1
>
> As Oleksander already suggested, I'm in favour of having well defined
> merge windows, freeze dates and testing period.
> (A