Hi Atin,
I think the root cause is in the function glusterd_import_friend_volume as
below.
int32_t
glusterd_import_friend_volume (dict_t *peer_data, size_t count)
{
...
        ret = glusterd_volinfo_find (new_volinfo->volname, &old_volinfo);
        if (0 == ret) {
                /* old volinfo found: the stale-volume cleanup here is
                   what ends up removing the on-disk info and bricks/* */
                (void) glusterd_delete_stale_volume (old_volinfo, new_volinfo);
        }
...
Hi Atin,
I have now found that the info and bricks/* files are removed by the function
glusterd_delete_stale_volume().
But I do not yet know how to solve this issue.
Thanks,
Xin
On 2016-11-15 12:07:05, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 8:58 AM, songxin
Hi,
Could you please restart glusterd in DEBUG mode and share the glusterd logs?
* Start glusterd in DEBUG mode as follows:
#glusterd -LDEBUG
* Stop the volume:
#gluster vol stop
Then share the glusterd logs.
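For reference, here is a minimal way to collect the logs afterwards; it assumes
the default log directory /var/log/glusterfs, so adjust the path if you log
elsewhere:
# bundle the glusterd log(s) for sharing after reproducing the issue
# (assumes the default log directory /var/log/glusterfs)
tar czf glusterd-debug-logs.tar.gz /var/log/glusterfs/*glusterd*.log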
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Chao-Ping
Hi Atin,
I have two nodes, node A and node B, on which I create a replicate volume and
then start the volume.
I then run the script below on node B.
#!/bin/bash
i=1
while(($i<100))
do
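The script is cut off here in the digest. Purely as an assumed illustration of
the kind of stress loop involved (the real loop body is not shown above), it
might repeatedly restart glusterd on node B, e.g.:
#!/bin/bash
# assumed, illustrative loop body only -- not the original script
i=1
while (($i<100))
do
    systemctl restart glusterd    # or: pkill glusterd; glusterd
    sleep 2
    i=$(($i+1))
done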
On Tue, Nov 15, 2016 at 8:58 AM, songxin wrote:
> Hi Atin,
> I have some clues about this issue.
> I could reproduce this issue using the script mentioned in
> https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .
>
I really appreciate your help in trying to nail down
Hi Atin,
I have some clues about this issue.
I could reproduce this issue using the script mentioned in
https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .
After I added some debug prints, like the ones below, in glusterd-store.c, I
found that the /var/lib/glusterd/vols/xxx/info and
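For anyone following along, the files in question can be checked on the
affected node with something like this ("xxx" stands for the volume name, as
in the path above):
# check whether the volume's on-disk metadata is still present
ls -l /var/lib/glusterd/vols/xxx/info
ls -l /var/lib/glusterd/vols/xxx/bricks/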
Hi all!
It's that time again, it's our annual community survey.
Please send this link out so that we can get better feedback from our users
+ overall community.
https://www.surveymonkey.com/r/gluster2016
Thanks!
- amye
--
Amye Scavarda | a...@redhat.com | Gluster Community Lead
Hi,
I'm using glusterfs geo-replication on version 3.7.11. One of the bricks
becomes faulty and does not replicate to the slave bricks after I start the
geo-replication session.
Following are the logs related to the faulty brick; can someone please
advise me on how to resolve this issue?
[2016-06-11
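In case it helps while debugging, the per-brick state of the session can be
checked with the geo-replication status command; MASTERVOL, SLAVEHOST and
SLAVEVOL below are placeholders for your actual volume and slave names:
# show per-brick state of the geo-replication session (faulty bricks show up as "Faulty")
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail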
On 14 Nov 2016 7:28 PM, "Joe Julian" wrote:
>
> IMHO, if a command will result in data loss, fail it. Period.
>
> It should never be ok for a filesystem to lose data. If someone wanted to
do that with ext or xfs they would have to format.
>
Exactly. I've written
Though remove-brick is not a usual operation we would perform on a Gluster
volume, it has consistently failed, ending in a corrupted Gluster volume after
sharding has been turned on. Bug 1387878 is very similar to what I
had encountered in the ESXi world. Add-brick would run successfully, but
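For context, the remove-brick flow being discussed is the usual
start/status/commit sequence; VOLNAME and server1:/bricks/b1 below are
placeholders (and for replicated volumes the replica count argument is also
needed):
# typical remove-brick flow
gluster volume remove-brick VOLNAME server1:/bricks/b1 start
gluster volume remove-brick VOLNAME server1:/bricks/b1 status
gluster volume remove-brick VOLNAME server1:/bricks/b1 commit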
Features and stability are not mutually exclusive.
Sometimes instability is cured by adding a feature.
Fixing a bug is not something that's solved better by having more developers
work on it.
Sometimes fixing one bug exposes a problem elsewhere.
Using free open source community projects
IMHO, if a command will result in data loss, fail it. Period.
It should never be ok for a filesystem to lose data. If someone wanted to do
that with ext or xfs they would have to format.
On November 14, 2016 8:15:16 AM PST, Ravishankar N
wrote:
>On 11/14/2016 05:57
Hi,
Hope someone can point me to how to do this.
I want to delete a volume but am not able to do so, because glusterfs keeps
reporting that there is a geo-replication setup which does not seem to exist
any more when I issue the stop command.
On a Red Hat 7.2 kernel: 3.10.0-327.36.3.el7.x86_64
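In case it helps, the usual sequence is to stop (force, if needed) and delete
the stale geo-replication session first, and only then stop and delete the
volume; MASTERVOL, SLAVEHOST and SLAVEVOL below are placeholders:
# list any geo-replication sessions glusterd still knows about
gluster volume geo-replication status
# stop the stale session (add "force" if a normal stop is refused), delete it,
# then stop and delete the volume
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL stop force
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL delete
gluster volume stop MASTERVOL
gluster volume delete MASTERVOL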
2016-11-14 17:01 GMT+01:00 Vijay Bellur :
> Accessing sharded data after disabling sharding is something that we
> did not visualize as a valid use case at any point in time. Also, you
> could access the contents by enabling sharding again. Given these
> factors I think this
2016-11-14 16:55 GMT+01:00 Krutika Dhananjay :
> The only way to fix it is to have sharding be part of the graph *even* if
> disabled,
> except that in this case, its job should be confined to aggregating the
> already
> sharded files during reads but NOT to sharding new files that
On 11/14/2016 05:57 PM, Atin Mukherjee wrote:
This would be a straightforward thing to implement at glusterd,
anyone up for it? If not, we will take this into consideration for
GlusterD 2.0.
On Mon, Nov 14, 2016 at 10:28 AM, Mohammed Rafi K C
On Mon, Nov 14, 2016 at 8:54 AM, Niels de Vos wrote:
> On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote:
> > On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
> > gandalf.corvotempe...@gmail.com> wrote:
> >
> > > 2016-11-14 11:50 GMT+01:00 Pranith
On Mon, Nov 14, 2016 at 10:38 AM, Gandalf Corvotempesta
wrote:
> 2016-11-14 15:54 GMT+01:00 Niels de Vos :
>> Obviously this is unacceptable for versions that have sharding as a
>> functional (not experimental) feature. All supported features
Yes. I apologise for the delay.
Disabling sharding would knock the translator itself off the client stack,
and given that sharding is the actual (and the only) translator that has the
knowledge of how to interpret sharded files, and how to aggregate them,
removing the translator from the stack
Hello Gluster Community
We have two brick nodes running with replication for a volume gv0, for which we set
"gluster volume set gv0 ping-timeout 20".
In our tests there seems to be an unexplained delay with this ping-timeout - we see it
timing out much later, after about 35 seconds, and not at around 20
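For what it's worth, you can confirm the value that was actually applied (the
full option name is network.ping-timeout) with:
# check which ping-timeout value the volume actually has configured
gluster volume info gv0 | grep -i ping-timeout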
On Mon, Nov 14, 2016 at 8:24 PM, Niels de Vos wrote:
> On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote:
> > On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
> > gandalf.corvotempe...@gmail.com> wrote:
> >
> > > 2016-11-14 11:50 GMT+01:00 Pranith
2016-11-14 15:54 GMT+01:00 Niels de Vos :
> Obviously this is unacceptable for versions that have sharding as a
> functional (not experimental) feature. All supported features are
> expected to function without major problems (like corruption) for all
> standard Gluster
On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote:
> On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
> > 2016-11-14 11:50 GMT+01:00 Pranith Kumar Karampuri :
> > > To make gluster stable for VM images
Dear Team,
In the event of a failure of master1, the glusterfs home directory on master2
will become a read-only filesystem.
If we manually shut down master2, then there is no impact on the file
system and all I/O operations complete without any problem.
Can you please provide some guidance to
On Mon, Nov 14, 2016 at 6:16 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 14 Nov 2016 13:27, "Atin Mukherjee" wrote:
> >
> > This would be a straightforward thing to implement at glusterd, anyone
> up for it? If not, we will take this into
On 14 Nov 2016 13:27, "Atin Mukherjee" wrote:
>
> This would be a straightforward thing to implement at glusterd, anyone
up for it? If not, we will take this into consideration for GlusterD 2.0.
>
I would prefer an additional parameter to the CLI or a confirmation,
This would be a straightforward thing to implement at glusterd, anyone up
for it? If not, we will take this into consideration for GlusterD 2.0.
On Mon, Nov 14, 2016 at 10:28 AM, Mohammed Rafi K C
wrote:
> I think it is worth implementing a lock option.
>
> +1
>
>
> Rafi
hi guys,
should rdiff-backup struggle to back up a glusterfs mount?
I'm trying glusterfs and was hoping, expecting I could keep
on rdiff-backing up data. I back up directly to
local (non-gluster) storage (xfs) and get this:
$ rdiff-backup --exclude-other-filesystems
--exclude-symbolic-links
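The command is cut off above; for reference, the general shape of such an
invocation is as follows (the source and destination paths are placeholders):
# back up a glusterfs mount to local xfs storage
rdiff-backup --exclude-other-filesystems --exclude-symbolic-links \
    /mnt/glustervol /backup/glustervol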
2016-11-14 12:51 GMT+01:00 Lindsay Mathieson :
> Of course if you're running a replica volume, non-dispersed, you should
> only need to do lookups locally. It would be interesting to know if that's an
> optimization gluster does.
I have a replica 2 with only 2 bricks,
On 14/11/2016 9:00 PM, Gandalf Corvotempesta wrote:
Can someone explain to me why Lizard is 10 times faster than gluster?
This is not a flame; I would only like to know the technical
differences between these two pieces of software
It's my understanding that with many/small file operations involving
Hi Gandalf,
Can you provide more information about your setup?
How many nodes? What disk sizes? Are they VMs or physical machines? What is the
speed of the network?
What OS are you running Lizard on, and finally how are the disks set up?
We use MooseFS, Nexenta, Gluster and Ceph here, and in
On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-11-14 11:50 GMT+01:00 Pranith Kumar Karampuri :
> > To make gluster stable for VM images we had to add all these new features
> > and then fix all the bugs Lindsay/Kevin
2016-11-14 11:50 GMT+01:00 Pranith Kumar Karampuri :
> To make gluster stable for VM images we had to add all these new features
> and then fix all the bugs Lindsay/Kevin reported. We just fixed a corruption
> issue that can happen with replace-brick which will be available in
Which data corruption issue is this? Could you point me to the bug report
on bugzilla?
-Krutika
On Sat, Nov 12, 2016 at 4:28 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 12 Nov 2016 10:21, "Kevin Lemonnier" wrote:
> > We've had a lot of
On Sat, Nov 12, 2016 at 4:28 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 12 Nov 2016 10:21, "Kevin Lemonnier" wrote:
> > We've had a lot of problems in the past, but at least for us 3.7.12 (and
> 3.7.15)
> > seems to be working pretty well