I'm really sorry, I wrote to the wrong mailing list. :(
On 26 Oct 2016 8:30 AM, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
> As I'm planning some server migrations and a new mail architecture, I
> would like to create an HA cluster
>
> Any advice on which kind of sh
As I'm planning some server migrations and a new mail architecture, I
would like to create an HA cluster.
Any advice on which kind of shared storage should I use? Is Gluster's
performance with small files good enough for Dovecot? Any other solution?
It's mandatory to avoid any split-brains or similar t
Hi Jackie,
"gluster vol bitrot status" should show the corrupted files gfid.
If you want to get the info from logs, it will be logged as below.
[2016-10-26 05:21:20.767774] A [MSGID: 118023]
[bit-rot-scrub.c:246:bitd_compare_ckum] 0-master-bit-rot-0: CORRUPTION
DETECTED: Object /dir1/file1 {Br
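For reference, a hedged sketch of checking this from the CLI and the logs (the volume name "master" is taken from the log line above; exact CLI syntax and log paths can vary by release and distro):

    # show scrub status, including corrupted objects, for the volume "master"
    gluster volume bitrot master scrub status

    # or search the scrubber log directly for corruption entries
    grep "CORRUPTION DETECTED" /var/log/glusterfs/scrub.log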
Top-posting because there are multiple questions.
1. Atin, it is expected to fail if you don't have an RDMA device or if it is
not configured.
2. Rafi and Dennis,
I was not able to determine from the logs whether it really is an RDMA bug. The brick
logs suggest that the brick started and even accepted clients.
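As an aside, a quick hedged way to check whether an RDMA device is present and configured at all (assuming the libibverbs utilities are installed on the bricks):

    # list RDMA devices visible to the verbs layer
    ibv_devices
    # show link state and port details for each device
    ibv_devinfo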
Hi,
Has anyone added 3 Gluster nodes to oVirt? I don't seem to be able to find
much documentation on how to do this, and hence I'm failing.
Hi,
Red Hat documentation says that things will get logged to bitd.log and
scrub.log. These files are pretty big, even when we only take the "E"
log-level lines.
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Detecting_Data_Corruption.html
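For what it's worth, a minimal sketch of extracting only the error-level lines, assuming the default log locations under /var/log/glusterfs:

    # keep only the " E " (error) entries from the bitrot daemon and scrubber logs
    grep ' E ' /var/log/glusterfs/bitd.log /var/log/glusterfs/scrub.log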
Your volumes named "group0" and "group1" are not replicating, according
to the volume info you included in your original email. They're both
distribute volumes with no replication.
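As a sketch of the difference (hostnames and brick paths below are placeholders, not taken from your setup), replication has to be requested explicitly at creation time with the "replica" keyword:

    # distribute-only: files are spread across the bricks, nothing is mirrored
    gluster volume create group0 server1:/bricks/b0 server2:/bricks/b0

    # replicated: every file is stored on both bricks
    gluster volume create group0 replica 2 server1:/bricks/b0 server2:/bricks/b0

"gluster volume info group0" would then report "Type: Replicate" instead of "Type: Distribute".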
On 10/20/2016 08:55 PM, Cory Sanders wrote:
Niels,
Thanks for your answer. Can you look at the du examples belo
This is my first attempt at running Gluster, and so far it's not going
well. I've got a cluster of 150 machines (this is in a university
environment) that previously all mounted an NFS share from the
cluster's head node. To make the cluster more expandable, and theoretically
increase file I/O
There were no meetings on Oct 11 or Oct 18 due to the small number of
attendees. There is no meeting next week (Nov 1) due to a holiday in
Bangalore. The next meeting will be Nov 8th.
Please find the minutes of today's Gluster Community Bug Triage meeting
at the links posted below.
Minutes:
https://me
Hello,
You may have three packages:
centos-release-gluster36
centos-release-gluster37
centos-release-gluster38
from the CentOS Extras repository.
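A minimal sketch of the usual flow with one of those packages (using the 3.8 one as the example):

    # pull in the CentOS Storage SIG repository definition for GlusterFS 3.8
    yum install centos-release-gluster38
    # then install the Gluster packages from that repo
    yum install glusterfs-server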
- Original Message -
From: "aparna"
To: "gluster-users"
Sent: Monday, October 24, 2016 4:29:27 PM
Subject: [Gluster-users] Epel Repo Link not
On 10/25/2016 05:42 PM, Gandalf Corvotempesta wrote:
2016-10-24 16:13 GMT+02:00 Niels de Vos :
Yes, correct. But note that different filesystems can handle bad sectors
differently, read-only filesystems is the most common default though.
In 'man 8 mount' the option "errors=" describes the diffe
2016-10-24 16:13 GMT+02:00 Niels de Vos :
> Yes, correct. But note that different filesystems can handle bad sectors
> differently, read-only filesystems is the most common default though.
>
> In 'man 8 mount' the option "errors=" describes the different values for
> ext2/3/4. Configuring it to "co
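A hedged example of where that option ends up, either on the command line or in /etc/fstab (the device and mount point are placeholders):

    # remount the filesystem read-only when an error is hit
    mount -o errors=remount-ro /dev/sdb1 /bricks/b0

    # the same in /etc/fstab; "continue" ignores errors, "panic" halts the machine
    /dev/sdb1  /bricks/b0  ext4  defaults,errors=remount-ro  0 2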
Thank you, everyone!
On Tue, 2016-10-25 at 14:27 +0530, Pranith Kumar Karampuri wrote:
>
>
> On Tue, Oct 25, 2016 at 2:07 PM, Oleksandr Natalenko
> wrote:
> > Hello.
> >
> > 25.10.2016 10:08, Maxence Sartiaux wrote:
> > > I need to migrate an old 2-node cluster to a Proxmox cluster with
> > > a
Hello,
some time ago it was reported that 'configure' failed (at least on some
distros) on 3.8.1 from the official .tgz (see
https://www.gluster.org/pipermail/gluster-users.old/2016-August/027835.html
for the corresponding discussion).
The solution is simple: calling ./autogen.sh beforehand solved the pro
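For anyone hitting the same thing, the workaround amounts to roughly this (configure options are up to you):

    # regenerate the autotools files before configuring the 3.8.1 tarball
    ./autogen.sh
    ./configure
    make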
It's a +1 from me; however, this model will only be successful if we
diligently provide component updates.
On Tuesday 25 October 2016, Kaushal M wrote:
> On Fri, Oct 21, 2016 at 11:46 AM, Kaushal M wrote:
> > On Thu, Oct 20, 2016 at 8:09 PM, Amye Scavarda wrote:
> >>
> >>
> >> On Thu, Oct 20
On Mon, Oct 24, 2016 at 7:59 PM, aparna wrote:
> Hi All,
>
> Just wondering if someone can help me. I was trying to access the below link
> :
>
> Link:
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
>
> But didn't find anything. Looking forward for your re
On Fri, Oct 21, 2016 at 11:46 AM, Kaushal M wrote:
> On Thu, Oct 20, 2016 at 8:09 PM, Amye Scavarda wrote:
>>
>>
>> On Thu, Oct 20, 2016 at 7:06 AM, Kaushal M wrote:
>>>
>>> Hi All,
>>>
>>> Our weekly community meetings have become mainly one hour of status
>>> updates. This just drains the life
Hi All,
Just wondering if someone can help me. I was trying to access the below
link:
Link:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
But didn't find anything. Looking forward to your response.
Thanks
Aparna
I forgot to mention that with the first approach we need a separate tier
add-brick parser. If we add these changes to the existing add-brick parser,
then the major changes are:
The word counts for a normal add-brick and a tier add-brick are totally different.
As the word count messes up, we need to p
On Tue, Oct 25, 2016 at 2:07 PM, Oleksandr Natalenko <
oleksa...@natalenko.name> wrote:
> Hello.
>
> 25.10.2016 10:08, Maxence Sartiaux wrote:
>
>> I need to migrate an old 2-node cluster to a Proxmox cluster with a
>> replicated Gluster storage between those two (and a third arbiter
>> node).
>>
Hello.
25.10.2016 10:08, Maxence Sartiaux wrote:
I need to migrate an old 2-node cluster to a Proxmox cluster with a
replicated Gluster storage between those two (and a third arbiter
node).
I'd like to create a volume with a single node, migrate the data on
this volume from the old server and th
On Tue, Oct 25, 2016 at 1:38 PM, Maxence Sartiaux wrote:
> Hello,
>
> I need to migrate an old 2-node cluster to a Proxmox cluster with a
> replicated Gluster storage between those two (and a third arbiter node).
>
> I'd like to create a volume with a single node, migrate the data on this
> volume
Hello,
I need to migrate an old 2-node cluster to a Proxmox cluster with replicated
Gluster storage between those two (and a third arbiter node).
I'd like to create a volume with a single node, migrate the data on this volume
from the old server and then reinstall the second server and add t
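As a rough sketch of that grow-from-one-node path (hostnames and brick paths are placeholders, and a Gluster version that supports changing the replica count and adding an arbiter via add-brick is assumed):

    # 1. start with a single-brick volume on the first new node
    gluster volume create vmdata new1:/bricks/vmdata force
    gluster volume start vmdata
    # ... mount it and migrate the data from the old servers ...

    # 2. after reinstalling the second server, grow it to a 2-way replica
    gluster volume add-brick vmdata replica 2 new2:/bricks/vmdata

    # 3. add the arbiter brick on the third node and let self-heal sync it
    gluster volume add-brick vmdata replica 3 arbiter 1 arb1:/bricks/vmdata
    gluster volume heal vmdata full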
Hello.
25.10.2016 09:11, Pavel Cernohorsky wrote:
Unfortunately it is not
possible to use valgrind properly, because libgfapi seems to leak just
by initializing and deinitializing (tested with different code).
Use Valgrind with the Massif tool. That would definitely help.
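A hedged sketch of such a run (the binary name is a placeholder for your test program):

    # profile heap usage over time rather than looking for leaks
    valgrind --tool=massif ./gfapi_test
    # turn the snapshot file valgrind writes into a readable table
    ms_print massif.out.<pid>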
Hello,
I was experimenting with libgfapi a bit and found something which I
cannot explain.
I wanted to simulate the behavior of a long-running system, so I created a
simple program which reads a file from a Gluster volume, saves it under a
new name, deletes the original file, and prints out the mem