Hello,
I have a pretty straightforward configuration, as below:
3 storage nodes running version 3.7.11 with replica 3, using native
Gluster NFS.
corosync version 1.4.7 and pacemaker version 1.1.12
I have DNS round-robin on 3 VIPs living on the 3 storage nodes.
Here is how I configure
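(The message is truncated here. A typical Pacemaker setup for three such
VIPs, sketched with pcs and placeholder addresses rather than the poster's
actual values, looks roughly like:)

# pcs resource create vip1 ocf:heartbeat:IPaddr2 ip=192.0.2.101 cidr_netmask=24 op monitor interval=10s
# pcs resource create vip2 ocf:heartbeat:IPaddr2 ip=192.0.2.102 cidr_netmask=24 op monitor interval=10s
# pcs resource create vip3 ocf:heartbeat:IPaddr2 ip=192.0.2.103 cidr_netmask=24 op monitor interval=10s

DNS round-robin then spreads clients across the three VIPs, and Pacemaker
fails a VIP over to a surviving node when its host goes down.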
On Thu, Sep 22, 2016 at 09:58:25AM +0530, Ravishankar N wrote:
> On 09/21/2016 10:54 PM, Pasi Kärkkäinen wrote:
> >Let's see.
> >
> ># getfattr -m . -d -e hex /bricks/vol1/brick1/foo
> >getfattr: Removing leading '/' from absolute path names
> ># file: bricks/vol1/brick1/foo
>
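(The xattr values are truncated in the archive. For reference only, a
healthy file on a replica brick typically shows entries along these lines;
the values below are illustrative placeholders, not the poster's output:)

# file: bricks/vol1/brick1/foo
trusted.afr.vol1-client-0=0x000000000000000000000000
trusted.afr.vol1-client-1=0x000000000000000000000000
trusted.gfid=0x1d4b6c3f9a2e4b5c8d7e6f5a4b3c2d1e

Non-zero trusted.afr.* entries indicate pending heals against the
corresponding replica.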
On 09/22/2016 12:38 PM, Pasi Kärkkäinen wrote:
On Thu, Sep 22, 2016 at 09:58:25AM +0530, Ravishankar N wrote:
On 09/21/2016 10:54 PM, Pasi Kärkkäinen wrote:
Let's see.
# getfattr -m . -d -e hex /bricks/vol1/brick1/foo
getfattr: Removing leading '/' from absolute path names
# file:
Hi Amudhan,
Thanks for the confirmation. If that's the case, please try with a dist-rep
volume and see if you observe similar behavior.
In any case please raise a bug for the same with your observations. We will work
on it.
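(If it helps with the test, a minimal distributed-replicate volume can be
created like so; hostnames and brick paths are placeholders:)

# gluster volume create distrep replica 3 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
    server4:/bricks/b2 server5:/bricks/b2 server6:/bricks/b2
# gluster volume start distrep

This gives a 2 x 3 volume: two distribute subvolumes, each replicated
three ways.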
Thanks and Regards,
Kotresh H R
- Original Message -
>
Hi Kotresh,
It's the same behaviour in a replicated volume also: the file fd opens after
120 seconds in the brick pid.
Calculating the signature for a 100MB file took 15m57s.
How can I increase CPU usage? In your earlier mail you said "To limit
the usage of CPU, throttling is done using token bucket
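(The quoted sentence is truncated in the archive. For context: a token
bucket refills tokens at a fixed rate and lets work proceed only by spending
tokens, which caps average CPU use. Below is a minimal illustrative sketch
of the general technique in C; it is not GlusterFS's actual bitrot code, and
the rates are made up:)

/* token_bucket.c - illustrative token-bucket throttle (general
 * technique only; not the GlusterFS bitrot implementation). */
#include <stdio.h>
#include <time.h>

typedef struct {
    double tokens;          /* tokens currently available   */
    double rate;            /* tokens refilled per second   */
    double burst;           /* bucket capacity (max burst)  */
    struct timespec last;   /* time of the last refill      */
} token_bucket;

/* Refill by elapsed time, then try to spend `cost` tokens.
 * Returns 1 if the work may proceed, 0 if it must wait. */
static int tb_take(token_bucket *tb, double cost)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    double elapsed = (now.tv_sec - tb->last.tv_sec) +
                     (now.tv_nsec - tb->last.tv_nsec) / 1e9;
    tb->last = now;
    tb->tokens += elapsed * tb->rate;
    if (tb->tokens > tb->burst)
        tb->tokens = tb->burst;   /* never exceed the cap */
    if (tb->tokens < cost)
        return 0;                 /* throttled */
    tb->tokens -= cost;
    return 1;
}

int main(void)
{
    /* 4 tokens/s, burst of 8: on average at most 4 chunks of
     * work per second, no matter how fast callers ask.      */
    token_bucket tb = { .tokens = 8, .rate = 4, .burst = 8 };
    clock_gettime(CLOCK_MONOTONIC, &tb.last);
    for (int i = 0; i < 12; i++)
        printf("chunk %2d: %s\n", i,
               tb_take(&tb, 1.0) ? "processed" : "throttled");
    return 0;
}

Raising the refill rate is what would let signing use more CPU; per
Kotresh's reply below, that rate is hard-coded in this version.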
Hi,
It is because your switch is not performing round-robin distribution while
sending data to the server (probably it can't). Usually it is enough to
configure ip-port LACP hashing to evenly distribute traffic across all ports
in the aggregation. But any single TCP connection will still be using only
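(On the Linux side, the hash policy that includes IP and port is selected
along these lines; bond0 and the switch-side equivalent are placeholders
for your environment:)

# echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy

or, in the bonding options, mode=802.3ad xmit_hash_policy=layer3+4. Note
this only balances across connections; a single TCP connection still maps
to one member port.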
Hi Amudhan,
As of now, it's hard-coded based on some testing results. That part is not
tunable yet.
Only scrubber throttling is tunable. As I told you, because the brick
process has an open fd, the bitrot signer process is not picking it up for
scrubbing. Please raise a bug. We will take a
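(For reference, the scrubber throttle mentioned above is set per volume
in 3.7.x:)

# gluster volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive}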
I would like to use Gluster as shared storage for apps deployed
through a PaaS that we are creating.
Currently I'm able to mount a gluster volume on each "compute" node
and then bind-mount a subdirectory from this shared volume into each Docker
app.
Obviously this is not very secure, as was also noted on
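(The reference is cut off. The setup described amounts to roughly the
following; host, volume, and directory names are placeholders:)

# mount -t glusterfs storage1:/shared /mnt/shared
# docker run -d -v /mnt/shared/app1:/data myapp

Any container that can reach /mnt/shared can see every app's subdirectory,
hence the security concern.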
The first preview/dev release of GlusterD-2.0 is available now. A
prebuilt binary is available for download from the release-page[1].
This is just a preview of what has been happening in GD2, to give
users a taste of how GD2 is evolving.
GD2 can now form a cluster, list peers,
Thanks for that advice. It worked. Setting the UUID in glusterd.info was
the bit I missed.
It seemed to work without the setfattr step in my particular case.
On Thu, Sep 22, 2016 at 11:05 AM, Serkan Çoban
wrote:
> Here are the steps for replacing a failed node:
>
>
> 1-
On Thu, Sep 22, 2016 at 07:20:26PM +0530, Pranith Kumar Karampuri wrote:
>On Thu, Sep 22, 2016 at 12:51 PM, Ravishankar N
><[1]ravishan...@redhat.com> wrote:
>
> On 09/22/2016 12:38 PM, Pasi Kärkkäinen wrote:
>
>On Thu, Sep 22, 2016 at 09:58:25AM +0530, Ravishankar N
On Thu, Sep 22, 2016 at 8:33 PM, Pasi Kärkkäinen wrote:
> On Thu, Sep 22, 2016 at 07:20:26PM +0530, Pranith Kumar Karampuri wrote:
> >On Thu, Sep 22, 2016 at 12:51 PM, Ravishankar N
> ><[1]ravishan...@redhat.com> wrote:
> >
> > On 09/22/2016 12:38 PM, Pasi Kärkkäinen
Here are the steps for replacing a failed node:
1- In one of the other servers run "grep thaila
/var/lib/glusterd/peers/* | cut -d: -f1 | cut -d/ -f6" and note the
UUID
2- stop glusterd on failed server and add "UUID=uuid_from_previous
step" to /var/lib/glusterd/glusterd.info and start glusterd
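(As a concrete sketch of those two steps, assuming 'thaila' is the failed
node's hostname as above, with the UUID as a placeholder:)

(on one of the healthy servers)
# grep thaila /var/lib/glusterd/peers/* | cut -d: -f1 | cut -d/ -f6
(on the replacement server)
# service glusterd stop
# sed -i 's/^UUID=.*/UUID=<uuid-from-step-1>/' /var/lib/glusterd/glusterd.info
# service glusterd start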
I set up a dispersed volume with 1 x (3 + 1) nodes (I do know that 3+1 is
not optimal).
Originally created in version 3.7 but recently upgraded without issue to
3.8.
# gluster vol info
Volume Name: rvol
Type: Disperse
Volume ID: e8f15248-d9de-458e-9896-f1a5782dcf74
Status: Started
Snapshot
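(The vol info output is cut off above. For reference, a 1 x (3 + 1)
disperse volume like this one is created with a command along these lines;
hostnames and brick paths are placeholders:)

# gluster volume create rvol disperse 4 redundancy 1 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1

disperse 4 redundancy 1 gives 3 data bricks plus 1 redundancy brick, so
any single brick can be lost.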
Hi Kotresh,
I have raised bug.
https://bugzilla.redhat.com/show_bug.cgi?id=1378466
Thanks
Amudhan
On Thu, Sep 22, 2016 at 2:45 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Amudhan,
>
> It's as of now, hard coded based on some testing results. That part is not
>
On Thu, Sep 22, 2016 at 12:51 PM, Ravishankar N
wrote:
> On 09/22/2016 12:38 PM, Pasi Kärkkäinen wrote:
>
>> On Thu, Sep 22, 2016 at 09:58:25AM +0530, Ravishankar N wrote:
>>
>>> On 09/21/2016 10:54 PM, Pasi Kärkkäinen wrote:
>>>
Let's see.
# getfattr -m .