Are you sure using conv=sync is what you want? I normally use conv=fdatasync,
I'll look up the difference between the two and see if it affects your test.
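For reference, the difference is easy to demonstrate: conv=sync pads every short *input* block with zeros up to ibs, so it changes the data actually written (and inflates throughput numbers), while conv=fdatasync leaves the data alone and issues a single fdatasync(2) before dd exits, so the timing honestly includes the flush to stable storage. A quick sketch (file names are just examples):

```shell
# Write a 5-byte file through dd with each flag and compare the results.
printf 'hello' > in.txt

# conv=sync: the short 5-byte read is padded with NULs up to ibs (512 bytes).
dd if=in.txt of=out_sync.bin ibs=512 conv=sync 2>/dev/null

# conv=fdatasync: data is copied as-is; dd fdatasyncs the output before exiting.
dd if=in.txt of=out_fdatasync.bin ibs=512 conv=fdatasync 2>/dev/null

wc -c out_sync.bin out_fdatasync.bin   # 512 bytes vs 5 bytes
```

So for a write benchmark, conv=fdatasync is usually the one you want.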
-b
----- Original Message -----
> From: "Pat Haley"
> To: "Pranith Kumar Karampuri"
> Cc:
> Is it possible that this matches your observations?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, then each SHD thread will
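For what it's worth, disperse.shd-max-threads is a per-volume option that can be inspected and changed from the CLI; a minimal sketch (the volume name "myvol" is a placeholder):

```shell
# Show the current value on a hypothetical volume "myvol".
gluster volume get myvol disperse.shd-max-threads

# Allow each self-heal daemon to heal more entries in parallel.
gluster volume set myvol disperse.shd-max-threads 2
```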
Joe,
Agree with you on turning this around into something more positive.
One aspect that would really help us decide on our next steps here is the
actual number of deployments that will be affected by the removal of the
gluster driver in Cinder. If you are running or aware of a deployment of
On Thu, Jun 01, 2017 at 01:52:23PM +, Gabriel Lindeborg wrote:
> This has been solved, as far as we can tell.
>
> Problem was with KillUserProcesses=1 in logind.conf. This turned out to
> kill mounts made using mount -a, both by root and by any user with
> sudo, at session logout.
Ah, yes,
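For anyone hitting the same thing, the relevant knob lives in logind.conf; a sketch of turning it off (paths are the systemd defaults):

```shell
# In /etc/systemd/logind.conf, under the [Login] section, set:
#   KillUserProcesses=no
# then pick up the change:
sudo systemctl restart systemd-logind
```

With that set, logind no longer reaps the session's processes (and their mounts) at logout.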
Hi,
Here are some top reminders for the 3.12 release:
1) When 3.12 is released, 3.8 will be EOL'd, hence users are encouraged
to prepare for the same as per the calendar posted here.
2) 3.12 is a long term maintenance (LTM) release, and potentially the
last in the 3.x line of Gluster!
3)
On 06/01/2017 06:29 AM, lejeczek wrote:
hi everybody
I'd like to ask before I migrate - any issues with upping to 3.9.x from
3.8.12 ?
Anything especially important in changelog?
Or maybe go to 3.10.x ? For it has something great & new?
3.9 was a Short Term Maintenance (STM) release. It
Hi Niels,
I have backported that patch to Gluster 3.7.6 and we haven't seen any other
issue due to it. Everything has been fine so far in our testing, which is
still running extensively.
Regards,
Abhishek
On Thu, Jun 1, 2017 at 1:46 PM, Niels de Vos wrote:
> On Thu, Jun 01,
On Thu, Jun 01, 2017 at 01:03:25PM +0530, ABHISHEK PALIWAL wrote:
> Hi Niels,
>
> No problem, we will try to backport that patch to 3.7.6.
>
> Could you please let me know in which release the Gluster community is
> going to provide this patch, and the date of that release?
It really depends on when
Hi all,
thank you very much for support! I filed the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1457724
I'll try to test it again to get some errors / warnings from log.
Best regards,
Jan
On Wed, May 31, 2017 at 12:25 PM, Kaleb S. KEITHLEY
wrote:
> On 05/31/2017
Hi Niels,
No problem, we will try to backport that patch to 3.7.6.
Could you please let me know in which release the Gluster community is
going to provide this patch, and the date of that release?
Regards,
Abhishek
On Wed, May 31, 2017 at 10:05 PM, Niels de Vos wrote:
> On Wed, May
Hi Serkan,
On 30/05/17 10:22, Serkan Çoban wrote:
Ok, I understand that the heal operation takes place on the server side. In
this case I should see X KB of outgoing network traffic from each of the 16
servers and 16X KB of incoming traffic to the failed brick server, right?
So that process will get 16 chunks, recalculate our
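The arithmetic in that question can be sketched as follows (16 data bricks is the geometry from the thread; the chunk size is hypothetical):

```shell
# Assumed: each of 16 healthy data bricks sends one chunk of X KB to the
# healing server, which therefore receives 16*X KB per healed stripe.
X_KB=128          # hypothetical chunk size per brick, in KB
DATA_BRICKS=16
IN_KB=$(( X_KB * DATA_BRICKS ))
echo "outbound per healthy brick: ${X_KB} KB"
echo "inbound to healing server:  ${IN_KB} KB"
```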
Hi
We have a Replica 2 + Arbiter Gluster setup with 3 nodes Server1,
Server2 and Server3 where Server3 is the Arbiter node. There are several
Gluster volumes on top of that setup. They all look a bit like this:
gluster volume info gv-tier1-vm-01
[...]
Number of Bricks: 1 x (2 + 1) = 3
[...]
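As an aside, a "1 x (2 + 1) = 3" volume like that is typically created with the arbiter syntax, where the third brick in each replica set becomes the arbiter (hostnames and brick paths below are made up):

```shell
# Hypothetical hosts/paths; server3's brick holds only metadata (arbiter).
gluster volume create gv-tier1-vm-01 replica 3 arbiter 1 \
    server1:/data/brick1 server2:/data/brick1 server3:/data/arbiter1
```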
Thanks for the suggestion, this solved it for us, and we probably found the
cause as well. We had Performance Co-Pilot running and it was continuously
enabling profiling on volumes...
We found the reference to the node that had the lock, and restarted
glusterd on that node, and all went well from
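If anyone else needs to check for the same culprit, profiling state can be inspected and disabled per volume from the CLI (the volume name "myvol" is a placeholder):

```shell
# Show profiling stats if profiling is enabled on a hypothetical volume:
gluster volume profile myvol info

# Turn profiling off if something keeps switching it on:
gluster volume profile myvol stop
```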
Hi all,
Trying to do some availability testing.
We have three nodes: node1, node2, node3. Volumes are all replica 2, across
all three nodes.
As a test we disconnected node1 by removing the VLAN tag for that host on
the switch it is connected to. As a result, node2 and node3 now show node1
in