It was about halo being enabled. When I disabled it, mounting was
successful. It seems just enabling it is not enough.
So far I am happy with the current situation; I just need to figure out
whether all data is replicated. Replication is fast with both big and
small files; I tested with GB-sized files.
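One way to check (a rough sketch; the brick path is a placeholder for
your actual one) is to compare what each brick holds, ignoring Gluster's
internal .glusterfs directory:

  # run on each node; the totals should match once replication is complete
  du -s --exclude=.glusterfs /data/brick1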
Just need to know
Hi,
Fortunately I am playing in a sandbox right now, but I am good and stuck
and hoping someone can point me in the right direction.
I have been playing for about 3 months with a Gluster setup that
currently has one brick. The idea is that I have a server with data, and
I need to migrate that server
3.12.14 is working fine in production for file access.
You can find the vol and mount settings in the mailing list archive.
On Tue, Oct 30, 2018 at 11:05 AM Jeevan Patnaik wrote:
> Hi All,
>
> I see gluster 3 has reached end of life and gluster 5 has just been
> introduced.
>
> Is gluster 4.1.5 stable
A simple working solution for such cases is rebuilding the volume along
with .glusterfs: recover the dead node, create fresh bricks, copy the
files there, and then set the attrs on them to regenerate .glusterfs,
something like what is mentioned here:
https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
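The core of the idea, as a rough sketch (assuming the rebuilt volume is
mounted at /mnt/vol; the linked thread has the full procedure): after
copying the files onto the fresh bricks, walk the client mount so the
lookups regenerate the .glusterfs entries:

  # stat every file through the client mount, discarding the output
  find /mnt/vol -exec stat '{}' \; > /dev/null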
On Mon, Oct
NUFA helps you write to the local brick; if replication is involved, it
will still copy the file to the other bricks (or is supposed to do so).
What might be happening is that the other nodes were down when the
initial file was created, so it didn't replicate properly, and now the
heal is failing.
Check your gluster vol heal output.
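For example (VOLNAME is a placeholder):

  # lists entries still needing heal; empty output means all is in sync
  gluster volume heal <VOLNAME> info
  # shows files the heal cannot resolve on its own
  gluster volume heal <VOLNAME> info split-brain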
Try adding routes so it can connect to all of them.
I am also curious whether the fuse mount needs access to all nodes.
Supposedly it writes to all of them at the same time unless you have the
halo feature enabled.
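Something like this, as a sketch (the volume name and addresses are
placeholders, and cluster.halo-enabled assumes the 3.11+ halo feature):

  # confirm whether halo replication is actually on for the volume
  gluster volume get <VOLNAME> cluster.halo-enabled
  # add a route to the bricks' VLAN so the fuse client can reach them all
  ip route add 10.0.2.0/24 via 10.0.1.1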
v
On Sun, Oct 28, 2018 at 1:07 AM Oğuz Yarımtepe
wrote:
> My two nodes are on another VLAN. Should my
The following link says the issues only exist in the v3.12 series and above:
https://security-tracker.debian.org/tracker/CVE-2018-10924
I have looked into the code on v3.11. There is no commit that introduces
the issue.
So I think v3.11 doesn't require the CVE patch.
--Hongzhi
On 10/30/2018
Hello,
Since I upgraded my 3-node (with arbiter) GlusterFS from 3.12.14 to 4.1.5,
I have seen quite a lot of the following error message in the brick log
file for one of my volumes where I have quota enabled:
[2018-10-21 05:03:25.158311] W [rpc-clnt.c:1753:rpc_clnt_submit]
0-myvol-private-quota:
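A way to check whether quota is still being enforced despite the warnings
(VOLNAME is a placeholder):

  # should still print the configured limits and current usage
  gluster volume quota <VOLNAME> list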
Hi All,
I see gluster 3 has reached end of life and gluster 5 has just been
introduced.
Is gluster 4.1.5 stable enough for production deployment? I see that by
default the gluster docs point to v3 only, and there are no gluster docs
for 4 or 5. Why so? And I'm mainly looking for a stable gluster
Though that kind of upgrade is untested, it should work in theory.
If you can afford downtime, you can certainly do an offline upgrade safely.
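For reference, a rough outline of the offline path (assumes Debian-style
packages; VOLNAME is a placeholder, and back up /var/lib/glusterd first):

  gluster volume stop <VOLNAME>      # stop every volume
  systemctl stop glusterd            # on every node
  apt-get install glusterfs-server   # upgrade the packages
  systemctl start glusterd           # on every node
  gluster volume start <VOLNAME>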
On October 29, 2018 11:09:07 PM PDT, Igor Cicimov
wrote:
>Hi Amir,
>
>On Tue, Oct 30, 2018 at 4:32 PM Amar Tumballi
>wrote:
>
>> Sorry about this.
>>
Hi Pranith,
Do you know which versions have the fsync problem?
https://bugzilla.redhat.com/show_bug.cgi?id=1611785#c1
--Hongzhi
On 10/30/2018 05:20 PM, Hongzhi, Song wrote:
Hi Pranith and other friends,
Does this CVE apply to gluster v3.11.1?
I applied the patch for v3.11.1. There
On Tue, Oct 30, 2018 at 2:51 PM Hongzhi, Song
wrote:
> Hi Pranith and other friends,
>
> Does this CVE apply to gluster v3.11.1?
>
It was later found not to be a CVE, only a memory leak.
No, this bug was introduced in the 3.12 branch and fixed in the 3.12
branch as well.
The patch that introduced the leak:
Hi Pranith and other friends,
Does this CVE apply to gluster v3.11.1?
I applied the patch for v3.11.1. There was an issue where the files
were read-only after mounting the volume.
When I try to write a file with vim, it prompts "Transport
endpoint is not connected".
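That error usually means the client lost its connection to the bricks. A
sketch of things to check (VOLNAME and the mount point are placeholders):

  # confirm the bricks are up and see which clients are connected
  gluster volume status <VOLNAME> clients
  # remount once the bricks are reachable again
  umount /mnt/vol && mount -t glusterfs server1:/<VOLNAME> /mnt/vol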
Hopefully waiting
Hi Amir,
On Tue, Oct 30, 2018 at 4:32 PM Amar Tumballi wrote:
> Sorry about this.
>
> Please refer this thread:
> https://lists.gluster.org/pipermail/gluster-devel/2018-October/055637.html
> (https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.5/Debian/)
>
>