Hi Kaushal,
Hi Pierre,
Can you start glusterd in debug mode and provide the logs?
Please use a paste site like http://fpaste.org to share the logs.
You can start glusterd in debug mode by using the '-LDEBUG' flag, i.e.:
# glusterd -LDEBUG
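For completeness, a minimal sketch of running this and capturing the output. The log path below is the common default for installs of that era; adjust for your system:

```shell
# Run glusterd in the foreground at DEBUG log level
glusterd -LDEBUG --no-daemon
# In another terminal, follow the daemon log (assumed default location)
tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
```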
The logs you've provided say that glusterd
Hi,
In case the missing directory path is known, a fresh lookup on that path will
heal the directory entry across the cluster and it will be shown on the mount
point.
e.g., on the mount point: ls &lt;complete path of the directory&gt;.
* The directory may not get a fresh lookup on the existing mount.
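A minimal sketch of triggering the heal from a mount. The mount point and directory name here are hypothetical placeholders; substitute your own:

```shell
# Assume /mnt/gluster is the mount point and 'data/projects' is the
# directory missing from the mount. Naming the full path sends a fresh
# lookup, which heals the entry across the cluster.
ls /mnt/gluster/data/projects
stat /mnt/gluster/data/projects
```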
Hi all:
I have a question about the ALU-related configuration: where should it be
configured?
ALU Scheduler Volume example
volume bricks
  type cluster/unify
  subvolumes brick1 brick2 brick3 brick4 brick5
  option alu.read-only-subvolumes brick5  # This option makes brick5 read-only, where no
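For context, a fuller ALU stanza for the legacy unify translator might look like the sketch below. The option names are recalled from the old unify scheduler documentation, not from this thread, so verify them against your version:

```
volume bricks
  type cluster/unify
  option scheduler alu
  option alu.limits.min-free-disk 5%
  option alu.order disk-usage:read-usage:write-usage
  subvolumes brick1 brick2 brick3 brick4 brick5
end-volume
```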
I've now stopped glusterd, unmounted the volume, restarted glusterd and
re-mounted the volume.
No change :(
I've now tried to copy the files again onto the volume, which were already
present on the bricks (but invisible from the volume side). Interestingly,
it seems to silently overwrite the
Hi Peter,
As I mentioned in my previous mail, you need to send fresh lookups on the
missing directories. :)
Susant
- Original Message -
From: Peter B. p...@das-werkstatt.com
To: gluster-users@gluster.org
Sent: Tuesday, 2 December, 2014 5:29:57 PM
Subject: Re: [Gluster-users] Folder
Hi Susant,
On Tue, 2 Dec 2014 at 11:49, Susant Palai wrote:
In case the missing directory path is known, a fresh lookup on that path
will heal the directory entry across the cluster and it will be shown on
the mount point.
e.g., on the mount point: ls &lt;complete path of the directory&gt;.
* The
On Tue, 2 Dec 2014 at 13:42, Peter B. wrote:
Sorry, but obviously I overlooked your reply before.
When you say "existing mount", do you mean I should unmount/remount it,
or should I rather create a new mount point, e.g. /mnt/test?
I've now tried both, but the files still won't show up on the
Thank you for the assistance.
Yesterday, bricks on one server started to crash randomly. When that
server crashed, it would lock up the bricks on its replica as well. I
ended up upgrading to 3.5.3, and noticed in the process that the libgfrpc
and libgfxdr libraries were out of date on
Hi Kaushal,
You are great. In my last release, all of this data is in the directory
/var/lib/glusterd/peers, in a separate file for each node except the node
itself. I copied the files from another node to recreate consistency for
my node, and glusterd started.
Thank you a lot for your prompt
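The recovery Pierre describes can be sketched roughly as below. Here 'good-node' is a hypothetical healthy peer, and the peers directory is the usual /var/lib/glusterd/peers (one file per peer, named by UUID; a node holds no file for itself):

```shell
# Inspect the (incomplete) peer definitions on the broken node
ls /var/lib/glusterd/peers
# Copy the definitions over from a healthy node, then delete the file that
# describes this node itself (a node must not list itself as a peer)
scp good-node:/var/lib/glusterd/peers/* /var/lib/glusterd/peers/
# Restart glusterd afterwards
service glusterd restart
```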
You're welcome Pierre. I'm happy I could help you.
~kaushal
On Tue, Dec 2, 2014 at 7:36 PM, Pierre Léonard pleon...@jouy.inra.fr
wrote:
Hi Kaushal,
You are great. In my last release, all of this data is in the directory
/var/lib/glusterd/peers, in a separate file for each node except it
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue
Thanks,
punit
On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M kshlms...@gmail.com wrote:
Hey Punit,
Could you start Glusterd in debug mode and provide the logs here?
To start it in debug mode, append '-LDEBUG'
Hey Punit,
In the logs you've provided, GlusterD appears to be running correctly.
Could you provide the logs for the time period when GlusterD attempts to
start but fails?
~kaushal
On Dec 2, 2014 8:03 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi Kaushal,
Please find the logs here :-
Hi Peter,
I tried your scenario on my setup [deleted the directory on one of the
bricks (the hashed one)]. As a result, I don't see the directory on the
mount point.
What I tried next: I created a fresh mount and sent a lookup on the missing
directory name.
e.g., /mnt/fresh is your new mount point. And
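The fresh-mount approach can be sketched as follows ('server1', 'vol0', and the directory path are hypothetical placeholders):

```shell
# Create a new mount point and mount the volume there
mkdir -p /mnt/fresh
mount -t glusterfs server1:/vol0 /mnt/fresh
# A lookup on the missing directory's full path heals the entry
ls /mnt/fresh/path/to/missing-dir
```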
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed
On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M kshlms...@gmail.com wrote:
Hey Punit,
In the logs you've provided, GlusterD appears to be running correctly.
Could you provide the logs for the time period when
Hi Vijay
Thank you for your reply.
If ALU will not be supported by future versions, what is the default
distribution scheduler in the current version?
And how do I configure scheduling?
Thank you.
From: Vijay Bellur
Date: 2014-12-03 01:42
To: Song Qi; gluster-users
Subject: Re: [Gluster-users]
- Original Message -
From: David Gibbons david.c.gibb...@gmail.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent: Tuesday, December 2, 2014 6:54:59 PM
Subject: Re: [Gluster-users] Upgraded from 3.4.1 to 3.5.2, quota no longer
working
This peer cannot be identified.
[2014-12-03 02:29:25.998153] D
[glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname]
0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com
I don't know why this address is not being resolved during boot time. If
this is a valid peer, the
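A quick way to check whether the name from the log resolves on the node (the hostname is taken from the log above):

```shell
# If this prints nothing, the peer's hostname does not resolve here; fix
# DNS or /etc/hosts so resolution works before glusterd starts at boot
getent hosts cpu05.zne01.hkg1.ovt.36stack.com
```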