Could you run the following on one of the nodes where you are observing
high CPU usage and attach the resulting file to this thread? From it we
can find which threads/processes are causing the high usage. Run it for
about 10 minutes while you see the ~100% CPU.
top -bHd 5 > /tmp/top.${HOSTNAME}.txt
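If you'd rather have top stop on its own instead of interrupting it by hand, a bounded variant can be used (a sketch; the iteration count is my assumption, 120 snapshots at 5-second intervals is roughly the requested 10 minutes):

```shell
# -b: batch (plain-text) output suitable for redirection
# -H: show individual threads instead of summed processes
# -d 5: 5-second interval between snapshots
# -n 2: number of snapshots; use -n 120 for the ~10-minute capture
top -bHd 5 -n 2 > /tmp/top.${HOSTNAME}.txt
```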
On 15 Aug 2018 13:14, Karli Sjöberg wrote:
> On Wed, 2018-08-15 at 13:42 +0800, Pui Edylie wrote:
> > Hi Karli,
> >
> > I think Alex is right in regards to the NFS version and state.
> >
> > I am only using NFSv3 and the failover is working as expected.
> OK, so I've remade the test again and it goes like
Hi,
How do I tell heketi to use glusterd2 instead of glusterd?
When I perform the topology load I get the following error:
Creating node gluster01 ... Unable to create node: New Node doesn't have
glusterd running
This suggests that it is looking for glusterd; in fact, if I switch to
glusterd it finds
Hi,
well, as the situation isn't getting better, we're quite helpless and
mostly in the dark, so we're thinking about hiring some professional
support. Any hints? :-)
2018-08-15 11:07 GMT+02:00 Hu Bert :
> Hello again :-)
>
> The self heal must have finished as there are no log entries in
>
Thank you for the clarification.
Am Do., 16. Aug. 2018 um 09:02 Uhr schrieb Kotresh Hiremath Ravishankar <
khire...@redhat.com>:
> Hi David,
>
> With this feature enabled, the consistent time attributes (mtime,
> ctime, atime) will be maintained in an xattr on the file. With this
> feature enabled,
glusterfs 3.12.12
2018-08-16 9:26 GMT+02:00 Serkan Çoban :
> What is your gluster version? There was a bug in 3.10 where, after you
> reboot a node, some bricks may not come online, but it was fixed in
> later versions.
>
> On 8/16/18, Hu Bert wrote:
>> Hi there,
>>
>> Twice I had to replace a brick on 2
What is your gluster version? There was a bug in 3.10 where, after you
reboot a node, some bricks may not come online, but it was fixed in
later versions.
On 8/16/18, Hu Bert wrote:
> Hi there,
>
> Twice I had to replace a brick on 2 different servers; the replace went
> fine, the heal took very long but finally
Hi David,
With this feature enabled, the consistent time attributes (mtime, ctime,
atime) will be maintained in an xattr on the file. With this feature
enabled, gluster will not use the time attributes from the backend. They
will be served from the xattr of the file, which will be consistent
across the replica set.
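As plain-POSIX background on how these attributes behave on any local filesystem (a minimal shell sketch, independent of Gluster: a data write bumps mtime, a metadata change bumps only ctime; GNU `stat` and `mktemp` assumed):

```shell
# Demonstrate mtime vs ctime on a local filesystem.
f=$(mktemp)
m1=$(stat -c %Y "$f")   # mtime in seconds since the epoch
sleep 1.1
echo data > "$f"        # data write: updates mtime (and ctime)
m2=$(stat -c %Y "$f")
c2=$(stat -c %Z "$f")   # ctime after the write
sleep 1.1
chmod 600 "$f"          # metadata change: updates only ctime
c3=$(stat -c %Z "$f")
rm "$f"
```

On a replicated volume without this feature, each brick computes these values locally, which is why they can diverge across the replica set.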
Hi there,
Twice I had to replace a brick on 2 different servers; the replace went
fine, the heal took very long but finally finished. From time to time
you have to reboot the server (kernel upgrades), and I've noticed that
the replaced brick doesn't come up after the reboot. Status after reboot:
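For what it's worth, the usual workaround for bricks that stay offline after a reboot is to force-start the volume, which launches only the missing brick processes (a sketch; `myvol` is a placeholder volume name, and this of course needs a running Gluster cluster):

```shell
# Show which bricks are online/offline
gluster volume status myvol
# Start only the brick processes that are not running;
# bricks that are already online and client I/O are unaffected
gluster volume start myvol force
```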
Hello Kotresh,
It's no problem for me that the atime will be updated; what is important
is a consistent mtime and ctime on the bricks of my replica set.
I have turned on both options you mentioned. After that I created a file
on my FUSE mount (mounted with noatime). But on all my bricks of the
replica set
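For reference, the two options in question are presumably enabled like this (my assumption based on the 4.1 consistent-time feature; `myvol` is a placeholder, and this needs a Gluster >= 4.1 cluster):

```shell
# Enable the utime xlator so clients stamp operations with a time
gluster volume set myvol utime on
# Enable storing consistent time attributes in an xattr on the bricks
gluster volume set myvol ctime on
```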
AFAIK this feature is not available in the 3.x series. It was introduced
in 4.1.0.
If this feature is off in gluster, your atime will be updated according
to your mount options. There are several options available; look at
$ man mount.
I found these options: noatime, atime, nostrictatime, strictatime,
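For example, a client mount with atime updates disabled could look like this in /etc/fstab (a config sketch; the server name, volume name, and mount point are placeholders):

```
# Mount a Gluster volume with noatime so reads don't update atime
server1:/myvol  /mnt/gluster  glusterfs  defaults,noatime,_netdev  0 0
```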
- Original Message -
> From: "Amye Scavarda"
> To: "Bhumika Goyal"
> Cc: "Gluster Devel" , "gluster-users"
>
> Sent: Monday, August 13, 2018 8:00:16 PM
> Subject: Re: [Gluster-users] Gluster Outreachy
>
> This is great!
> One thing that I'm noticing is that most proposed projects
Good morning,
today, after a gluster server reboot (including a brick not coming up,
which happens at every reboot), I've seen these error messages in the
glusterd.log file. Maybe I've copied and pasted too much, but I hope the
maintainers can sort it out :-)
[2018-08-16 05:22:18.818910] I [MSGID: