Did you check dmesg for any clues?
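For example, a quick pass along these lines (the grep patterns are only a guess at likely culprits, and -T for human-readable timestamps needs a reasonably recent dmesg) can surface relevant kernel messages:
dmesg -T | grep -iE 'xfs|i/o error|gluster|oom'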
Best Regards,
Strahil Nikolov
On Thu, Jun 16, 2022 at 22:59, Pat Haley wrote:
Hi Strahil,
I poked around our logs, and found this on the front-end (from the
day & time of the last time we had the issue)
Jun 15 10:51:17 mseas gd
ster-users/2019-March/035944.html
but I don't see a clear solution.
Take a look in the thread and check if it matches your symptoms.
Best Regards,
Strahil Nikolov
On Thu, Jun 16, 2022 at 18:14, Pat Haley wrote:
Hi Strahil,
I poked around again and for brick 3 (where the fil
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Cases/RI/DO_NAPE_JASA_Paper/Uncertain_Pekeris_Waveguide_DO_MC
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/
ected" errors. We are still getting these errors even though we have
re-established the connection. Is there a simple way to clear this error
besides rebooting the client system?
Thanks
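One workaround that is often suggested (a sketch only; the mount point and volume name here are taken from older messages in this thread and may not match the current setup) is to lazy-unmount the stale FUSE mount and remount it instead of rebooting:
umount -l /gdata
mount -t glusterfs mseas-data:/gdata /gdata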
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA
Thanks
Pat
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge,
Nikolov wrote:
Sadly I have no idea why rebalance did that, so you should check the logs on
all nodes for clues.
Is there any reason why you used "force" in that command?
Best Regards,
Strahil Nikolov
On Thursday, August 27, 2020, 17:32:24 GMT+3, Pat Haley wrote:
more unbalanced (64G
853G 6.2T 20K). I'm killing the rebalance now. What should I do to
make sure that I get a successful rebalance?
Thanks
Pat
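For reference, a commonly suggested sequence (a sketch only; the volume name is assumed from earlier messages in this thread) is to fix the layout first, run a plain rebalance without force, and poll its status:
gluster volume rebalance data-volume fix-layout start
gluster volume rebalance data-volume start
gluster volume rebalance data-volume status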
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean
: on
nfs.disable: on
nfs.export-volumes: off
cluster.min-free-disk: 1%
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253
https://serverfault.com/questions/357367/xfs-no-space-left-on-device-but-i-have-850gb-available
https://support.microfocus.com/kb/doc.php?id=7014318
Thanks
Pat
On 3/12/20 4:24 PM, Strahil Nikolov wrote:
On March 12, 2020 8:06:14 PM GMT+02:00, Pat Haley wrote:
Hi
Yesterday we seemed to clear an issue
: 90.9TB
Inode Count : 3906272768
Free Inodes : 3894355546
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineerin
mseas(DSMccfzR75deg_001b)% ls PeManJob
PeManJob
mseas(DSMccfzR75deg_001b)% ls PeManJob*
PeManJob.log
On 3/10/20 8:18 PM, Strahil Nikolov wrote:
On March 10, 2020 9:47:49 PM GMT+02:00, Pat Haley wrote:
Hi,
If I understand this, to remove the "No space left on device" error I
either have to c
gluster volume set cluster.min-free-disk
Can this be done while the volume is live? Does the value need to be
an integer?
Thanks
Pat
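For what it's worth, gluster volume set can be run against a started (live) volume, and the values shown elsewhere in this thread (10%, 1%) are percentages, so a plain integer does not appear to be required. A minimal example (volume name assumed from earlier messages):
gluster volume set data-volume cluster.min-free-disk 5%
gluster volume get data-volume all | grep cluster.min-free-disk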
On 3/10/20 2:45 PM, Pat Haley wrote:
Hi,
I get the following
[root@mseas-data2 bricks]# gluster volume get data-volume all | grep
cluster.min-free
cluster.min-fr
Hi,
I get the following
[root@mseas-data2 bricks]# gluster volume get data-volume all | grep
cluster.min-free
cluster.min-free-disk 10%
cluster.min-free-inodes 5%
On 3/10/20 2:34 PM, Strahil Nikolov wrote:
On March 10, 2020 8:14:41 PM GMT+02:00, Pat Haley wrote:
Hi,
After some more
around 10:30pm
* brick4 has no such messages in its log file
Note brick1 & brick2 are on one server, brick3 and brick4 are on the
second server.
Pat
On 3/10/20 11:51 AM, Pat Haley wrote:
Hi,
We have developed a problem with Gluster reporting "No space left on
device."
Disk Space Free : 3.4TB
Total Disk Space : 90.9TB
Inode Count : 3906272768
Free Inodes : 3894809903
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone:
?
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
nfs.export-volumes: off
Thanks
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213
Hi Raghavendra,
We were wondering if you have had a chance to look at this again, and if
so, did you have any further suggestions?
Thanks
Pat
On 07/06/2018 01:27 AM, Raghavendra Gowdappa wrote:
On Fri, Jul 6, 2018 at 5:29 AM, Pat Haley <pha...@mit.edu> wrote:
Hi Raghavendra,
Our technician may have some time to look at this issue tomorrow. Are
there any tests that you'd like to see?
Thanks
Pat
On 06/29/2018 11:25 PM, Raghavendra Gowdappa wrote:
On Fri, Jun 29, 2018 at 10:38 PM, Pat Haley <pha...@mit.edu> wrote:
Hi Ra
-ahead: on
nfs.disable: on
nfs.export-volumes: off
[root@mseas-data2 ~]#
On 06/29/2018 12:28 PM, Raghavendra Gowdappa wrote:
On Fri, Jun 29, 2018 at 8:24 PM, Pat Haley <pha...@mit.edu> wrote:
Hi Raghavendra,
Our technician was able to try the manual setting today.
this suggest
some addition tests/changes we could try?
Thanks
Pat
On 06/25/2018 09:39 PM, Raghavendra Gowdappa wrote:
On Tue, Jun 26, 2018 at 3:21 AM, Pat Haley <pha...@mit.edu> wrote:
Hi Raghavendra,
Setting the performance.write-behind off had a small im
ts suggest anything? If not, what further tests
would be useful?
Thanks
Pat
On 06/22/2018 07:51 AM, Raghavendra Gowdappa wrote:
On Thu, Jun 21, 2018 at 8:41 PM, Pat Haley <pha...@mit.edu> wrote:
Hi Raghavendra,
Thanks for the suggestions. Our technician w
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1084508
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1214489
On Thu, Jun 21, 2018 at 5:00 AM, Pat Haley m
.html
Pat
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu
Forwarded Message
Subject:gluster-nfs crashing on start
Date: Mon, 7 Aug 2017 16:05:09 +
From: Steve Postma <spos...@ztechnet.com>
To: Pat Haley <pha...@mit.edu>
*To disable kernel-nfs and enable nfs through Gluster we:*
gluster volume set data-volum
for glusterfs-ganesha-3.7.11. Is this a specific
gluster package for compatibility with ganesha or a ganesha package
for gluster?
Does either possibility seem more likely to be what I need than the other?
Pat
On 07/07/2017 01:31 PM, Soumya Koduri wrote:
Hi,
On 07/07/2017 06:16 AM, Pat Haley
Hi All,
A follow-up question. I've been looking at various pages on nfs-ganesha
& gluster. Is there a version of nfs-ganesha that is recommended for
use with
glusterfs 3.7.11 built on Apr 27 2016 14:09:22
CentOS release 6.8 (Final)
Thanks
Pat
On 07/05/2017 11:36 AM, Pat Haley w
the gluster-NFS native server: do you know where we can find
documentation on how to use/install it? We haven't had success in our
searches.
Thanks
Pat
On 07/04/2017 05:01 AM, Soumya Koduri wrote:
On 07/03/2017 09:01 PM, Pat Haley wrote:
Hi Soumya,
When I originally did the tests I ran
this. I tried to find the fuse-mnt
logs but failed. Where should I look for them?
Thanks
Pat
On 07/03/2017 07:58 AM, Soumya Koduri wrote:
On 06/30/2017 07:56 PM, Pat Haley wrote:
Hi,
I was wondering if there were any additional tests we could perform to
help debug the group write
Hi,
I was wondering if there were any additional tests we could perform to
help debug the group write-permissions issue?
Thanks
Pat
On 06/27/2017 12:29 PM, Pat Haley wrote:
Hi Soumya,
One example, we have a common working directory dri_fleat in the
gluster volume
drwxrwsr-x 22 root
ive client.
Could you please provide simple steps to reproduce the issue and
collect pkt trace and nfs/brick logs as well.
Thanks,
Soumya
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering
, 2017 at 9:10 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <pha...@mit.edu> wrote:
Hi,
Today we experimented with some of the FUSE optio
-prefetch on
gluster volume set test-volume client.event-threads 4
gluster volume set test-volume server.event-threads 4
Can anything be gleaned from these observations? Are there other things
we can try?
Thanks
Pat
On 06/20/2017 12:06 PM, Pat Haley wrote:
Hi Ben,
Sorry this took so long
e gluster bricks? I want to be sure
we have an apples to apples comparison here.
-b
- Original Message -
From: "Pat Haley" <pha...@mit.edu>
To: "Ben Turner" <btur...@redhat.com>
Sent: Monday, June 12, 2017 5:18:07 PM
Subject: Re: [Gluster-users] Slow wr
normally use conv=fdatasync,
I'll look up the difference between the two and see if it affects your test.
-b
- Original Message -
From: "Pat Haley" <pha...@mit.edu>
To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
Cc: "Ravishankar N" <ravis
, 2017 at 7:10 AM, Pat Haley <pha...@mit.edu> wrote:
Hi Pranith,
The "dd" command was:
dd if=/dev/zero count=4096 bs=1048576 of=zeros.txt conv=sync
There were 2 instances where dd reported 22 seconds. The output from
the
On Tue, May 30, 2017 at 10:36 PM, Pat Haley <pha...@mit.edu> wrote:
Hi Pranith,
I ran the same 'dd' test both in the gluster test volume and in
the .glusterfs directory of each brick. The median results (12 dd
trials in each test) are similar to b
for the same. From
there on we will start tuning the volume to see what we can do.
On Tue, May 30, 2017 at 9:16 PM, Pat Haley <pha...@mit.edu> wrote:
Hi Pranith,
Thanks for the tip. We now have the gluster volume mounted under
/home. What tests do you
Hi Pranith,
Thanks for the tip. We now have the gluster volume mounted under
/home. What tests do you recommend we run?
Thanks
Pat
On 05/17/2017 05:01 AM, Pranith Kumar Karampuri wrote:
On Tue, May 16, 2017 at 9:20 PM, Pat Haley <pha...@mit.edu> wrote:
run a small battery of tests and post the results rather than
test-post-new test-post... .
Thanks
Pat
On 05/11/2017 12:06 PM, Pranith Kumar Karampuri wrote:
On Thu, May 11, 2017 at 9:32 PM, Pat Haley <pha...@mit.edu> wrote:
Hi Pranith,
The /ho
Hi Pranith,
My question was about setting up a gluster volume on an ext4 partition.
I thought we had the bricks mounted as xfs for compatibility with gluster?
Pat
On 05/11/2017 12:06 PM, Pranith Kumar Karampuri wrote:
On Thu, May 11, 2017 at 9:32 PM, Pat Haley <pha...@mit.
?
Pat
On 05/11/2017 11:32 AM, Pranith Kumar Karampuri wrote:
On Thu, May 11, 2017 at 8:57 PM, Pat Haley <pha...@mit.edu> wrote:
Hi Pranith,
Unfortunately, we don't have similar hardware for a small scale
test. All we have is our production har
Hi Pranith,
Unfortunately, we don't have similar hardware for a small scale test.
All we have is our production hardware.
Pat
On 05/11/2017 07:05 AM, Pranith Kumar Karampuri wrote:
On Thu, May 11, 2017 at 2:48 AM, Pat Haley <pha...@mit.edu> wrote:
01:27 PM, Pranith Kumar Karampuri wrote:
On Wed, May 10, 2017 at 10:15 PM, Pat Haley <pha...@mit.edu> wrote:
Hi Pranith,
Not entirely sure (this isn't my area of expertise). I'll run your
answer by some other people who are more familiar with
doubts?
On Wed, May 10, 2017 at 9:35 PM, Pat Haley <pha...@mit.edu> wrote:
Without the oflag=sync and only a single test of each, the FUSE is
going faster than NFS:
FUSE:
mseas-data2(dri_nascar)% dd if=/dev/zero count=4096 bs=1048576
to collect profiles.
On Wed, May 10, 2017 at 9:17 PM, Pat Haley <pha...@mit.edu> wrote:
Here is what I see now:
[root@mseas-data2 ~]# gluster volume info
Volume Name: data-volume
Type: Distribute
Volume ID: c162161e-2a2d-4dac-b015-
lume.
Did you change any of the options in between?
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-2
* writing to gluster disk mounted with nfs: 200 Mb/s
Pat
On 05/05/2017 08:11 PM, Pat Haley wrote:
Hi,
We redid the dd tests (this time using conv=sync oflag=sync to avoid
caching questions). The profile results are in
http://mseas.mit.edu/download/phaley/GlusterUsers
Hi,
We redid the dd tests (this time using conv=sync oflag=sync to avoid
caching questions). The profile results are in
http://mseas.mit.edu/download/phaley/GlusterUsers/profile_gluster_fuse_test
On 05/05/2017 12:47 PM, Ravishankar N wrote:
On 05/05/2017 08:42 PM, Pat Haley wrote:
Hi
on the performance numbers part for now. We
will look at the permissions one after this?
As per the profile info, only 2.6% of the work-load is writes. There
are too many Lookups.
Would it be possible to get the data for just the dd test you were
doing earlier?
On Fri, May 5, 2017 at 8:14 PM, Pat Haley
d/
to get the information.
Yeah, Let's see if profile info shows up anything interesting.
-Ravi
Thanks,
Ravi
On 04/08/2017 12:07 AM, Pat Haley wrote:
Hi,
We noticed a dramatic slowness when writing to a gluster disk
when compared to writing to an NFS disk. Specifical
This seems to have cleared itself. For future reference though, what
kinds of things should I look at to diagnose an issue like this?
Thanks
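One low-impact way to gather data for a question like this (a sketch; this uses the same profiling mechanism referenced later in these messages) is to profile the volume while the slow operation is reproduced:
gluster volume profile data-volume start
(reproduce the slow ls or write, then)
gluster volume profile data-volume info
gluster volume profile data-volume stop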
On 04/14/2017 01:16 PM, Pat Haley wrote:
Hi,
Today we suddenly experienced a performance drop in gluster: e.g.
doing an "ls" of a
s-log-level: WARNING
performance.readdir-ahead: on
nfs.disable: on
nfs.export-volumes: off
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Enginee
volumes as a single gluster
file system volume? If not, what would be a better path?
Pat
On 04/11/2017 12:21 AM, Ravishankar N wrote:
On 04/11/2017 12:42 AM, Pat Haley wrote:
Hi Ravi,
Thanks for the reply. And yes, we are using the gluster native
(fuse) mount. Since this is not my area
mance.write-behind-window-size
options in `gluster volume set help`. Of course, even for gnfs mounts,
you can achieve fail-over by using CTDB.
Thanks,
Ravi
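As an illustration of the options being referred to (the values below are placeholders, not recommendations for this particular volume):
gluster volume set data-volume performance.write-behind on
gluster volume set data-volume performance.write-behind-window-size 4MB
gluster volume set help | grep -A3 write-behind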
On 04/08/2017 12:07 AM, Pat Haley wrote:
Hi,
We noticed a dramatic slowness when writing to a gluster disk when
compared to writing to
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley
Using the gluster client rather than NFS seems to fix the problem
On 09/01/2016 02:35 PM, Pat Haley wrote:
Hi Pranith,
In attached file capture.pcap
On 09/01/2016 01:01 PM, Pranith Kumar Karampuri wrote:
You need to capture the file so that we can see the tcpdump in
wireshark to inspect
Hi Pranith,
In attached file capture.pcap
On 09/01/2016 01:01 PM, Pranith Kumar Karampuri wrote:
You need to capture the file so that we can see the tcpdump in
wireshark to inspect the uid/gid etc that are going out the wire.
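A capture along these lines would produce a file that can be opened in Wireshark (the interface and filter are assumptions; adjust for the client address and the NFS or Gluster ports actually in use):
tcpdump -i any -s 0 -w capture.pcap host <client-ip>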
On Thu, Sep 1, 2016 at 10:04 PM, Pat Haley <pha...@mit.
ampuri wrote:
hi Pat,
I think the other thing we should probably look for would be to
see the tcp dump of what uid/gid parameters are sent over the network when
this command is executed.
On Thu, Sep 1, 2016 at 7:14 AM, Pat Haley <pha...@mit.e
-
On Thu, Sep 1, 2016 at 2:46 AM, Pat Haley <pha...@mit.edu> wrote:
Hi,
Another piece of data. There are 2 distinct volumes on the file
server
1. a straight nfs partition
2. a gluster volume (served over nfs)
The straight n
additional information would be helpful would be greatly appreciated
Thanks
On 08/30/2016 06:01 PM, Pat Haley wrote:
Hi
We have just migrated our data to a new file server (more space, old
server was showing its age). We have a volume for collaborative use,
based on group membership
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA 02139-4301
Hi,
Just to clarify, our main question is:
This is a distributed volume, not replicated. Can we delete the gluster
volume, remove the .glusterfs folders from each brick and recreate the
volume? Will it re-index the files on both bricks?
Thanks
On 06/02/2016 04:50 PM, Pat Haley wrote
vice startup.
Any advice you can give will be appreciated.
Thanks
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617)
this is a distributed volume, can I delete the gluster volume,
delete the .glusterfs directory on each disk and recreate the volume? Do
I need to upgrade to 3.5 or 6?
Or can you give me correct procedure to recover?
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley
() /temp_home = -1 (Invalid argument)
===
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
-03-18 11:32:00.362660] W [fuse-bridge.c:292:fuse_entry_cbk]
0-glusterfs-fuse: 62: LOOKUP() /temp_home = -1 (Invalid argument)
===
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley
)
===
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http
representation? Just curious.
Problem solved! Thanks!
Pat
On 02/05/2014 03:39 PM, Pat Haley wrote:
Hi Brian,
I tried both just using touch to create
an empty file and copying a small (1kb)
file. Neither worked.
Note: currently the disk served by gluster-0-1
is mounted as
/dev/sdb1
list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept
, for example.
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http
the glusterfsd processes, then
gluster volume start xxx force or restarting glusterd
would start them.
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept
=0x6131386b3c324551967d05c83b618a7b
trusted.glusterfs.dht=0x0001aaa9
the client returned nothing.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253
the log
/var/log/glusterfs/bricks/gluster-0-1 but limit it to no more than an
hour surrounding 2014-01-21 11:50:16. Also on that server, please
share the result of df /gluster-0-1 ; df -i /gluster-0-1.
On 01/21/2014 10:58 PM, Pat Haley wrote:
Hi Vijay,
I've put the log files in
http
the result of df /gluster-0-1 ; df -i /gluster-0-1.
On 01/21/2014 10:58 PM, Pat Haley wrote:
Hi Vijay,
I've put the log files in
http://mseas.mit.edu/download/phaley/GlusterUsers/server_logs/
http://mseas.mit.edu/download/phaley/GlusterUsers/client_logs/
On 01/21/2014 10:24 PM, Pat Haley
/21/2014 10:58 PM, Pat Haley wrote:
Hi Vijay,
I've put the log files in
http://mseas.mit.edu/download/phaley/GlusterUsers/server_logs/
http://mseas.mit.edu/download/phaley/GlusterUsers/client_logs/
On 01/21/2014 10:24 PM, Pat Haley wrote:
Hi Joe,
The peer status on all 3 showed the
proper
Haley wrote:
Hi Vijay,
I've put the log files in
http://mseas.mit.edu/download/phaley/GlusterUsers/server_logs/
http://mseas.mit.edu/download/phaley/GlusterUsers/client_logs/
On 01/21/2014 10:24 PM, Pat Haley wrote:
Hi Joe,
The peer status on all 3 showed the
proper connections. Doing
. Does this suggest
anything else I should be looking at?
As to Brian's suggestion, how exactly do I perform
a quick inode allocation test?
Thanks
Pat
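For what it's worth, a simple inode-allocation check can be done directly on a brick mount (a sketch; the brick path below is taken from other messages in this thread and may differ) by creating a batch of empty files and comparing df -i before and after:
df -i /mseas-data-0-1
mkdir /mseas-data-0-1/inode_test
for i in $(seq 1 1000); do touch /mseas-data-0-1/inode_test/f$i; done
df -i /mseas-data-0-1
rm -rf /mseas-data-0-1/inode_test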
On 01/17/2014 07:48 PM, Pat Haley wrote:
Hi Franco,
I checked using df -i on all 3 bricks. No brick is over
1% inode usage.
It might be worth
should be looking at?
As to Brian's suggestion, how exactly do I perform
a quick inode allocation test?
Thanks
Pat
On 01/17/2014 07:48 PM, Pat Haley wrote:
Hi Franco,
I checked using df -i on all 3 bricks. No brick is over
1% inode usage.
It might be worth a quick inode allocation test
are not written but what I seem
to see is that files are written to each brick
even after the failures. Does this suggest
anything else I should be looking at?
As to Brian's suggestion, how exactly do I perform
a quick inode allocation test?
Thanks
Pat
On 01/17/2014 07:48 PM, Pat Haley wrote:
Hi Franco
S+ 10:49 0:00 grep
gluster
On 01/21/2014 08:34 PM, Pat Haley wrote:
Also, going back to an earlier Email,
should I be concerned that in the output
from gluster volume status the
brick gluster-data:/data has an N
in the Online column? Does this suggest
an additional debugging route
for your
users.
Check volume status. I expect them to all be Y.
If there are still problems at the client, try unmounting and mounting
again.
On 01/21/2014 07:42 AM, Pat Haley wrote:
Hi,
To try to clean thing out more, I took
the following steps
1) On gluster-0-0:
`gluster peer detach
on device
when I try to write even a small file to
the gluster filesystem.
What should I look at next?
Thanks.
Pat
All three.
On 01/21/2014 08:38 AM, Pat Haley wrote:
Hi Joe,
They do appear as connected from the first
brick, checking on the next 2. If they
all show the same, is the killall
Hi Vijay,
I've put the log files in
http://mseas.mit.edu/download/phaley/GlusterUsers/server_logs/
http://mseas.mit.edu/download/phaley/GlusterUsers/client_logs/
On 01/21/2014 10:24 PM, Pat Haley wrote:
Hi Joe,
The peer status on all 3 showed the
proper connections. Doing the killall
/dev/sdb1 21T 20T 784G 97% /mseas-data-0-1
What should we do to fix this problem or look at to diagnose
this problem?
Thanks.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 21T 20T 784G 97% /mseas-data-0-1
What should we do to fix this problem or look at to diagnose
this problem?
Thanks.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center
.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Hi Franco,
I checked using df -i on all 3 bricks. No brick is over
1% inode usage.
Thanks.
Pat
Have you run out of inodes on the underlying filesystems?
On 18 Jan 2014 05:41, Pat Haley pha...@mit.edu wrote:
Latest updates:
no error messages were found on the log files of the bricks
? 00:00:00 /usr/sbin/glusterfs
--volfile-id=/gdata --volfile-server=mseas-data /gdata
root 4475 4033 0 16:35 pts/0 00:00:00 grep gluster
[
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean
-a
/bin/cp -f /etc/fstab_boot /etc/fstab
This seems like an awful kludge. Can someone suggest a
cleaner (and presumably more robust) solution?
Thanks.
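One cleaner alternative (an assumption on my part, not something confirmed in this thread; server and volume names are copied from other messages here) is to mark the Gluster mount as network-dependent in /etc/fstab so it is only mounted once the network is up, instead of swapping fstab files after boot:
mseas-data:/gdata  /gdata  glusterfs  defaults,_netdev  0 0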
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean
that was some other person from MIT?
On 12/1/12 10:57 AM, Patrick Haley wrote:
Hi,
I am sorry, I don't understand the question: Did your unused interfaces
come back online?
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email
/6054da6605d9f9d1c1e99252f1d235a6.socket
root 6922 6888 0 16:14 pts/3 00:00:00 grep gluster
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email:
pha...@mit.edu
Center for Ocean Engineering
-to-enable-nufa-in-3.2.6
http://blog.aeste.my/2012/05/15/glusterfs-3-2-updates/
http://www.gluster.org/2012/05/back-door-async-replication/
https://github.com/jdarcy/bypass
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
)
Looking at the man page for xfs_repair indicates that
we'll have to unmount the disk first. That will have
to wait until we can schedule the down-time.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean
of files in the
XFS filesystem); but native client systems could see all files.
This was with a single brick system, I had to revert that system to
old NFS (no gluster) to resolve the issue, as some of my systems can
only NFS mount.
Hope this helps...
--Tom
On Tue, Jan 31, 2012 at 10:44 AM, Pat Haley
did not solve the problem. We have
also tried rolling back to version 3.1.4 and running the
self-heal there, but that does not solve the problem either.
Any advice you can give us would be greatly appreciated.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley
gdata.log.1 to another area?
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: pha...@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-222B