Hi, Andrew:
Thank you very much for the help. Yes, your
explanation really makes sense. I buy it.
But I would like to discuss it a little further.
The following message was part of my previous reply to
Wendy; I have pasted it here for your convenience.
# stat abc/
  File: `abc/'
  Size: 8192
I've looked at this problem a bit as well. My system is a 4 Gb FC SAN with
a bonded GigE dedicated DLM network. Stat'ing 30,000 files in 3 minutes on
GFS isn't unreasonable considering that it must get and release the GFS
locks. In this scenario, you are averaging about 6 ms per file stat. When
we di
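The 6 ms figure above checks out; a quick sketch of the arithmetic, using the numbers quoted in the message (30,000 files, 3 minutes):

```shell
# back-of-the-envelope check of the per-file stat latency
files=30000
seconds=180   # "30,000 files in 3 minutes"
echo "$(( seconds * 1000 / files )) ms per stat"   # prints "6 ms per stat"
```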
--- Gordan Bobic <[EMAIL PROTECTED]> wrote:
> 30K files?!
> That'll take a while even on a local file system.
Not really. Last week I made a copy of the directory
on the local hard disk (ext3). See the test results
for both "ls" and "ls -la" commands:
# time ls -la | wc -l
31767

real    0m2.96s
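For anyone wanting to reproduce the local-disk comparison, a self-contained sketch (the directory path and file count here are made up, not the 30,000-file directory from the thread):

```shell
# create a throwaway directory with 1000 empty files
mkdir -p /tmp/abc_demo
touch /tmp/abc_demo/file{1..1000}
# "ls -la | wc -l" counts one line per entry plus the "total" line,
# "." and "..", so this prints 1003
time ls -la /tmp/abc_demo | wc -l
```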
On Thu, 2008-05-08 at 14:27 -0700, Ja S wrote:
> Hi, All:
>
> I used to post this question before, but have not
> received any comments yet. Please allow me post it
> again.
>
> I have a subdirectory containing more than 30,000
> small files on a SAN storage (GFS1+DLM, RAID10). No
> user applicat
Hi, Wendy:
Thanks for your prompt and kind explanation. It is
very helpful. Following your comments, I did
another test. See below:
# stat abc/
  File: `abc/'
  Size: 8192          Blocks: 6024       IO Block: 4096   directory
Device: fc00h/64512d  Inode: 1065226     Links: 2
Access: (
30K files?!
That'll take a while even on a local file system.
Gordan
On Thu, 2008-05-08 at 13:09 -0400, FM wrote:
> Hello,
>
> We read a lot of gfs tuning, number of nodes, etc. But how about the
> network infrastructure ?
>
> Is a separate network/VLAN for DLM the way to go? Do you tune the
> network stack to speed up DLM?
The cluster (generally) including t
Hello,
We read a lot of gfs tuning, number of nodes, etc. But how about the
network infrastructure ?
Is a separate network/VLAN for DLM the way to go? Do you tune the
network stack to speed up DLM?
In my server room, it is very simple :
GFS-1
2 directors behind the firewall (using NAT).
a
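A common layout (a sketch, not from this thread; hostnames and addresses below are made up) is to give each node a second NIC on a dedicated VLAN and make the cluster node names resolve to those private addresses, since cman/DLM talk to whatever address the node name resolves to:

```
# /etc/hosts on every node
10.10.10.1   node1-priv
10.10.10.2   node2-priv

# cluster.conf then uses the private names:
<clusternode name="node1-priv" votes="1"/>
<clusternode name="node2-priv" votes="1"/>
```

With that in place, all lock traffic stays on the private VLAN and the public interface carries only application traffic.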
Hi,
On Thu, 2008-05-08 at 11:05 -0400, Wendy Cheng wrote:
> I just checked a few minutes ago ... my people page seems to have become Bob
> Peterson's people page, but a large amount of my old write-ups and
> unpublished patches are still there. So if you type "wcheng", you will
> probably get "rpeterso" - c
Hi,
What result does Cluster Suite wait for when it executes /etc/init.d/xxx status?
I guess it is a value from RETVAL; if so, which values mean OK and which mean Failed?
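For what it's worth, rgmanager simply runs the script with the "status" argument and looks at the exit code: 0 means the service is healthy, and any non-zero exit is treated as failed (triggering recovery). A minimal sketch of such a status action, with a hypothetical pidfile, following the LSB convention (0 = running, 3 = not running):

```shell
# status() sketch; the pidfile path is hypothetical.
# Exit 0 = running (service OK); exit 3 = not running, which
# rgmanager treats as a failure.
status() {
    local pidfile=$1
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "running (pid $(cat "$pidfile"))"
        return 0
    fi
    echo "stopped"
    return 3
}
```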
Thanks
Patricio Bruna V.
IT Linux Ltda.
http://www.it-linux.cl
Fono : (+56-2) 333 0578
Móvil : (+5
Ja S wrote:
Hi Wendy:
Thank you very much for the kind answer.
Unfortunately, I am using Red Hat Enterprise Linux WS
release 4 (Nahant Update 5) 2.6.9-42.ELsmp.
When I ran gfs_tool gettune /mnt/ABC, I got:
[snip] ..
There is no glock_purge option. I will try to tune
demote_secs, but I d
Hi Wendy:
Thank you very much for the kind answer.
Unfortunately, I am using Red Hat Enterprise Linux WS
release 4 (Nahant Update 5) 2.6.9-42.ELsmp.
When I ran gfs_tool gettune /mnt/ABC, I got:
ilimit1 = 100
ilimit1_tries = 3
ilimit1_min = 1
ilimit2 = 500
ilimit2_tries = 10
ilimit2_min = 3
demo
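On RHEL4/GFS1 the cached-glock knob that is still available is demote_secs (default 300 seconds). A hedged example, reusing the mount point from the message; the value 60 is only an illustration, not a recommendation:

```
# show the current value
gfs_tool gettune /mnt/ABC | grep demote_secs
# demote unused glocks after 60 s instead of the default 300 s
gfs_tool settune /mnt/ABC demote_secs 60
```

Note that settune changes are not persistent across remounts, so the command would need to go in an init script if it helps.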
Sent via BlackBerry by AT&T
-Original Message-
From: Wendy Cheng <[EMAIL PROTECTED]>
Date: Thu, 08 May 2008 09:28:22
To:linux clustering
Subject: Re: [Linux-cluster] GFS lock cache or bug?
Ja S wrote:
> Hi, All:
>
I have an old write-up about GFS lock cache issues. Sh
Ja S wrote:
Hi, All:
I have an old write-up about GFS lock cache issues. Shareroot people had
pulled it into their web site:
http://open-sharedroot.org/Members/marc/blog/blog-on-gfs/glock-trimming-patch/?searchterm=gfs
It should explain some of the confusion. The tunables described in
Gary Romo wrote:
Is there a command that you can run to test/verify that fencing is
working properly?
Or that a node is part of the fence, if you will?
I realize that the primary focus of fencing is to shut off the other
server(s).
However, when I have a cluster up, how can I determine that all
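There is no purely passive check; the usual end-to-end test (destructive! the node name and agent parameters below are placeholders) is to fence a node by hand and confirm it really gets power-cycled:

```
# from another cluster member; node2 WILL be rebooted
fence_node node2

# or query the fence device directly through its agent, e.g. an APC switch:
fence_apc -a <switch-ip> -l <login> -p <password> -n <port> -o status
```

If fence_node succeeds and the victim reboots, the full path (cluster.conf, fenced, agent, device) is known to work.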
Hi, All:
Some time ago I ran 'ls -la', just once, on a
subdirectory containing more than 30,000 small files
on a SAN storage, from Node 5, which sits in the
cluster but does nothing. In other words, Node 5 is an
idle node.
Now when I looked at /proc/cluster/dlm_locks on the
node, I realised t
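To watch the same thing from the GFS side rather than the DLM side, gfs_tool can report how many glocks the node is caching (the mount point here is hypothetical):

```
gfs_tool counters /mnt/ABC
```

The "locks" and "locks held" counters should stay high after the one-off 'ls -la' until the glocks are demoted, which matches what dlm_locks shows.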
--- Christine Caulfield <[EMAIL PROTECTED]> wrote:
> Ja S wrote:
> > --- Ja S <[EMAIL PROTECTED]> wrote:
> >
> > A couple of further questions about the master copy of
> > lock resources.
> >
> > The first one:
> > =
> >
> > Again, assume:
> > 1) Node A
Ja S wrote:
> --- Ja S <[EMAIL PROTECTED]> wrote:
>
> A couple of further questions about the master copy of
> lock resources.
>
> The first one:
> =
>
> Again, assume:
> 1) Node A is extremely busy and handles all
> requests
> 2) other nodes are ju