Well, half a million on its own does not account for the time. But if one were
to add heavily loaded servers, a slower interconnect, and a high percentage of
shared resources, the numbers could add up.
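As a rough back-of-envelope check of how those numbers add up (the per-resource millisecond costs below are illustrative assumptions, not measurements from any cluster):

```shell
# Back-of-envelope: time to migrate N lock resources at umount, at an
# assumed per-resource cost (one interconnect round trip + processing).
# The millisecond figures are illustrative assumptions, not measurements.
locks=576842
awk -v n="$locks" 'BEGIN {
    printf "fast (1.0 ms/resource): %.1f min\n", n * 1.0 / 1000 / 60
    printf "slow (4.5 ms/resource): %.1f min\n", n * 4.5 / 1000 / 60
}'
```

With those assumed costs the two figures come out around 9.6 and 43.3 minutes, which shows how a loaded cluster can creep toward the times reported below.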
I mean, this is a fairly old release. We have made improvements since then.
Having said that, the biggest improvem
So I now have two figures from two different clusters. Both are quite slow
during restarts, and both have two filesystems mounted.
Cluster1 (the one that took very long last time):
Cluster locks held by filesystem:
1788AD39151A4E76997420D62A778E65: 274258 locks
1EFA64C36FD54AB48B734A99E7F45A73: 576842 locks
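For anyone wanting to reproduce such figures: on kernels of this vintage the o2dlm state is typically exposed through debugfs, so a loop over the per-domain directories can count lock resources. The path and the one-lock-per-line layout of `locking_state` are assumptions here and vary by kernel release, so treat this as a sketch, not a supported interface:

```shell
# Sketch: count o2dlm lock resources per domain (the domain name is the
# filesystem UUID).  Assumes debugfs is mounted at /sys/kernel/debug and
# that each line of locking_state describes one lock -- both of which
# vary by kernel release.
for d in /sys/kernel/debug/o2dlm/*/; do
    [ -d "$d" ] || continue          # skip if no o2dlm domains exist
    domain=$(basename "$d")
    locks=$(wc -l < "$d/locking_state" 2>/dev/null || echo unknown)
    printf '%s: %s locks\n' "$domain" "$locks"
done
```

This only reads debug state, but as with anything under debugfs, verify it on a test node before running it on a production cluster.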
It was designed to run in prod envs.
On 07/07/2011 12:21 AM, Marc Grimme wrote:
> Sunil,
> can I query those figures at runtime on a production cluster?
> Or might it affect availability, performance, or anything else?
>
> Thanks for your help.
> Marc.
> - "Sunil Mushran" wrote:
>
>> umount
Sunil,
can I query those figures at runtime on a production cluster?
Or might it affect availability, performance, or anything else?
Thanks for your help.
Marc.
- "Sunil Mushran" wrote:
> umount is a two step process. First the fs frees the inodes. Then the
> o2dlm takes stock of all active resources and migrates ones that are
> still in use.
umount is a two step process. First the fs frees the inodes. Then the
o2dlm takes stock of all active resources and migrates ones that are
still in use. This typically takes some time, but I have never heard
of it taking 45 mins.
But I guess it could if one has a lot of resources. Let's start by
Hi,
we are using SLES10 Patchlevel 3 with 12 nodes hosting tomcat application
servers.
The cluster was running for some time (about 200 days) without problems.
Recently we needed to shut down the cluster for maintenance and experienced very
long times for the umount of the filesystem. It took something like 45 minutes.