Hi,
Thanks for the link. The problem is that the clients have the same UUID
because they have the same SID. This problem can be seen in hosts.dump
(kill -XCPU pid_of_fileserver) near the lines containing the string
"lock:", for example:
---cut---
ip:360de493 port:7001 hidx:251 cbid:16297 lock:
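Under the assumption that each host entry in hosts.dump carries an `ip:` and a `uuid:` field (the dump location and field layout vary by version, so verify against your own dump), a quick scan for UUIDs claimed by more than one client IP might look like:

```shell
# Ask the running fileserver to write its host table, then scan it for
# UUIDs that several client IPs share. Assumptions: hosts.dump lands in
# the fileserver's working directory (commonly /usr/afs/local), and each
# entry carries "ip:<hex>" and "uuid:<hex>" fields -- check your dump's
# actual layout first.
kill -XCPU "$(pidof fileserver)"
sleep 2
awk '
  match($0, /ip:[0-9a-f.]+/)   { ip = substr($0, RSTART + 3, RLENGTH - 3) }
  match($0, /uuid:[0-9a-f-]+/) {
      u = substr($0, RSTART + 5, RLENGTH - 5)
      seen[u] = seen[u] " " ip
  }
  END {
      for (u in seen)
          if (split(seen[u], a, " ") > 1)
              print "uuid " u " claimed by:" seen[u]
  }
' /usr/afs/local/hosts.dump
```

Any UUID printed with two or more IPs is a candidate clone.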
On Sun, 5 Nov 2006, Michal Svamberg wrote:
> Hi,
> Thanks for the link. The problem is that the clients have the same UUID
> because they have the same SID. This problem can be seen in hosts.dump
Ugh.
I have a question about this problem: would you consider a new option
limiting the maximum number of clients with the same UUID that can
connect to the fileserver?
Michal Svamberg wrote:
> Hi,
> Thanks for the link. The problem is that the clients have the same UUID
> because they have the same SID.
Are you saying that these are Windows machines and that (a) you cloned
the machines and did not delete the AFSCache or (b) are running cloned
machines with OAFW
On Sun, 5 Nov 2006, Michal Svamberg wrote:
> Thanks for the link. The problem is that the clients have the same UUID
Don't do this. UUID stands for Universally Unique IDentifier; each
client _MUST_ have a different UUID. If two or more clients have the same
UUID, then the fileserver thinks that
Michal Svamberg wrote:
> I have a question about this problem: would you consider a new option
> limiting the maximum number of clients with the same UUID that can
> connect to the fileserver? Or writing a warning message to FileLog
> (without debug)?
> In my opinion, it is not good that clients are able to shut down a server.
We upgraded the file servers to 1.4.1 (built 2006-05-05), but it did not
solve the meltdown. The fileserver runs in large mode, and the meltdown
behaves like this:
- first 10 min: the 12 idle threads get used up, only 2 idle threads
  remain (reported by rxdebug)
- wprocs: counting up from zero
- over roughly the next 10 min: up to 300
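For reference, a sketch of how those thread numbers can be pulled from rxdebug (the host name is an example; AFS fileservers listen on port 7000, and rxdebug's summary includes idle-thread and waiting-call lines):

```shell
# One-shot check of the fileserver's Rx state. "-noconns" skips the
# per-connection listing and keeps just the summary, which contains
# lines like "N calls waiting for a thread" and "M threads are idle".
# fs1.example.com is a placeholder for your fileserver.
rxdebug fs1.example.com 7000 -noconns | grep -Ei 'idle|waiting'
```

Running this every 15 seconds or so during a meltdown shows the idle count collapsing and the waiting-call count climbing.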
On Tue, 10 Oct 2006, Michal Svamberg wrote:
> We upgraded the file servers to 1.4.1 (built 2006-05-05), but it did not
> solve the meltdown.
Get a backtrace when the fileserver is not responding.
On a whim, you might also try this patch:
http://grand.central.org/rt/Ticket/Display.html?id=19461
Hello,
I don't know what rx_ignoreAckedPacket is. I see thousands (up to 5,000)
of rx_ignoreAckedPacket increments per 15 seconds on the fileserver. The
number of calls is lower (up to 1,000). Can rx_ignoreAckedPacket really
be ten times the number of calls?
We have this infrastructure:
Fileservers (large mode): OpenAFS 1.3.81
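rx_ignoreAckedPacket is one of the Rx statistics counters; as I understand the Rx transmit path, it is bumped when a packet queued for (re)transmission is skipped because the peer has already acknowledged it, so large values mostly indicate retransmission pressure rather than one increment per call. A sketch for sampling its growth (the host name, and the exact label and column order in the -rxstats output, are assumptions to verify against your rxdebug):

```shell
# Sample the counter every 15 s and print the per-interval delta.
# Assumes "rxdebug <host> 7000 -rxstats" prints a line where a number
# precedes an "ignoreAckedPacket" label -- adjust the awk if your
# rxdebug formats it differently.
prev=0
while sleep 15; do
    cur=$(rxdebug fs1.example.com 7000 -rxstats |
          awk '/ignoreAckedPacket/ { print $1; exit }')
    cur=${cur:-0}
    echo "ignoreAckedPacket: $cur (+$((cur - prev)) in 15 s)"
    prev=$cur
done
```

Comparing the delta against the call rate over the same window tells you how many skipped retransmissions each call is generating.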
On Oct 6, 2006, at 04:52, Michal Svamberg wrote:
> Hello,
> I don't know what rx_ignoreAckedPacket is. I see thousands (up to 5,000)
> of rx_ignoreAckedPacket increments per 15 seconds on the fileserver. The
> number of calls is lower (up to 1,000). Can rx_ignoreAckedPacket really
> be ten times the number of calls?
First,
On Fri, 6 Oct 2006, Robert Banz wrote:
> On Oct 6, 2006, at 04:52, Michal Svamberg wrote:
>> Hello,
>> I don't know what rx_ignoreAckedPacket is. I see thousands (up to 5,000)
>> of rx_ignoreAckedPacket increments per 15 seconds on the fileserver. The
>> number of calls is lower (up to 1,000).
Hi Robert,
and many thanks for your reply!
Let me answer on behalf of Michal - we are co-workers and Michal has left
on vacation for a couple of days ;-)
Robert Banz wrote:
> First, upgrade your fileserver to an actual production release, such
> as 1.4.1. 1.3.81 was pretty good, but not without problems. (1.4.1 is
> not without problems, but fewer.)
We are thinking of that as one (last) possibility, but we are
running tens of Linux (Debian/stable)