> allocation failed: out of vmalloc space - use vmalloc= to increase
> size.
> afs_osi_Alloc: Can't vmalloc 16384 bytes.
> afsd: memCache allocation failure at 58576 KB.
> afsd: memory cache too large for available memory.
> afsd: AFS files cannot be accessed.
>
> found 0 non-empty cache fi
On Wed, Aug 24, 2005 at 03:10:02PM -0400, chas williams - CONTRACTOR wrote:
> In message <[EMAIL PROTECTED]>, Troy Benjegerdes writes:
> >avg.pl /var/cache/openafs | head
> >72068 files
> >1824410029 total bytes
> >25315 avg bytes
> >
> >[EMAIL PROTECTED]:/afs/hozed.org$ fs getcache
> >AFS using
On Wednesday, August 24, 2005 04:02:31 PM -0400 William Setzer
<[EMAIL PROTECTED]> wrote:
/usr/afs/logs # pstack 975
975:/usr/afs/bin/volserver
ff19e89c read (3, 1e6663, 1)
0003c02c FSYNC_askfs (201a52c2, 1e68f8, 95400, 2, 201a52c2, 1) + 88
00038e50 VAttachVolumeByName_r (1e697c, 1e6
Some months back, there was a thread about volserver hangs in the
OpenAFS 1.2.X series:
https://lists.openafs.org/pipermail/openafs-devel/2005-April/011872.html
I'm afraid I didn't quite follow the entire conversation, so I was
wondering: Is this problem fixed in the OpenAFS 1.3.X (and presumab
In message <[EMAIL PROTECTED]>, Troy Benjegerdes writes:
>avg.pl /var/cache/openafs | head
>72068 files
>1824410029 total bytes
>25315 avg bytes
>
>[EMAIL PROTECTED]:/afs/hozed.org$ fs getcache
>AFS using 10% of cache blocks (2061671 of 2000 1k blocks)
> 2% of the cache files (99
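The avg.pl script itself never appears in the thread, so the following is
only a sketch of what it presumably computes: file count, total bytes, and
average bytes over one directory level of a cache. The program name and the
non-recursive walk are assumptions; a real OpenAFS disk cache nests its
files under D-subdirectories, so a faithful version would recurse.

    /* cachestat.c - hypothetical re-creation of what avg.pl seems to do.
     * Build: cc -o cachestat cachestat.c
     * Run:   ./cachestat /var/cache/openafs
     */
    #include <stdio.h>
    #include <dirent.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        DIR *d;
        struct dirent *e;
        struct stat st;
        char path[4096];
        unsigned long long files = 0, bytes = 0;

        if (argc != 2 || !(d = opendir(argv[1]))) {
            fprintf(stderr, "usage: %s <cachedir>\n", argv[0]);
            return 1;
        }
        while ((e = readdir(d)) != NULL) {
            snprintf(path, sizeof(path), "%s/%s", argv[1], e->d_name);
            if (stat(path, &st) == 0 && S_ISREG(st.st_mode)) {
                files++;
                bytes += (unsigned long long)st.st_size;
            }
        }
        closedir(d);
        printf("%llu files\n%llu total bytes\n%llu avg bytes\n",
               files, bytes, files ? bytes / files : 0);
        return 0;
    }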
On Aug 24, 2005, at 13:08:23, Kevin Coffman wrote:
On Wed, 24 Aug 2005, Kevin Coffman wrote:
It would be nice to have some discussion about how OpenAFS plans to use
the keyring.
As long as the discussion is clear from the start that we are looking for
a session semantic, one where key acce
> On Wed, 24 Aug 2005, Kevin Coffman wrote:
>
> >
> > It would be nice to have some discussion about how OpenAFS plans to use
> > the keyring.
>
> As long as the discussion is clear from the start that we are looking for
> a session semantic, one where key access is not tied to a uid, but instea
On Wed, 24 Aug 2005, Kevin Coffman wrote:
It would be nice to have some discussion about how OpenAFS plans to use
the keyring.
As long as the discussion is clear from the start that we are looking for
a session semantic, one where key access is not tied to a uid, but instead
that the key ca
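A minimal illustration of the session semantic being discussed, using the
standard Linux keyutils API: a key on the session keyring is visible to
every process in the session, even across setuid(), rather than being tied
to one uid. The "user" key type, the "afs:example.org" description, and the
dummy payload are assumptions for the demo; this is not how OpenAFS
actually stores tokens.

    /* session-key.c - sketch of a session-scoped (not uid-scoped) key.
     * Build: cc -o session-key session-key.c -lkeyutils
     */
    #include <stdio.h>
    #include <string.h>
    #include <keyutils.h>

    int main(void)
    {
        /* Start a fresh session keyring; children inherit it. */
        if (keyctl_join_session_keyring("afs-demo") < 0) {
            perror("keyctl_join_session_keyring");
            return 1;
        }
        /* Attach a (dummy) token to the session keyring. */
        key_serial_t k = add_key("user", "afs:example.org",
                                 "dummy-token", strlen("dummy-token"),
                                 KEY_SPEC_SESSION_KEYRING);
        if (k < 0) {
            perror("add_key");
            return 1;
        }
        printf("token key id: %d\n", k);
        return 0;
    }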
On Wed, Aug 24, 2005 at 10:36:40AM -0400, chas williams - CONTRACTOR wrote:
> In message <[EMAIL PROTECTED]>, Troy Benjegerdes writes:
> >>fs getcacheparms
> >AFS using 64% of cache blocks (12751138 of 2000 1k blocks)
> > 2% of the cache files (8242 of 50 files)
>
>
It would be nice to have some discussion about how OpenAFS plans to use
the keyring.
--- Forwarded Message
From: Trond Myklebust <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: Tue, 23 Aug 2005 17:48:18 -0700
Cc: [EMAIL PROTECTED]
Subject: [OT] Mailing list set up for discussion of kernel k
In message <[EMAIL PROTECTED]>, Troy Benjegerdes writes:
>>fs getcacheparms
>AFS using 64% of cache blocks (12751138 of 2000 1k blocks)
> 2% of the cache files (8242 of 50 files)
this is really cool! a step in the right direction. can you also
compute/print out
I agree, does look quite useful.
However, in the interests of backwards compatibility, I would probably
suggest adding a "-detailed" option or a new command name (fs
getcacheinfo) as opposed to just changing the output of getcacheparms.
-- Nathan
--
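A sketch of the kind of extra detail such a command might print. The inputs
here are plain numbers taken from Troy's report earlier in the thread; a
real command would fetch them from the cache manager via a pioctl, which is
not shown, and all field names are illustrative.

    /* getcacheinfo-demo.c - hypothetical detailed cache report. */
    #include <stdio.h>

    static void print_cache_detail(unsigned long blocks_used,
                                   unsigned long blocks_total,
                                   unsigned long files_used,
                                   unsigned long files_total)
    {
        printf("AFS using %lu%% of cache blocks (%lu of %lu 1k blocks)\n",
               blocks_total ? 100 * blocks_used / blocks_total : 0,
               blocks_used, blocks_total);
        printf("          %lu%% of the cache files (%lu of %lu files)\n",
               files_total ? 100 * files_used / files_total : 0,
               files_used, files_total);
        if (files_used)   /* average 1k blocks per in-use cache file */
            printf("          avg %lu 1k blocks per in-use file\n",
                   blocks_used / files_used);
    }

    int main(void)
    {
        /* The (clearly inconsistent) figures from Troy's report. */
        print_cache_detail(12751138, 2000, 8242, 50);
        return 0;
    }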
In message <[EMAIL PROTECTED]>, Stefaan writes:
>I found some other info in dmesg which I overlooked first, but
>nevertheless displaying info and then crashing seems a bit harsh:
i agree. afs should check to make sure your request is reasonable.
send this to [EMAIL PROTECTED] and suggest that some
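A sketch of the sanity check being suggested: refuse an obviously
unsatisfiable memcache request up front instead of failing partway through
the allocation. The 128 MB budget is an assumption (the usual default
vmalloc arena on 32-bit x86 of this era); a real check would query the
kernel, and the function name is hypothetical.

    /* memcache-check.c - illustrative pre-flight size check for afsd. */
    #include <stdio.h>

    #define VMALLOC_BUDGET_KB (128 * 1024)   /* assumed upper bound */

    static int memcache_size_ok(unsigned long cache_kb)
    {
        if (cache_kb > VMALLOC_BUDGET_KB / 2) {
            fprintf(stderr,
                    "afsd: memcache of %lu KB unlikely to fit in vmalloc "
                    "space; refusing to start\n", cache_kb);
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        /* 58576 KB is where the allocation actually failed above. */
        return memcache_size_ok(58576) ? 0 : 1;
    }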
In message <[EMAIL PROTECTED]>, Jeffrey Hutzelman writes:
>Well, right now we use two numbers. One is a constant; the other is a
>function of the chunk size. It sounds like you're arguing for eliminating
>the constant, or at least limiting its effect as the cache size grows very
>large. Fine,
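One possible reading of the two numbers under discussion, as a sketch: a
constant floor on the file count plus a term derived from cache size and an
assumed average file size (10K historically, 32K after the bump discussed
later in this thread). The constants and names here are illustrative, not
the actual afsd autotuning code.

    /* autotune-shape.c - shape of the cache-file-count computation. */
    #include <stdio.h>

    #define BASE_FILES   1000   /* the "constant" term (illustrative) */
    #define AVG_FILE_KB  32     /* assumed average file size          */

    static long cache_files(long cache_kb)
    {
        return BASE_FILES + cache_kb / AVG_FILE_KB;
    }

    int main(void)
    {
        printf("20 GB cache -> %ld cache files\n",
               cache_files(20L * 1024 * 1024));
        return 0;
    }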
On Wed, Aug 24, 2005 at 12:27:03PM +0200, Martin MOKREJ? wrote:
> Hi,
> I have placed "ls -laSR" output of our afs tree. Unfortunately I
> do not have the time to process the raw data, but hope you can gather
> something out of it. The cachesizes on the server are at the moment
> configured for onl
On Wed, Aug 24, 2005 at 07:38:57AM -0400, Todd M. Lewis wrote:
>
>
> Jeffrey Hutzelman wrote:
> >
> >We won't know unless we can collect and analyze some data. Until we do
> >that, repeatedly making changes to the autotuning algorithm isn't going
> >to make things better; it's just going to ma
Jeffrey Hutzelman wrote:
We won't know unless we can collect and analyze some data. Until we do
that, repeatedly making changes to the autotuning algorithm isn't going
to make things better; it's just going to make it unpredictable.
So, let's hear from people who are actually using large
Hi!
I have a 2.6.12-gentoo-r4 kernel, single CPU p4, SMP (HT) enabled,
preemption disabled. I'm running openafs 1.3.87.
When I start "afsd" with the parameters -memcache -chunksize 14 -afsdb
-dynroot, and when I have the following /etc/openafs/cacheinfo:
/afs:/usr/vice/cache:50 (When using
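For context: -chunksize n gives memcache chunks of 2^n bytes, so
-chunksize 14 means 16384-byte chunks, exactly the allocation size that
fails in the afs_osi_Alloc message above. A quick check of how many
vmalloc'd chunks a given memcache size implies (the figures are from the
failure report; the program itself is just arithmetic):

    #include <stdio.h>

    int main(void)
    {
        unsigned long chunk_bytes = 1UL << 14;   /* -chunksize 14        */
        unsigned long cache_kb    = 58576;       /* reported failure pt. */
        printf("chunk size  : %lu bytes\n", chunk_bytes);
        printf("chunks so far: %lu\n", cache_kb * 1024 / chunk_bytes);
        return 0;
    }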
I found some other info in dmesg which I overlooked first, but
nevertheless displaying info and then crashing seems a bit harsh:
Found system call table at 0xc048f780 (pattern scan)
Starting AFS cache scan...<4>allocation failed: out of vmalloc space - use
vmalloc= to increase size.
allocation fa
Hi,
I have placed "ls -laSR" output of our afs tree. Unfortunately I
do not have the time to process the raw data, but hope you can gather
something out of it. The cachesizes on the server are at the moment
configured for only 4GB, although a lot more is available on the partition.
I just used to h
When doing a backup with the AFS 1.2.13 backup system, some volume sizes
are obviously displayed erroneously. However, a restore of such a volume
succeeds.
Case 1: A volume with approximately 2.3GB was dumped. After a "backup
dumpinfo -id 1123250981", I get the following:
38 08/05/2005 16:08 2390
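One plausible cause of this class of display bug, offered here only as an
assumption (the thread does not confirm it): a volume size above 2^31 bytes
stored in a signed 32-bit field. 2.3 GB does not fit, so the displayed
value wraps while the dump data itself stays intact, which would be
consistent with the restore succeeding. A minimal illustration:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        long long actual = 2469606195LL;     /* roughly 2.3 GB          */
        int32_t   field  = (int32_t)actual;  /* truncated 32-bit store  */
        printf("actual: %lld bytes, displayed: %d\n", actual, field);
        return 0;
    }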
On Tue, 23 Aug 2005, Jeffrey Hutzelman wrote:
In message <[EMAIL PROTECTED]>, Jeffrey Hutzelman writes:
67%-full assumption is incorrect. Perhaps we were too conservative in
bumping the average filesize assumption from 10K to 32K, and it should
really be bigger.
i don't believe that there shou
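(For scale: the avg.pl figures Troy posted earlier in this thread work out
to 1824410029 bytes / 72068 files, roughly 25 KB per file, which falls
between the old 10K assumption and the new 32K one.)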