I had the opportunity to attend Bob's memorial at UWash a couple of weeks
ago. Quite accomplished, both professionally and as a father. While I had
known him via his MACE / Internet2 / Shibboleth work, I didn't know he also
had some AFS involvement. Coming from Stanford, I shouldn't really be
On Sep 22, 2008, at 9:11 AM, Daniel Debertin wrote:
[[ Replying to my own original post for clarification... ]]
Daniel Debertin writes:
I am able to use 'klog' as long as the user I'm authenticating as is
identical to the UNIX user I'm logged in as. If they're different I
get a long delay
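The scenario in question is easy to reproduce with the standard OpenAFS client tools; a minimal sketch, where `alice` is a hypothetical AFS user distinct from the local UNIX login:

```sh
# Obtain an AFS token as a user other than the local UNIX account
klog alice        # prompts for alice's AFS password
tokens            # lists the tokens now held in this PAG; should show alice's AFS ID
```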
On Jul 10, 2008, at 4:29 PM, Chris Kurtz wrote:
We have a Java servlet that is currently pulling data from AFS and
treating it like local disk or an NFS mount.
Is this the best way to do this? Is there a Java API or some way for
servlets to access AFS directly?
For this application,
On Jun 2, 2008, at 11:30 PM, TIARA System Man wrote:
Thank you, Russ. I just checked my CellServDB files on each file
server, and found that one had wrong DB info in it. :$
it's generally good to have at least three DB servers (an odd number
is important!). The two most common causes
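Since the culprit here turned out to be one server's stale CellServDB, a quick cross-server comparison catches this class of problem; a sketch, with hypothetical hostnames:

```sh
# Ask each server which database servers it believes in; any odd one out
# has a stale CellServDB (db1/fs1/fs2 are hypothetical hostnames).
for h in db1 fs1 fs2; do
  echo "== $h"
  bos listhosts $h -noauth
done
```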
On Jun 3, 2008, at 3:53 AM, Stephan Wonczak wrote:
Hi Robert!
On Mon, 2 Jun 2008, Robert Banz wrote:
On Jun 2, 2008, at 11:30 PM, TIARA System Man wrote:
Thank you, Russ. I just checked my CellServDB files on each file
server, and found that one had wrong DB info in it. :$
it's
Verify that the time on your db servers is well synchronized.
-rob
On Jun 2, 2008, at 9:08 PM, TIARA System Man wrote:
dear guys,
I could not move volumes. The following message is what I
encountered:
# vos move home.cfliu maat /vicepa fs /vicepc -verbose
Could not lock entry for
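"Could not lock entry" generally means the VLDB entry is still locked, often by an earlier interrupted operation. A hedged sketch of the usual recovery, reusing the volume from the transcript above:

```sh
vos listvldb home.cfliu        # a locked entry is flagged in the output
vos unlock home.cfliu          # clear the stale VLDB lock
vos move home.cfliu maat /vicepa fs /vicepc -verbose
# if many entries are stuck, vos unlockvldb clears them all at once
```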
[GSOC stuff deleted]
What happened with the AFS web site project?
What about putting it up on Google (summer of) Sites! ;)
-rob
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info
At my last job, we had switched to using ZFS exclusively for our AFS
servers, and had great luck with it. Look back in the archives of this
list for discussion of it, and check out one of my ex-coworker's
presentations from the 2007 AFS workshop on just that subject:
On Apr 21, 2008, at 11:41 AM, Russ Allbery wrote:
Prasun Gupta [EMAIL PROTECTED] writes:
On Solaris the recommended filesystem to use for building an AFS
filesystem is UFS with logging turned off.
Where is this? We should update it. That's the recommendation for a
*cache* file system,
On Apr 21, 2008, at 1:10 PM, Russ Allbery wrote:
Robert Banz [EMAIL PROTECTED] writes:
Well, the issue was if you're using it on a server, and using what
a lot of people still consider the default (the inode fileserver),
apocalyptic data loss may occur.
Oh, right, I completely forgot
On Apr 8, 2008, at 9:16 AM, Christopher D. Clausen wrote:
David Bear [EMAIL PROTECTED] wrote:
I seem to distantly recall some discussion about storing maildir
directories on openafs, but I don't remember if it was safe,
discouraged, or otherwise problematic. Anyone see problems with
putting
On Apr 8, 2008, at 9:49 AM, Russ Allbery wrote:
David Bear [EMAIL PROTECTED] writes:
I seem to distantly recall some discussion about storing maildir
directories on openafs, but I don't remember if it was safe,
discouraged, or otherwise problematic. Anyone see problems with
putting
http://www.nofocus.org/maildir/
If you're interested. The patches are a little out of date, but I
could pull the most up-to-date ones and put them up there if there's
interest.
Personally, I've abandoned them and switched to Cyrus.
-rob
On Apr 8, 2008, at 9:25 AM, Robert Banz wrote
Just curious,
What makes you think running salvage is a good thing? I had gotten to
the point where I would avoid running it like the plague -- using
tools such as fast-restart -- and in the time I was running
fast-restart, which included some rather nasty power events which took
things
On Apr 3, 2008, at 10:06 AM, Chas Williams (CONTRACTOR) wrote:
In message [EMAIL PROTECTED], Robert Banz writes:
What makes you think running salvage is a good thing? I had gotten to
the point where I would avoid running it like the plague -- using
running salvage once in a while
The way I would have implemented this functionality would be for the
file to be moved into the local client's cache and removed from the
file server since the file has now been unlinked and can therefore
not be referenced by other clients. It would then be the client's
responsibility to clean
On Apr 3, 2008, at 1:11 PM, Jeffrey Altman wrote:
Robert Banz wrote:
That wouldn't work, because the file could have been open()'d by
two different cache managers, unlinked by one, but should still be
able to be written to.
That doesn't work. Eventually the cache manager on the machine
That shouldn't be necessary at all.
On Apr 2, 2008, at 10:43 AM, Andrew Bacchi wrote:
I'm considering running a weekly salvage on all file servers from
BosConfig. Is this too often? Any reason not to? What are others
doing? Thanks.
--
veritatis simplex oratio est
Andrew Bacchi
Staff
On Mar 18, 2008, at 7:01 AM, Kim Kimball wrote:
Would this have affected clone operations as well?
It seems it would.
I'm pretty sure, yes.
This is a dangerous approach. Linux is by far the most prevalent of
the free Unixen. If OpenAFS were to stop supporting Linux, sites
wouldn't use that as a reason to migrate away from Linux; they'd use
it as a reason to pick a different file system.
Honestly, the decision isn't ours.
AFS can't really cause SAN issues in that it's just another
application using your filesystem. In some cases, it can be quite a
heavy user of such, but since it's only interacting through the fs, it's
not going to know anything about your underlying storage fabric, or
have any way of
Here's a fragment of what I use on my AFS servers.
You really don't want ipfilter state-tracking your AFS traffic -- if
your cell is reasonably busy, those internal state tables will get
rather big. I just pass in/out the frags -- you
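The fragment itself didn't survive the archive; purely as an illustration, a minimal stateless ipfilter ruleset in the spirit described might look like this. The port numbers are the standard AFS UDP services; the exact rule syntax should be checked against your ipf version:

```
# /etc/ipf/ipf.conf fragment (a sketch, not the original)
# AFS services are all UDP: 7000 fileserver, 7002 ptserver,
# 7003 vlserver, 7005 volserver, 7007 bosserver; 7001 is the
# client callback port.
pass in  quick proto udp from any to any port 6999 >< 7010
pass out quick proto udp from any port 6999 >< 7010 to any
# pass fragments rather than tracking state on them
pass in  quick proto udp from any to any with frag
```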
On Sep 6, 2007, at 22:05, Derrick J Brashear wrote:
On Thu, 6 Sep 2007, Coy Hile wrote:
Hi all,
Has anyone else seen issues with the OpenAFS client causing kernel
panics
on startup on Solaris 10 update 4 (KJP 120011-14) SPARC? I find
that the
servers start fine, but when
memcache is much faster than the disk cache. memcache will not get
any better if no one ever uses it and sends the OpenAFS developers
some bug reports. I think memcache has improved quite a bit (but it
could be better; I need to submit some patches) over the last couple
of years.
I use
On Aug 23, 2007, at 10:49, Kai Moritz wrote:
* slowest: disk cache, of course.
* medium: memory cache
* fastest: ufs filesystem on a lofi-mounted block device hosted in
/tmp (which is in-RAM)
(I know this certainly wastes some cpu/memory resources and
overhead, but... it works)
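The lofi-in-/tmp arrangement described above can be sketched roughly as follows on Solaris (the size and mount point are examples, not the poster's actual values):

```sh
# Build a UFS cache filesystem on a file living in swap-backed /tmp
mkfile 512m /tmp/afscache.img
lofiadm -a /tmp/afscache.img      # prints the device name, e.g. /dev/lofi/1
newfs /dev/rlofi/1                # answer y at the prompt
mount /dev/lofi/1 /usr/vice/cache
# then start afsd with its disk cache pointed at /usr/vice/cache
```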
On Jul 13, 2007, at 16:58, Russ Allbery wrote:
Frank Burkhardt [EMAIL PROTECTED] writes:
I'll take the chance to ask everyone about their filesystem
preferences
for (namei-) AFS data partitions. I'm especially interested in things
like "I used XYfs but moved to YZfs because of XX". Please
A couple things to check, Brian...
1) How large is your RAID-Z2 pool (# of spindles)? If it's rather
large (say, above 8), you might be running into problems from that.
2) Check to see if your fileserver process is fully resident in
memory (not swapped out.) ZFS's ARC can get VERY
I personally wouldn't want my mail storage on AFS. I say that
because, right now, it is, and I can't wait to get it off of it.
It's caused me nothing but problems, because the AFS fileserver
just doesn't seem to be made to handle the transactional intensity
of mail-land. We got
On Jun 26, 2007, at 15:08, Derrick J Brashear wrote:
On Tue, 26 Jun 2007, Robert Banz wrote:
I personally wouldn't want my mail storage on AFS. I say that
because, right now, it is, and I can't wait to get it off of it.
It's caused me nothing but problems, because the AFS fileserver
On Jun 8, 2007, at 09:33, Todd M. Lewis wrote:
Zach wrote:
I was talking to our sys admin about allowing us users to run cgi
programs from our afs accounts (served from $HOME/www, which has
system:anyuser rl) and asked if the web server could do this, and was
told first that the CMU AFS
Cyrus was designed to use a local filesystem with Unix semantics
and a working mmap()/write() combination. AFS doesn't provide
these semantics so won't work correctly.
http://www.ibr.cs.tu-bs.de/cgi-bin/dwww?type=file&location=/usr/share/doc/cyrus21-doc/html/faq.html
Is this still the
Hey all,
Does anyone have a good how-to for setting up and using BSM auditing
on OpenAFS under Solaris? Would also like to know if there are any
performance-related gotchas?
-rob
So, how was this fixed in 1.4.4, other than just turning setuid off
by default?
-rob
On Mar 21, 2007, at 13:42, Derrick J Brashear wrote:
On Wed, 21 Mar 2007, Derek Atkins wrote:
Quoting Derrick J Brashear [EMAIL PROTECTED]:
On Wed, 21 Mar 2007, ted creedon wrote:
Therefore, two cells could be used, one suid and the other for
everything else?
You could, but that's not
On Mar 17, 2007, at 08:48, Jeffrey Altman wrote:
Sergio Gelato wrote:
* Russ Allbery [2007-03-16 15:11:20 -0700]:
Jeff is talking about additional functionality that several of us
would
like to add to the Kerberos KDC that lets you create a new key
(and hence
a keytab and hence
Wouldn't a better key-update-transition plan be:
* create a new key
* stash it in the KeyFile in the next kvno slot
* wait until the servers pick it up
* update the afs key on the kdc to match the new value (make sure it
matches the kvno that you used before)
* profit.
From what I
What is required is functionality in the KDC that says generate a new
key for service X but don't use it yet.
Then you could distribute the key to your servers and after they were
all updated, you could activate the use of the new key.
That functionality could be simulated with a simple script
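Such a simulation might look roughly like this with MIT Kerberos tooling (realm, cell, and hostnames are hypothetical, and `<kvno>` is a placeholder). Note that the window between rekeying on the KDC and finishing distribution is exactly what real KDC support would eliminate:

```sh
# 1. rekey on the KDC, extracting the new key into a staging keytab
kadmin -q "ktadd -k /tmp/afs.keytab afs/example.com@EXAMPLE.COM"
# 2. note the new kvno
klist -k /tmp/afs.keytab
# 3. push the key into each server's KeyFile before tickets with
#    the new kvno start circulating
for h in db1 fs1 fs2; do
  scp /tmp/afs.keytab $h:/tmp/ && \
  ssh $h asetkey add <kvno> /tmp/afs.keytab afs/example.com@EXAMPLE.COM
done
```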
, and seem to have the major bugs now
worked out and feel ready to share.
You'll find the source distribution housed on our wiki page, along
with some instructions and such:
http://www.umbc.edu/oit/iss/syscore/wiki/Mod_waklog
Enjoy...
-rob
Robert Banz
Coordinator, Core Systems
[EMAIL PROTECTED]
On Mar 8, 2007, at 10:20, Jim Rees wrote:
Alexander Al wrote:
I'll tell the user: can't (because he is connecting from outside).
...or, if he has a kerberos gss-api-ticket-passing enabled ssh on his
end, he can kinit to your realm and make the magic happen ;)
-rob
Robert Banz
On Feb 22, 2007, at 7:54 PM, Derrick J Brashear wrote:
On Thu, 22 Feb 2007, Jeffrey Altman wrote:
Tom has proposed that OpenAFS submit a hardware grant request to Sun.
It is believed that we can obtain up to $100,000 in 1U X86 boxes
that we
could use for a test infrastructure. Sun may be
Kris,
We've been seeing this same wonkiness with 11/06 as well. We're
using a locally built openssh4.1 with GSSAPI AFS tkt-getting stuff,
and it's bombing our test sparc system in a similar way.
-rob
On Dec 18, 2006, at 18:03, Kris Kasner wrote:
Hi Folks.
I'm working on integrating
Anyone (CMU folks -- poke poke) have an updated version of adm
that'll build with openafs-1.4 headers/libraries without a lot of
beating?
-rob
I've done it; as far as data integrity goes, it's just fine.
However, I don't know if they've fixed the zfs fsync() bug --
meaning, unless you're running an AFS fileserver/volserver that
have been cleansed of fsync, your performance will be abysmal. With a
capital bad, on the order of
On Nov 8, 2006, at 10:50, Steve Devine wrote:
For years we have maintained classroom 'gateway' boxes that ran an afs
client and exported user space via samba.
These machines were always Suns of some flavor running Solaris.
Now we have been mandated to migrate to x86 and we have been
Just curious,
Is there a way (hacking the code is ok) to require, from the
fileserver side, that authenticated clients encrypt content?
-rob
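For context, the standard knob of that era was client-side (`fs setcrypt`); server-side enforcement wasn't a stock feature, which is why hacking the code comes up at all. The client side, for reference:

```sh
fs setcrypt on    # encrypt traffic on this client's authenticated connections
fs getcrypt       # report the client's current security level
```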
On Oct 25, 2006, at 6:20 PM, Jeffrey Hutzelman wrote:
On Wednesday, October 25, 2006 05:58:46 PM -0400 Robert Banz
[EMAIL PROTECTED] wrote:
Is there a way (hacking the code is ok) to require, from the
fileserver
side, that authenticated clients encrypt content?
Almost
On Oct 12, 2006, at 11:35, Russ Allbery wrote:
Derrick J Brashear [EMAIL PROTECTED] writes:
The tool that Russ Allbery distributes is almost certainly more
actively
maintained.
The problem with it, though, is that you have to have a CPLEX/AMPL
license
to use it.
I've been doing some
I'd be interested in seeing it if only for what stats you're grabbing
and what I could do with them for our own trending.
It's kind of cool to do a quick graph with Crystal Reports to show
the constant growth of some people's home volumes ;) :cough:
mine :cough:
-rob
don't feel the need to say anything here, so I won't.
not needing licenses for restore means nothing about having the
software
be able to run on a current machine.
ie: can you restore on a box 5-10 years from now when you can't
find the
software and can't get it to run on any modern
On Oct 6, 2006, at 04:52, Michal Svamberg wrote:
Hello,
I don't know what rx_ignoreAckedPacket is. I have thousands (up to
5) of rx_ignoreAckedPacket per 15 seconds on the fileserver. The
number of calls is less (up to 1). Is it possible to have ten times
as many rx_ignoreAckedPacket as calls?
First, upgrade your fileserver to an actual production release,
such as 1.4.1. 1.3.81 was pretty good, but not without
problems. (1.4.1 is not without problems, but has fewer.)
We are thinking of that as a one (last) of possibility, but we are
running tens of linux (Debian/stable)
On Oct 5, 2006, at 9:31 AM, Andrew Bacchi wrote:
I've noticed a large amount of data on two vicep partitions that is
not AFS volumes. The data is in a directory tree under the
/vicep?/AFSIDat/ directory, totaling over 8G on one server.
Is that directory normally used as a garbage dump for a
On Sep 18, 2006, at 15:02, Jeffrey Altman wrote:
That could be the bug fixed post 1.4.1
DELTA STABLE14-viced-writevalloc-dont-vtakeoffline-20060510
I had a couple of problems like that lately, but it was only happening
to read-onlys. Which were a pain in the butt, since I had to zap
Right, only that for a correct flock() emulation you'd also have to
hold the necessary locks to prevent another thread from seeking
away between the two calls... ideally something that is independent
of the namei locking. And the code would gain in readability had
the ifdefs been packed
Ok, here's some weirdness for y'all to ponder on. I've seen it on
any recent OpenAFS version I've run (fileserver-wise) (1.2, 1.3,
1.4), and any client I've ever used.
Let's say I delete a VERY large directory from a volume. Very
large. It's got 30,000+ files.
This takes a while.
Could you do some rxdebug calls to the fileserver next time? So we
know why it's getting unresponsive.
It could be running out of threads. I don't expect that, but it
could be ...
The 'symptoms' seem to be, for the most part, volume-specific. Slow
response to accessing that volume,
Sure, a bunch of clients talking to the same directory has scalability
problems, but if I've got a mailbox that is huge enough to have
these problems, it's not something I'm going to be able to effectively read
anyway. Heck, my imap client (backed by afs) only checks mail every 5
Stephan Wiesand wrote:
On Wed, 28 Dec 2005, Derek Atkins wrote:
You don't want AFS for an imap or maildir backend. You should just
Since it's void of any locks, what would be wrong with maildir in AFS?
There's a bunch of things wrong with stock maildir; I've done a lot of
work with it.
A BSD license isn't GPL-compatible either, but a BSD-licensed module
won't taint the kernel.
...and this is why the whole concept of a dynamically loaded object
truly being considered part of the work is totally insane.
-rob
Dan Pritts wrote:
On Tue, Nov 22, 2005 at 08:38:31AM -0500, Joe Buehler wrote:
- AFS storage is organized into volumes, attached to one or more mount
points under the /afs tree. These volumes can be moved from server
to server while they are in use. This is great when you have to
take down a
Tim Spriggs wrote:
Isn't there something about needing a small percentage of space to be able
to keep the ext3/ext2 filesystem from fragmenting too much? Does this
apply here?
Also, is there a problem with running on ext3? I only ask because I know
openafs can not use journaling filesystems
Derrick J Brashear wrote:
On Wed, 21 Sep 2005, ed wrote:
Hello,
Why does transarc.com point to a porn site?
$15/yr is too much for IBM to pay. :)
Ever since IBM sold their PC business, they're looking to find other
profit centers.
-rob
Ok, here's the clarification:
A machine can be a database server or a fileserver, or both.
You have to have at least one machine providing database service. It is
preferable that you have multiple machines -- either 3 or 5 -- 3 is
usually sufficient. It's important that they be an odd
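The "odd number" advice follows from Ubik's majority rule: a sync site needs a strict majority of the DB servers, so an even count tolerates no more failures than the odd count below it. A quick illustration:

```shell
# Ubik elects a sync site only with a strict majority of DB servers,
# so an even count tolerates no more failures than the odd count below it.
for n in 2 3 4 5; do
  echo "$n servers: majority $((n / 2 + 1)), tolerates $((n - n / 2 - 1)) failure(s)"
done
```

Note that 4 servers tolerate one failure just like 3 do, while adding one more machine that can break.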
Did OpenAFS.org need to change the compression type from gz to bz2 for
some reason? I would rather see the most common compression type that
all uncompressors can use. Does OpenAFS.org need a license to use ZIP?
I'd vote for distributing it in both .bz2 and .gz forms. .bz2 is much
more
Recompiling with the Springer Verlag sving6.sty document class produces
textbook quality compositions with automatically numbered tables of
contents, indexes, and appendices. The current version uses the article
class to support the hyperref package and the downstream converters.
Just going
Esther Filderman wrote:
On 6/10/05, ted creedon [EMAIL PROTECTED] wrote:
For what its worth, I think html documentation with hyperlinks is not the
best way to go. It just happened to get done first on the second round of
conversions.
Yes, you've made your bias clear since you started this.
Near as I can tell, the only way to get AFS in a solaris zone is to run
afsd in the global zone. This is because zones are not full
virtualization, but merely isolation from other processes and the
fair-share scheduler to allocate resources to the zones. I have not
tried it, but it seems like
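A sketch of that arrangement (zone name hypothetical): afsd runs only in the global zone, and /afs is loopback-mounted into the local zone via zonecfg:

```
# zonecfg session: lofs-mount the global zone's /afs into zone "webzone"
zonecfg -z webzone
zonecfg:webzone> add fs
zonecfg:webzone:fs> set dir=/afs
zonecfg:webzone:fs> set special=/afs
zonecfg:webzone:fs> set type=lofs
zonecfg:webzone:fs> end
zonecfg:webzone> commit
```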
Just an FYI, everything works with memcache. So, is there some known
junkage with a ufs-cache (non-logging) under Solaris10 now?
-rob
Robert Banz wrote:
Hi,
Been doing some testing/building under Solaris 10 x86, and have come up
with this error while trying to do writes:
x ./lib/afs
Hi,
Been doing some testing/building under Solaris 10 x86, and have come up
with this error while trying to do writes:
x ./lib/afs/libafsutil.a, 102796 bytes, 201 tape blocks
afs: failed to store file (27)
Filesize limit exceeded
and, a corresponding:
Mar 7 13:11:08 test86.umbc.edu afs: