We would like to use a Windows client to authenticate with a Kerberos server
(KDC), get a Windows user's roaming profile, and then map the user's AFS
home space on the machine.
From what I have read, the user's Kerberos credentials have to be mapped to a
Windows user account, defined locally or in
Derrick et al,
~:maverick uname -a
Linux maverick 2.4.21-53.ELsmp #1 SMP Wed Nov 14 03:46:35 EST 2007
x86_64 x86_64 x86_64 GNU/Linux
~:maverick strings /usr/vice/etc/afsd | grep OpenAFS
@(#) OpenAFS 1.4.6 built 2008-03-04
~:maverick pwd
/afs/rcf/user/jblaine
~:maverick tar xf
On Mon, Apr 21, 2008 at 12:40 PM, Jeff Blaine [EMAIL PROTECTED] wrote:
Derrick et al,
~:maverick uname -a
Linux maverick 2.4.21-53.ELsmp #1 SMP Wed Nov 14 03:46:35 EST 2007 x86_64
x86_64 x86_64 GNU/Linux
~:maverick strings /usr/vice/etc/afsd | grep OpenAFS
@(#) OpenAFS 1.4.6 built
If you wish to test something, please test 1.4.7-pre3
http://www.openafs.org/release/openafs-1.4.7pre3.html
Jeff Blaine wrote:
Derrick et al,
~:maverick uname -a
Linux maverick 2.4.21-53.ELsmp #1 SMP Wed Nov 14 03:46:35 EST 2007
x86_64 x86_64 x86_64 GNU/Linux
~:maverick strings
Is there substantial reason to believe that this has been
addressed between 1.4.6 and 1.4.7pre3? The boxes that (I
would guess) experience this are beefy/fast boxes. Our
hosts in question are production machines, not ones we can
perform OpenAFS testing on unless there is a clear case
for the
On Mon, Apr 21, 2008 at 12:55 PM, Jeff Blaine [EMAIL PROTECTED] wrote:
Is there substantial reason to believe that this has been
addressed between 1.4.6 and 1.4.7pre3?
Nope. I can pretty much assure you it's not fixed there.
On Apr 21, 2008, at 12:46 PM, Derrick Brashear wrote:
Obviously we need to revisit this. For the record I have never
produced it on my own test hardware.
I've never seen this occur on any of our numerous Linux machines.
Granted, they're running 2.6.x and not 2.4.x.
--
Mike Garrison
In message [EMAIL PROTECTED], Mike Garrison writes:
On Apr 21, 2008, at 12:46 PM, Derrick Brashear wrote:
Obviously we need to revisit this. For the record I have never
produced it on my own test hardware.
I've never seen this occur on any of our numerous Linux machines.
Granted, they're
On Solaris the recommended filesystem for building AFS vice partitions is
UFS with logging turned off. This is a very primitive filesystem, and it
lacks many of the features found in newer filesystems.
Has anybody used ZFS successfully, and in what configuration?
a) striped ZFS
At my last job, we had switched to using ZFS exclusively for our AFS
servers, and had great luck with it. Look back in the archives of this
list for discussion of it, and check out one of my ex-coworker's
presentations from the 2007 AFS workshop on just that subject:
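For readers wanting a concrete starting point, a hypothetical sequence for carving out a ZFS-backed vice partition might look like the following. The pool name, disk devices, and mount point are all invented for illustration; this is a sketch, not a tested recipe from the thread.

```shell
# Hypothetical names/devices -- a sketch of a mirrored ZFS vice partition
# on Solaris for an OpenAFS namei fileserver.
zpool create vicepool mirror c0t1d0 c0t2d0
zfs create vicepool/vicepa
zfs set mountpoint=/vicepa vicepool/vicepa
# Compression is optional; whether it helps depends on the workload.
zfs set compression=on vicepool/vicepa
```

Mirroring (or raidz) is the usual choice over plain striping here, since a striped pool loses everything if any one disk fails.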
On Apr 21, 2008, at 4:23 , Didi wrote:
If by unknown you mean nameless, that's not what the patch does.
Such a patch would not even have been considered.
I agree that hiding this information in some cases might not be
optimal, but the main problem is that through this the 'groups'
command
Chas Williams (CONTRACTOR) wrote:
In message [EMAIL PROTECTED], Mike Garrison writes:
On Apr 21, 2008, at 12:46 PM, Derrick Brashear wrote:
Obviously we need to revisit this. For the record I have never
produced it on my own test hardware.
I've never seen this occur on any of our numerous
Prasun Gupta [EMAIL PROTECTED] writes:
On Solaris the recommended filesystem for building AFS vice partitions is
UFS with logging turned off.
Where is this? We should update it. That's the recommendation for a
*cache* file system, but not for the server.
--
Russ Allbery ([EMAIL
Didi [EMAIL PROTECTED] writes:
the main problem is that through this the 'groups'
command becomes utterly useless and confused quite a lot of users.
$ groups
users
id: cannot find name for group ID 1091323188
If you would like that numeric groupid to resolve to some alphanumeric
group name,
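For what it's worth, the mystery GID in the example decodes plausibly as a PAG. A sketch, assuming (as I understand it, not from this thread) that new-style OpenAFS PAGs on Linux appear as a single supplementary GID in the 0x41000000 range (top byte 'A' == 0x41) with no /etc/group entry:

```shell
# Sketch: recognize a (new-style) OpenAFS PAG GID by its top byte.
# 1091323188 is the GID from the example above; 1091323188 == 0x410d27b4.
gid=1091323188
if [ $(( gid >> 24 )) -eq $(( 0x41 )) ]; then
    echo "$gid looks like an OpenAFS PAG GID"
else
    echo "$gid is an ordinary GID"
fi
```

That explains why `id` cannot resolve it: the GID is synthesized by the AFS client, not present in any group database.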
On Apr 21, 2008, at 11:41 AM, Russ Allbery wrote:
Prasun Gupta [EMAIL PROTECTED] writes:
On Solaris the recommended filesystem for building AFS vice partitions is
UFS with logging turned off.
Where is this? We should update it. That's the recommendation for a
*cache* file system,
Robert Banz [EMAIL PROTECTED] writes:
Well, the issue was if you're using it on a server, and using what a lot
of people still consider the default (the inode fileserver), apocalyptic
dataloss may occur.
Oh, right, I completely forgot about that.
I would say that in addition to recommending
Russ Allbery wrote:
I would say that in addition to recommending people use logging with ufs
(or better, zfs!), that we should also push for deprecation of the inode
fileserver ;)
I think it's generally a good idea to stick with one server implementation
on all platforms since that way
We currently run with a cache set at boot time at 75% of the partition
size, and this has reduced the frequency of the problem to close enough
to zero for us. At previous higher values (85% ??) we still saw this on
an infrequent but regular basis (across 100s of hosts).
Every one of our
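The 75%-of-partition sizing described above can be computed at boot with something along these lines. This is only a sketch: "/" stands in for the real cache partition (typically /usr/vice/cache) so the example runs anywhere, and the actual afsd invocation and cacheinfo handling are left out.

```shell
# Sketch: size the AFS client cache at 75% of its partition at boot.
# "/" stands in for the real cache partition, e.g. /usr/vice/cache.
part=/
# df -Pk gives POSIX single-line output in 1K blocks; field 2 of the
# second line is the partition's total size.
total_kb=$(df -Pk "$part" | awk 'NR==2 {print $2}')
cache_kb=$(( total_kb * 75 / 100 ))
echo "cache: $cache_kb of $total_kb 1K blocks"
# The client would then be started with, e.g.: afsd -blocks "$cache_kb" ...
```

Keeping the cache well below the partition size leaves headroom for the cache manager's bookkeeping overshoot, which is presumably why 75% behaves better than 85% here.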
On Apr 21, 2008, at 1:10 PM, Russ Allbery wrote:
Robert Banz [EMAIL PROTECTED] writes:
Well, the issue was if you're using it on a server, and using what a lot
of people still consider the default (the inode fileserver), apocalyptic
dataloss may occur.
Oh, right, I completely forgot
Robert Banz [EMAIL PROTECTED] writes:
On Apr 21, 2008, at 1:10 PM, Russ Allbery wrote:
I think it's generally a good idea to stick with one server
implementation on all platforms since that way everyone runs the same
(tested) code, but I seem to recall the migration from inode to namei is
Russ Allbery wrote:
Robert Banz [EMAIL PROTECTED] writes:
On Apr 21, 2008, at 1:10 PM, Russ Allbery wrote:
I think it's generally a good idea to stick with one server
implementation on all platforms since that way everyone runs the same
(tested) code, but I seem to recall the
Jason Edgecombe wrote:
Russ Allbery wrote:
Robert Banz [EMAIL PROTECTED] writes:
On Apr 21, 2008, at 1:10 PM, Russ Allbery wrote:
I think it's generally a good idea to stick with one server
implementation on all platforms since that way everyone runs the same
(tested) code, but I
Jason Edgecombe [EMAIL PROTECTED] writes:
It does error out. A namei fileserver will refuse to start and log an
error message if a vice partition used to be inode. This happens even if
you run rm -fr *. I had to run mkfs/newfs on my vice partitions in
order to switch formats -- after moving
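A quick way to check which format a vice partition is in before attempting a switch, assuming (my understanding, not stated in the thread) that namei partitions keep their volume data under an AFSIDat directory while inode partitions have no such tree:

```shell
# Heuristic sketch: namei vice partitions store volumes under AFSIDat.
vice_format() {
    if [ -d "$1/AFSIDat" ]; then
        echo "namei"
    else
        echo "inode (or empty)"
    fi
}
vice_format /vicepa
```

This is only a heuristic; the fileserver's own startup check (as described above) remains the authoritative test.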
Russ Allbery wrote:
Okay, that makes me feel better about changing defaults, although we
probably need to keep providing inode packages as well, so it would
probably mean two builds for Solaris.
Or we should be providing a single package that includes both binaries
and permits the correct one
In message [EMAIL PROTECTED], David Thompson writes:
I suspect you will only see this bug if the filesystem containing the
cache is very close to full.
We currently run with a cache set at boot time at 75% of the partition
size, and this has reduced the frequency of the problem to close enough
This seems to have been discussed quite a bit already, but we also moved
our fileservers from inode to namei on ZFS in Solaris. Fortunately, this
was part of a hardware upgrade, so all I had to do was set up the new
file servers using ZFS, and use vos move to move over the volumes. Once
it was
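A sketch of that kind of migration, for reference. The server and partition names are invented, and the vos listvol parsing is approximate; treat it as an outline, not a tested script.

```shell
# Sketch only: move every RW volume from an old (inode) vice partition
# to a new (namei/ZFS) fileserver.  "oldserver", "newserver", and
# "vicepa" are placeholder names.
vos listvol oldserver vicepa |
    awk '/ RW /{ print $1 }' |
    while read vol; do
        vos move "$vol" oldserver vicepa newserver vicepa -localauth
    done
```

Since vos move copies data while the volume stays online, this kind of migration can be done gradually with little user-visible downtime.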
I am out of the office until 04/28/2008.
I'm traveling on business and will be checking email on a daily basis, but
responses may be delayed.
If this is an emergency, contact my manager Paula Stewart.
Note: This is an automated response to your message [OpenAFS-announce]
Google Summer of Code