[OpenAFS] Retirement of grand.central.org AFSDB records

2019-03-05 Thread Jeffrey Hutzelman
TL;DR: AFSDB records for grand.central.org and openafs.org will go away at the 
end of March.


Over the next several months, we'll be making a number of changes and 
improvements to the infrastructure behind grand.central.org and openafs.org. 
Much of this work will be mostly or completely transparent, but from time to 
time, we'll be announcing changes visible to the community in some way. This 
message discusses one such change.


The use of DNS SRV records to identify and locate AFS database servers was 
originally defined in RFC5864, published in April, 2010. This has been 
supported in OpenAFS since version 1.6.0, released in 2011. So, we feel fairly 
confident that the majority of clients include this functionality. The use of 
AFSDB records for this purpose, originally defined in RFC1183, has now been 
deprecated for many years. As with many older, obsolete, and uncommon RRTYPEs, 
a number of DNS implementations no longer support publishing AFSDB RRs.
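For illustration, here is roughly what the two record types look like in a zone file (the names and TTLs below are for a hypothetical example.com cell, not the actual grand.central.org records; 7003 is the VL server port):

```
; RFC 5864 SRV records locating the VLDB servers (priority 0, weight 0):
_afs3-vlserver._udp.example.com.  3600 IN SRV 0 0 7003 db1.example.com.
_afs3-vlserver._udp.example.com.  3600 IN SRV 0 0 7003 db2.example.com.
; The deprecated RFC 1183 equivalent being retired (subtype 1 = AFS cell
; database server):
example.com.                      3600 IN AFSDB 1 db1.example.com.
```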


In order to support eventual migration of DNS for central.org and openafs.org 
off of CMU infrastructure, we intend to remove the AFSDB records for the 
grand.central.org cell and for openafs.org (which is really the same cell). 
This change will be made on or around the end of March, 2019. Existing SRV 
records will remain, and no changes are currently planned to the names or IP 
addresses of the database servers for this cell.


-- Jeff


Re: [OpenAFS] afs stalled for large files, openafs 1.6.1, ubuntu 12.04, particular network

2014-04-18 Thread Jeffrey Hutzelman
On Sun, 2014-04-13 at 18:19 +0200, Liza M wrote:
 Hello, 
 
 I am having a rather interesting problem with openafs 1.6.1 on Ubuntu 12.04: 
 on one particular network I do not seem to be able to work with files larger 
 than ~ 1.4 kB. 

 When trying to e.g. copy or open larger files, the afs process stalls
 and I am not even able to kill -9 it.  
 
 On the other hand, the same operation (copy or open large files with
 the same client and host machines) performed on other networks works
 fine. 
 
 
 I need to work with afs on the particular network on which it is stalling,
 but have no good ideas on how to proceed. Do you have any ideas on how
 to debug and/or get afs to work on the network on which it is getting
 stalled? I pass details on my setup and fstrace logs below.

This sounds like an MTU problem -- your connection to the network in
question includes a segment with a lower MTU than those to which the
fileserver and client are directly connected.  Perhaps there is a VPN or
other tunnel in between?

Try running this command (as root) on the client:

ip link set dev eth0 mtu 1200

Replace eth0 with the name of the relevant interface.


If this makes the problem go away, you know that the problem is that
some portion of the network is not passing packets above the configured
size.
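As a back-of-the-envelope check on why ~1.4 kB would be the magic size, an Rx data packet must fit in a single UDP datagram below the path MTU. The header sizes below are an assumption for illustration (20-byte IPv4 header, 8-byte UDP header, and what I believe is the 28-byte Rx packet header); this is a sketch, not OpenAFS code:

```python
# Sketch: largest Rx payload that fits in one packet of a given path MTU.
# Header sizes are assumed values for illustration, not Rx constants.
def max_payload(path_mtu, ip_header=20, udp_header=8, rx_header=28):
    """Return the largest single-packet payload for the given MTU."""
    return path_mtu - ip_header - udp_header - rx_header
```

With a standard 1500-byte Ethernet MTU this gives a payload of 1444 bytes, i.e. roughly the ~1.4 kB threshold reported; a tunnel that shrinks the path MTU would push the threshold lower.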


-- Jeff

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Linux OpenAFS EncFS?

2014-02-17 Thread Jeffrey Hutzelman
On Mon, 2014-02-17 at 13:11 -0600, Troy Benjegerdes wrote:

 So $10k for design, and $100k for implementation sufficient to protect a 
 small business's data worth between $250k and $1M.

No, that's not what Jeff said.  What he said was that doing the design
and analysis work required to come up with an estimate could cost $10k.
I happen to think that's a bit high, but then, I'm not volunteering to
do it.

The cost of actually doing the work will be much higher, and will depend
on the design goals, including the threat model, and on how fast you
want it and what bells and whistles you want.

 Does that sound reasonable? Do you think a 10X scaling factor for data 
 protection is reasonable, as in $100K will protect data worth $1 million?

It doesn't work this way.  That's a reasonable way of estimating how
much you're willing to pay for some sort of protection, but not of
estimating how much it's actually going to cost.  If $100k is what
you're willing to pay, and you can find someone willing to do the work,
then you'll get $100k worth of protection.  I can't begin to guess what
that would look like, but whether it is sufficient to protect your $1M
asset is something you have to figure out for yourself.  I recommend
making sure your $100k contract includes a clear statement of work.


 If it's going to take a year, I should have plenty of time to figure out 
 how big of a mining farm I need to make the money to pay for it :P

Lest someone become confused... It doesn't work that way, either.
Software developers need to eat more than once a year, so on a project
this size, they'll expect a payment schedule that allows them to do so.

-- Jeff



Re: [OpenAFS] Re: DB servers quorum and OpenAFS tools

2014-01-24 Thread Jeffrey Hutzelman
On Fri, 2014-01-24 at 08:01 +, Simon Wilkinson wrote:
 On 24 Jan 2014, at 07:48, Harald Barth h...@kth.se wrote:
 
  You are completely right if one must talk to that server. But I think
  that AFS/RX sometimes hangs to loong on waiting for one server
  instead of trying the next one. For example for questions that could
  be answered by any VLDB. I'm thinking of operation like group
  membership and volume location.
 
 I have long thought that we should be using multi for vldb lookups,
 specifically to avoid the problems with down database servers. The
 problem is that doing so may cause issues for sites that have multiple
 dbservers for scalability, rather than redundancy. Instead of each
 dbserver seeing a third (or a quarter, or ...) of requests it will see
 them all. Even if the client aborts the remaining calls when it
 receives the first response, the likelihood is that the other servers
 will already have received, and responded to, the request.
 
 There are ways we could be more intelligent (for example measuring the
 normal RTT of an RPC to the current server, and only doing a multi if
 that is exceeded).  But we would have to be very careful that this
 wouldn't amplify a congestive collapse.

The thing is, the OP specifically wasn't complaining about the behavior
of the CM, which remembers when a vlserver is down and then doesn't talk
to it again until it comes up, except for the occasional probe.

The problem is the one-off clients that make _one RPC_ and then exit.
They have no opportunity to remember what didn't work last time.  It
might help some for these sorts of clients to use multi, if they're
doing read-only requests, and probably wouldn't create much load.
However, for a call that results in a ubik write transaction, I'm not
entirely sure it's desirable to do a multi call.  That will require some
additional thought.


In the meantime, another thing that might be helpful is for clients
about to make such an RPC to query the CM's record of which servers are
up, and use that to decide which server to contact.  A quick VIOCCKSERV
with the fast flag set could make a big difference.

-- Jeff



Re: [OpenAFS] Re: DB servers quorum and OpenAFS tools

2014-01-23 Thread Jeffrey Hutzelman
On Thu, 2014-01-23 at 10:44 -0600, Andrew Deason wrote:


  For example in an ideal world putting more or less DB servers in
  the client 'CellServDB' should not matter, as long as one that
  belongs to the cell is up; again if the logic were for all types
  of client: scan quickly the list of potential DB servers, find
  one that is up and belongs to the cell and reckons is part of
  the quorum, and if necessary get from it the address of the sync
  site.

The problem is that you want the client to scan quickly to find a server
that is up, but because networks are not perfectly reliable and drop
packets all the time, it cannot know that a server is down until that
server has failed to respond to multiple retransmissions of the request.
Those retransmissions cannot be sent quickly; in fact, they _must_ be
sent with exponentially-increasing backoff times.  Otherwise, when your
network becomes congested, the retransmission of dropped packets will
act as a runaway positive feedback loop, making the congestion worse and
saturating the network.
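The capped exponential backoff described above can be sketched as follows (the constants here are assumed for illustration, not the actual Rx timer values):

```python
# Illustrative sketch of capped exponential backoff for retransmission
# timers: each retry waits longer than the last, up to a ceiling, so
# retransmissions back off instead of feeding network congestion.
def backoff_intervals(initial=0.35, factor=2.0, cap=60.0, attempts=8):
    """Return the successive retransmission delays, in seconds."""
    delays = []
    delay = initial
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay *= factor
    return delays
```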

-- Jeff



Re: [OpenAFS] Re: DB servers quorum and OpenAFS tools

2014-01-23 Thread Jeffrey Hutzelman
On Thu, 2014-01-23 at 14:58 +, Peter Grandi wrote:


 My real issue was 'server/CellServDB' because we could not
 prepare ahead of time all 3 new servers, but only one at a time.

 The issue is that with 'server/CellServDB' update there is
 potentially a DB daemon (PT, VL) restart (even if the rekeying
 instructions hint that when the mtime of 'server/CellServDB'
 changes the DB daemons reread it) and in any case a sync site
 election.

 Because each election causes a blip with the client I would
 rather change the 'server/CellServDB' by putting in extra
 entries ahead of time or leaving in entries for disabled
 servers, to reduce the number of times elections are triggered.
 Otherwise I can only update one server per week...

There's not really any such thing as a new election.  Elections happen
approximately every 15 seconds, all the time.  An interruption in
service occurs only when an election _fails_; that is, when no one
server obtains the votes of more than half of the servers that exist(*).
That can happen if not enough servers are up, of course, but it can also
happen when one or more servers that are up are unable to vote for the
ideal candidate.  Generally, the rule is that one cannot vote for two
different servers within 75 seconds, or vote for _any_ server within 75
seconds of startup.


As a practical matter, what this means when restarting database
servers for config updates is that you must not restart them all at the
same time.  You _can_ restart even the coordinator without causing an
interruption in service longer than the time it takes the server to
restart (on the order of milliseconds, probably).  Even though the
server that just restarted cannot vote for 75 seconds, that doesn't mean
it cannot run in _and win_ the election.  However, after restarting one
server, you need to wait for things to completely stabilize before
restarting the next one.  This typically takes from 75-90 seconds, and
can be observed in the output of 'udebug'.  What you are looking for is
for the recovery state to be f or 1f, and for the coordinator to be
getting yes votes from every server you think is supposed to be up.

Of course, you _will_ have an interruption in service when you retire
the machine that is the coordinator.  At the moment, there is basically
no way to avoid that.  However, if you plan and execute the transition
carefully, you only need to take that outage once.



(*) Special note:  The server with the lowest IP address gets an extra
one-half vote, but only when voting for itself.  This helps to break
ties when the CellServDB contains an even number of servers.
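The election arithmetic above can be written out as a small check (a hypothetical helper for illustration, not OpenAFS code): a candidate wins if its votes exceed half the number of servers in the CellServDB, counting the extra half vote when the lowest-address server votes for itself.

```python
# Sketch of the Ubik quorum rule: strictly more than half the total,
# with a half-vote tiebreaker for the lowest-IP server voting for itself.
def has_quorum(total_servers, votes_for_candidate, candidate_is_lowest_ip):
    votes = votes_for_candidate + (0.5 if candidate_is_lowest_ip else 0.0)
    return votes > total_servers / 2.0
```

For example, with 6 servers in the CellServDB, 3 votes are not enough (3 is not more than 3), but 3 votes for the lowest-address candidate are (3.5 > 3), which is why the transition scenarios discussed below hinge on which server holds the lowest address.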


 Ideally if I want to reshape the cell from DB servers 1, 2, 3 to
 4, 5, 6, I'd love to be able to do it by first putting in the
 'server/CellServDB' all 6 with 4, 5, 6 not yet available, and
 only at the end remove 1, 2, 3. Which does not play well with the
 quorum (if one of the 3 live servers fails) :-) so I went halfway.

This doesn't work because, with 6 servers in the CellServDB, to maintain
a quorum you must have four servers running, or three servers if one of
them is the one with the lowest address.  In fact, you can't even
transition safely from three to four servers, because once you have
four servers in your CellServDB, if the one with the lowest address goes
down before the new server is brought up, you'll have two out of four
servers up and no quorum.  

However, you can safely and cleanly transition to and from larger
numbers of servers, one server at a time.  Just be sure that before you
start up a new server, every existing server has been restarted with a
CellServDB naming that server.  Similarly, make sure to shut a server
down before removing it from remaining servers' CellServDB files.

At one point, I believe I worked out a sequence involving careful use of
out-of-sync CellServDB files and the -ubiknocoord option (gerrit #2287)
to allow safely transitioning from 3 servers to 4.  However, this is not
recommended unless you have a deep understanding of the election code,
because it is easy to screw up and create a situation where you can have
two sync sites.


I also worked out (but never implemented) a mechanism to allow an
administrator to trigger a clean transition of the coordinator role from
one server to another _without_ a 75-second interruption.  I'm sure at
some point that we'll revisit that idea.

-- Jeff



Re: [OpenAFS] compilation problem for release 1.7.28

2014-01-20 Thread Jeffrey Hutzelman
On Mon, 2014-01-13 at 16:11 -0800, Wojciech Tadeusz Fedorko wrote:
 Hello,
 Tried compiling release 1.7.28 on a Ubuntu box:

The latest stable OpenAFS release for non-Windows platforms is 1.6.5.2
(though 1.6.6 is due out very soon).  1.7.x releases are for Windows
only.





Re: [OpenAFS] Extract files from /vicepa

2014-01-17 Thread Jeffrey Hutzelman
On Fri, 2014-01-17 at 16:52 +0100, Stephan Wiesand wrote:
 On 2014-01-17, at 16:43, Coy Hile coy.h...@coyhile.com wrote:
 
  
  
  I have a perl script from 2005 that could do this - but only for pure r/w 
  volumes. If there's a backup or readonly clone on the same partition, it 
  will probably fail miserably. It's not polished, may have to be adapted to 
  current perl versions etc. And I think it recovered nothing but the file 
  content and the path, not mode/owner/ACLs...
  
  Setting up a server is certainly the better option and may well be easier 
  and faster. But if you're desperate enough, let me know.
  
  along not completely dissimilar lines…
  
  I’ve currently got a bunch of old data (couple hundred gigs maybe) from vos 
  dump that I’d like to be able to examine to see exactly what’s there 
  anymore. Right now, my personal cell lives on a couple VMs out in various 
  public clouds, and I haven’t got around to standing up a fileserver inside 
  the firewall yet.
  
  is there a tool (preferably stand-alone) that I could run on those old 
  dumps to copy the data out of them into a local directory on, say, my mac.  
  Then I can copy whatever of it I want to keep back into AFS later.
 
 afsdump_scan and afsdump_extract are part of the OpenAFS source tree
 but not built by default. They build fine (there are make targets for
 them) and basically work. There's an improved version by jhutz in
 /afs/grand.central.org/software/dumpscan/dumpscan-1.2.tar.gz that was
 more difficult to build IIRC.

The versions in the openafs source are ancient copies that were put
there as part of an effort to develop a test suite.  They are not up to
date and contain some significant bugs.  I asked Derrick at the time not
to fork this, but he chose to do so anyway.

The latest released version is 1.2, but I more or less have enough
changes to do a 1.3 release, including a couple of bug fixes, a couple
of new features, and somewhat better ability to build against newer
OpenAFS.  The build system is not terribly polished, but the tools do
work.  A CVS repository can be found in
/afs/cs.cmu.edu/project/systems-jhutz/Repository in the 'dumpscan'
module.

I suppose if sufficiently prodded, I could probably be convinced to
convert this to git, post it someplace more easily accessible, and do a
release.

-- Jeff



Re: [OpenAFS] Re: Extract files from /vicepa

2014-01-17 Thread Jeffrey Hutzelman
On Fri, 2014-01-17 at 14:41 -0600, Andrew Deason wrote:
 On Fri, 17 Jan 2014 19:57:55 +0100
 Stephan Wiesand stephan.wies...@desy.de wrote:
 
  In a perfect world, Andrew would now pick up your CVS repository,
  merge the improvements into the github one he mentioned, and start
  submitting the results to gerrit.openafs.org.
 
 Do you mean under openafs.git, or something else? I plan on looking at
 that CVS repo and putting the changes into git somewhere, but I hadn't
 yet thought that it would go into openafs.

So, part of the problem with just merging the improvements is that
that github repository doesn't contain the complete history; it was
created by importing the 1.2 tarball.

It would certainly be possible for someone to merge the changes into
OpenAFS, but I'd rather not.  I do not subscribe to the notion that
every AFS-related tool needs to be part of the OpenAFS distribution.


  I'd love to see the state of the art of this being part of our regular
  OpenAFS releases. Obviously, there's a real need for these tools.
  
  Are there any licensing obstacles?
 
 No, the licensing seems pretty permissive. I thought there was some
 desire to have some of the simpler tooling separate from OpenAFS itself,
 like some of the other stuff in openafs-contrib. The README text makes
 it pretty clear that at least the original authors wanted that; I
 thought that jhutz and some users might agree with that, as well.

In the case of dumpscan, I _am_ the original author.  The code was
originally written prior to the release of OpenAFS, and was intended to
be distributed separately and not depend on AFS.  It does contain rx and
com_err dependencies today, but the former is needed only for a feature
that could easily be made optional, and the latter can be satisfied from
other sources.



In any case, I've been asked to produce a real repository, and will try
to do so soon.  At that point, it should be fairly easy to merge in the
changes that are in the openafs-contrib repo, and do a release.  In the
meantime, as always, patches are welcome.

-- Jeff



Re: [OpenAFS] Re: DB servers quorum and OpenAFS tools

2014-01-17 Thread Jeffrey Hutzelman
On Fri, 2014-01-17 at 14:12 -0600, Andrew Deason wrote:



 time, so presumably if we contact a downed dbserver, the client will not
 try to contact that dbserver for quite some time.

To elaborate: the cache manager keeps track of every server, and
periodically sends a sort of ping to each server to find out which
servers are up.  So, it will discover a server is down even if you're
not using it.  And, other than the periodic pings, the cache manager
will never direct a request to a server it thinks is down.  So, failover
for the CM itself is automatic, persistent, and often completely
transparent.

The fileserver works a little differently, but also keeps track of which
server it is using, fails over when that server stops responding, and
generally avoids switching when it doesn't need to.

Ubik database servers all communicate among themselves, which is a
necessary part of the database replication mechanism.  That happens even
when one server is down, but in such a way that you'll never notice a
communication failure between dbservers except in an unusual combination
of circumstances which can sometimes happen if a server goes down while
you are making a request that requires writing to the database.



I have a single-host test OpenAFS cell with 1.6.5.2, and I
have added a second IP address to '/etc/openafs/CellServDB'
with an existing DNS entry (just to be sure) but not assigned
to any machine: sometimes 'vos vldb' hangs for a while (105
seconds), doing 8 attempts to connect to the down DB server;
 
 I'm not sure how you are determining that we're making 8 attempts to
 contact the down server. Are you just seeing 8 packets go by? We can
 send many packets for a single attempt to contact the remote site.

Right.  Even though AFS communicates over UDP, which itself is
connectionless, Rx does have the notion of connections and includes a
full transport layer including retransmission, sequencing, flow control,
and exponential backoff for congestion control.  What you are actually
seeing is multiple retransmissions of a request, which may or may not be
the first packet in a new connection.  The packet is retransmitted
because the server did not reply with an acknowledgement, and the
intervals get longer because of exponential backoff, which is a key
factor in making sure that congested networks eventually get better
rather than only getting worse.
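To see how 8 sends can turn into a hang on the order of the ~105 seconds reported above, add up the exponentially growing gaps between them (the timer values below are assumptions for illustration, not the real Rx constants):

```python
# Rough sketch: total time spent waiting across N sends of one request,
# with the gap between successive sends doubling each time.
def total_wait(first_timeout, factor, sends):
    t, total = first_timeout, 0.0
    for _ in range(sends - 1):   # gaps between successive sends
        total += t
        t *= factor
    return total
```

With a 1-second initial timeout doubling each retry, 8 sends span 127 seconds of waiting, which is the right order of magnitude for the observed hang.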


-- Jeff



Re: [OpenAFS] Re: DB servers quorum and OpenAFS tools

2014-01-17 Thread Jeffrey Hutzelman
On Fri, 2014-01-17 at 14:21 -0600, Andrew Deason wrote:
 On Fri, 17 Jan 2014 18:50:13 +
 p...@afs.list.sabi.co.uk (Peter Grandi) wrote:
 
  Planned to do this incremental by adding a new DB server to the
  'CellServDB', then starting it up, then removing the an old DB
  server, and so on until all 3 have been replaced in turn with
  new DB servers #4, #5, #6.
  
  At some point during this slow incremental plan there were 4
  entries in both 'CellServDB's and the new one had not been
  started up yet, and would not be for a couple days.
 
 Oh also, I'm not sure why you're adding the new machines to the
 CellServDB before the new server is up. You could bring up e.g. dbserver
 #4, and only after you're sure it's up and available, then add it to the
 client CellServDB. Then remove dbserver #3 from the client CellServDB,
 and then turn off dbserver #3.

Yup; that's the sane thing to do.  New servers should be in service
before you publish them in AFSDB or SRV records or in clients'
CellServDB files, and old servers should not be removed from service
until after they have been unpublished and all the clients you care
about have picked up the change.

 You would need to keep the server-side CellServDB accurate on the
 dbservers in order for them to work, but the client CellServDB files can
 be missing dbservers. This won't work if a client needs the sync-site,
 and the sync-site is missing from the CellServDB, but in all other
 situations, that should work fine.

This is what gerrit #2287 is about.  It adds a switch that will allow
you to configure your dbservers so that they will not be elected
coordinator.  Unpublished servers should be run with this switch, or
configured as non-voting servers, so that they don't become sync site.

Unfortunately, progress on getting that merged has been stalled for a
while, in no small part because there are changes still needed and a
related patch required significant rework, and I haven't had time to
touch this stuff in a few months.  So in the meantime, the best you can
do is ensure that the unpublished server will not become sync site by some
combination of careful selection of the IP addresses involved, careful
monitoring and management of the election process, and/or marking the
unpublished server as nonvoting.  Some care is required for nonvoting
servers, as in theory all dbservers must agree on who the voting servers
are.  Some mismatches are possible and even safe, but figuring out
which those are and what the behavior will be requires a thorough
understanding of what checks are done and how the voting process works.


-- Jeff



Re: [OpenAFS] Re: Ubik trouble

2014-01-14 Thread Jeffrey Hutzelman
On Mon, 2014-01-13 at 23:22 -0600, Andrew Deason wrote:
 On Mon, 13 Jan 2014 12:32:12 -0500
 Jeffrey Hutzelman jh...@cmu.edu wrote:
 
  A worse situation arises when server A makes an RPC to server B, but the
  best route from server B back to the original source address goes via a
  different interface than the request came in on.  In this situation, the
  kernel will assign the wrong source address to server B's outgoing
  reply, which may cause Rx on server A to drop it on the floor.
 
 But we ignore the source address when the multihoming bit is set in the
 epoch.

Unfortunately, this behavior has changed a few times.  There are
actually several tests:

- On a client-mode connection, the source address is always ignored.
  This actually should have the effect of making small requests like
  votes always work.  But for some reason it doesn't.

- Both the source address and port are ignored if the epoch multihome
  bit is set.  This happens on both client- and server-mode connections,
  except that for a period of about 2 years starting in 2004, it
  happened on client-mode connections only.

So you're right; the exact scenario I described, where a packet is
dropped by the calling client due to a mismatch of the server's address,
shouldn't happen.  The practical effect of this is that it is possible
for voting to work fine, because that's a single-round-trip operation,
while larger calls such as transferring a database update fare not so
well (or consistently).


 But all processes (that use rxkad) set the multihoming bit. Unless you
 are talking about something else? I don't even see where a process would
 manually set or clear the multihoming bit, unless it manually set the rx
 epoch, and nobody does that. The 'switch' is always flipped (or always
 not flipped, I assume, if you go back far enough).

rx sets the multihome bit by default only in kernel mode.  In user mode,
it is not set.  As it turns out, you're right -- the multihome bit is
also set by rxkad, not only for the current connection but for all
future connections, whenever a new connection is set up.  That code has
been there since AFS 3.1, but I've never noticed it before in all that
time.

This is rather significant, because it means that, except for that
two-year period 10 years ago, we should never have this sort of
multi-homing problem.  Ever.  And yet that clearly has not been the
case.  Blargh.



OK; sorry, Harald.  It seems I can't explain what you've seen after all.

-- Jeff



Re: [OpenAFS] Re: Ubik trouble

2014-01-13 Thread Jeffrey Hutzelman
On Mon, 2014-01-13 at 15:00 +0100, Harald Barth wrote:

 (1) I had an old NetInfo file with a wrong IP addr lying around. This
 did _not_ prevent the server from starting, nor prevent sync completely.
 The protection server synced fine and the volume location server
 refused.

The NetInfo and NetRestrict files serve as filters on the actual set of
addresses found by enumerating interfaces.  Mentioning an address that
the machine does not have has no effect.

 (2) I have a machine where the database server is known as X.Y.Z.43
 but the machine's primary IP is X.Y.Z.46. This seems to work well
 until something somewhere checks the source address of the traffic
 when sync is tried. Result: The protection server synced fine and the
 volume location server refused. 


I'm not sure why your vlserver and ptserver are behaving differently,
unless they are different versions or you have some port-specific filter
or the like.

When multi-homed Ubik servers are used, the CellServDB used by the Ubik
servers must list each server exactly once.  Further, each server's
CellServDB must use the same set of servers; it won't work to have a
server identified by one address in one copy of the file and a different
address in another copy.  The CellServDB files used by clients and
fileservers can list every address for every server, though getting a
fileserver and Ubik server on the same machine not to use the same
CellServDB can be... challenging.

The way Ubik takes advantage of multi-homed servers is to dynamically
discover the additional addresses of each server.  Whenever a server
starts, it exchanges addresses with each other server, or at least the
ones that are actually up.  Once this is done, each of those servers is
able to contact the other using any of its addresses.  However, only one
address is used at a time -- Ubik doesn't start trying a new address for
a multi-homed peer until the one it's been using stops working.

Like over-the-network communication in AFS, Ubik server-to-server
communication is done using Rx.  Particularly, the voting protocol is
based on each candidate making an RPC to each other server; the vote
is encoded as the return value of that RPC.  What that means is that a
server has no opportunity to try sending its votes to multiple
addresses; it can only send one response, which necessarily goes to the
address that made the RPC.  So, if you have a network condition which
blocks traffic between two servers in only one direction, voting will
not work.  However, this normally will sort itself out, at least
partially, because the server making the Beacon RPC will see this as a
timeout and treat the other server as down.

A worse situation arises when server A makes an RPC to server B, but the
best route from server B back to the original source address goes via a
different interface than the request came in on.  In this situation, the
kernel will assign the wrong source address to server B's outgoing
reply, which may cause Rx on server A to drop it on the floor.  This is
the problem that -rxbind is designed to work around, at the expense of
the server not really being multi-homed, at least as far as AFS is
concerned.  Whether this problem arises depends on your network
topology, but generally, you will have problems any time server B has
multiple interfaces whose best route from A uses the same outgoing
address.  This includes cases where one server has multiple addresses or
interfaces on the same subnet.



The sad truth is that in order to properly support multi-homed hosts, Rx
needs to be fixed so that it identifies all available interfaces, binds
a separate socket for each interface, and keeps track of to which
interface an incoming connection belongs, so that it can send responses
out the same interface.  This approach is necessarily used by all major
UDP-based services (e.g. DNS, NTP, DHCP), as it is the only way to
ensure correct behavior on a multi-homed host.
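The socket-per-interface approach described above can be sketched like this (illustrative only; the addresses are placeholders and this is not OpenAFS code):

```python
# Sketch: bind one UDP socket per local address, so a reply can be sent
# from the same address the request arrived on, regardless of which
# interface the kernel's route selection would otherwise pick.
import socket

def make_sockets(local_addrs, port):
    """Return a dict mapping each local address to a UDP socket bound
    to it; send replies using the socket the request arrived on."""
    socks = {}
    for addr in local_addrs:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind((addr, port))
        socks[addr] = s
    return socks
```

A server built this way selects the socket on which a request arrived when replying, so the reply's source address always matches the address the client contacted.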

-- Jeff



Re: [OpenAFS] Re: Ubik trouble

2014-01-13 Thread Jeffrey Hutzelman
On Tue, 2014-01-14 at 00:55 +0100, Harald Barth wrote:

  The sad truth is that in order to properly support multi-homed hosts, Rx
  needs to be fixed so that it identifies all available interfaces, binds
  a separate socket for each interface, and keeps track of to which
  interface an incoming connection belongs, so that it can send responses
  out the same interface.
 
 I don't know if it has to be sent out that interface (normally it
 probaby will) but the responses need to have that source adress if I
 understand that right.

Yup.  The usual approach is to bind a socket per interface address, and
send using the socket with the address you want to use.  You're right
that the packet might not actually go out that interface, depending on
how the kernel is doing route selection.  Classically, only the
destination address was considered when selecting a route, but these
days there are many more options available.


 Currently I have no multihomed Ubik servers (besides from the one that
 should not have been, see above) and very few multihomed file servers.
 So I can not say if the 'rx breaks when routing is asymmetric'
 behaviour has given us any trouble for fileservers. At least not as
 notable as this Ubik problem where I have shot myself in the foot real
 good ;-)

It's not as noticeable for fileservers, because in the name of
supporting multihoming, fileservers and cache managers flip a switch
that makes Rx ignore the source address on incoming packets in certain
cases (and depending on which version you're running).

-- Jeff



Re: [OpenAFS] About openafs discon mode

2013-12-20 Thread Jeffrey Hutzelman
On Fri, 2013-12-20 at 17:30 +0100, nicolas prochazka wrote:
 ok,
 is it possible to define cache entrie timeout by configuration or by
 hacking code ?

Not if you don't want corrupted files.  Callback lifetime is determined
by the fileserver, and the protocol requires that clients invalidate
cached metadata when the callback expires.  Failing to do this can
result in your cache becoming stale, which in turn can cause your client
to write old data from its cache over top of new data on the server.
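The rule can be modelled in a few lines (an illustrative toy, not the cache manager's real vcache code; all names are invented):

```python
import time

class CachedFile:
    """Toy model of a client cache entry guarded by a fileserver callback."""

    def __init__(self, data, callback_expires):
        self.data = data
        self.callback_expires = callback_expires  # absolute time, set by server

    def read(self, fetch_from_server, now=None):
        if now is None:
            now = time.time()
        if now >= self.callback_expires:
            # Callback expired: the cached copy may be stale, so refetch
            # before use rather than risk writing old data over new.
            self.data, self.callback_expires = fetch_from_server()
        return self.data
```

The point of the protocol requirement is exactly the refetch branch: skipping it lets a stale `data` survive past the expiry and later be written back over newer server-side contents.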

-- Jeff



Re: [OpenAFS] About openafs discon mode

2013-12-20 Thread Jeffrey Hutzelman
On Fri, 2013-12-20 at 18:02 +0100, nicolas prochazka wrote:
 I only use afs files in read only , so it should not be a problem,
 I cannot find this parameter ( cache entry timeout ) of two hours in code.

That's because there is no parameter.  As I said, vcache entries become
invalid at the callback expiration time provided by the fileserver.



Re: [OpenAFS] About openafs discon mode

2013-12-20 Thread Jeffrey Hutzelman
On Fri, 2013-12-20 at 20:11 +0100, nicolas prochazka wrote:
 ok,
 so discon mode cannot work ?

I didn't say that.  However, as it turns out, the cache manager appears
to be discarding volume-level information, such as the name-to-id
mappings you need to evaluate mount points.  What this means is that
traversing mount points may not work after a while.  As far as I can
tell from a quick examination of the relevant code, this is a real bug
for which the fix will be nontrivial.

 and is it possible to define callback expiration time ?  ( hacking code is
 a solution for me, 2h=24h)

You can have the fileserver give out longer callbacks, but I don't
recommend significant increases unless your cell has a very small number
of clients.  Callback expiration is designed to bound the amount of data
the fileserver needs to keep about outstanding callbacks, and increasing
that bound will affect the server's memory usage.
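As a back-of-the-envelope illustration of that bound (all numbers invented; the real per-callback record size depends on the server version):

```python
def callback_memory_bytes(clients, callbacks_per_client, record_bytes=64):
    """Rough upper bound on fileserver memory spent tracking callbacks.
    Longer callback lifetimes keep more callbacks outstanding at once,
    pushing actual usage toward this bound.  Figures are illustrative."""
    return clients * callbacks_per_client * record_bytes
```

For example, 10,000 clients each holding callbacks on 1,000 files at ~64 bytes per record is already on the order of 640 MB.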


In practice, it looks like the problem you're seeing is likely related
to the expiration of whole-volume callbacks on readonly volumes.  You
may be able to increase the whole-volume callback time beyond the
default 2 hours, but again, I'd be careful about this.  It's not
necessary for online operation, and will have an effect on server memory
usage.

-- Jeff




Re: [OpenAFS] Re: How to remove a bogus (127.0.1.1) server entry for readonly?

2013-12-11 Thread Jeffrey Hutzelman
On Tue, 2013-12-10 at 10:15 -0800, Russ Allbery wrote:
 Coy Hile coy.h...@coyhile.com writes:
  On 12/10/13, 4:10 AM, Harald Barth h...@kth.se wrote:
 
  $ more hosts
  127.0.0.1    localhost
  127.0.1.1    peter.cae.uwm.edu   peter
 
  I know various Linux distributions do
  this by default, ...
 
  Somewhat off-topic, but am I the only one who thinks that
  Linux distributions doing this is utterly brain-dead?
 
 No.  I always argue vigorously against doing this sort of nonsense in
 Debian.

Yeah, this sort of thing is pretty terrible.  I understand why people
want to do it, but it breaks things pretty badly.  Fixing this has been
part of our standard (automated) OS install process for over a decade.

-- Jeff



Re: [OpenAFS] Help rekeying cell when both service principals (afs@REALM and afs/cell@REALM) exist

2013-11-24 Thread Jeffrey Hutzelman
On Thu, 2013-11-21 at 10:34 -0700, Kim Kimball wrote:

 I don't have direct access to the ancient Transarc clients for testing.  
 Always a wrinkle.  I've built some tools for the older platforms but 
 tools for _all_ the ancient *NIX clients are probably not reliably 
 included in that, nor do I expect I will have a build environment on the 
 oldest ... so I may not be able to update all client software to 1.6.5 
 unless I can (miraculously) get OS updates into the mix.

Note that you don't actually have to upgrade all of OpenAFS on the
client to get the benefits of the new behavior.  You actually only need
to upgrade aklog and whatever similar tools you're using.


 We may just decide to trust anyone on the campus network and shut down 
 access to AFS servers from non campus networks, but I'd rather get at 
 least the rxkad.keytab in place -- servers are all 1.6.5 so at least 
 that much should work if we/I don't do something vile to 
 /usr/afs/etc/KeyFile ...  and if I've read the documentation correctly 
 there is at least some significant advantage to getting rid of 
 single-DES private server keys ...

Yes, there is.  These days, DES keys are fairly cheap to brute-force, in
both time and money (about one day and $100), if you have a
corresponding plaintext/ciphertext pair.  So long-lived server keys are
an attractive target.
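The scale of that attack is easy to check (illustrative arithmetic only):

```python
def keys_per_second_to_sweep_des(days=1.0):
    """Key-trial rate needed to exhaust the 2**56 single-DES keyspace in
    the given number of days (worst case; expected cost is half that)."""
    return 2 ** 56 / (days * 86400)
```

Covering the keyspace in a day takes on the order of 10**12 trials per second, which is exactly the regime specialized cracking hardware operates in.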

-- Jeff



Re: [OpenAFS] Help rekeying cell when both service principals (afs@REALM and afs/cell@REALM) exist

2013-11-21 Thread Jeffrey Hutzelman
On Wed, 2013-11-20 at 18:05 -0500, Jeffrey Altman wrote:

 The underlying problem that Kim's cell has is that it is not permitted
 (or perhaps even physically possible) to upgrade the clients that issue
 the Kerberos afs service ticket request.  In this scenario the clients
 cannot be updated to support rxkad-kdf.  Nor can Kim assume that the
 clients understand how to use the afs/cellname@REALM syntax.

Yes, we established all of that.  What we've not established is whether
it is even possible to use non-DES service keys.  IBM AFS 3.6 clients
did not include a krb5-based aklog, so for clients of that vintage,
there is a distinct possibility that AFS service tickets are being
obtained via krb4 or kaserver.  If that is the case, non-DES service
keys _will not work_, as those protocols support only DES.

It is theoretically possible for a kaserver to issue tokens which are
really krb5 or rxkad-2b format, and which thus could use a non-DES
service key.  It is probably even possible to patch Heimdal's kaserver
to do this.  However, as far as I know, no such kaserver implementation
has ever existed.


 The other thing that Kim needs to test given the age of the clients is
 whether or not any of them suffer from an old bug that would result in
 an out of bounds array access if the service enctype has a value that is
 not recognized by the client.  If so, it may not be possible to deploy
 AES256-SHA1 enctypes.

Uh, I'm not aware of any such bug.  Can you provide a reference?
There _is_ a bug which could result in an out-of-bounds array access if
the returned token is too large, which could happen for some enctypes.
However, this is relatively unlikely if your client principal names
aren't too big.  We designed rxkad-2b such that everything would fit
within the smaller limit even with maximal-size client principal names,
but that was using DES, and the block size for the AES enctypes is
larger.


  The upgrade notes discuss the difference between 'rxkad-k5' and
  'rxkad-kdf' upgrades, and that the latter is the only one that
  permits getting rid of the single-DES enctypes for authentication.
 
 rxkad-k5 prevents the use of DES for service ticket encryption.
 rxkad-kdf provides a method of deriving a DES key from a non-DES key.
 In all cases, a 56-bit + parity key is used for the authentication
 challenge/response between an AFS RX connection initiator and the acceptor.

Correct.



Re: [OpenAFS] Help rekeying cell when both service principals (afs@REALM and afs/cell@REALM) exist

2013-11-20 Thread Jeffrey Hutzelman
On Mon, 2013-11-11 at 08:42 -0700, Kim Kimball wrote:

 I've got clients going back as far as Transarc 3.6 -- don't ask   
 there are clients that cannot be changed/rebooted/updated due to 
 extreme sensitivity to change.

What software are these ancient clients using to get tokens?  klog?
Something else?

In general, if they are using anything based on krb5 and/or krb524, you
can use a stronger service key enctype, no matter how old they are.  You
will need to arrange for your KDC to be willing to use DES _session_
keys, because these older clients can't handle anything else.

If they are using something based on krb4 or kaserver, then you have no
choice but to retain the DES service key.  In this case, IMHO you are
best off not changing any keys; as long as one AFS service principal has
an active DES key, you gain no security benefit by upgrading the other.


If both principals are in use, then they must have different kvnos.  The
KeyFile format is not capable of storing multiple keys with the same
kvno.
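A toy model of that constraint (not the real KeyFile code; the error message is mine):

```python
class KeyFile:
    """Toy model of the server KeyFile: at most one key per kvno."""

    def __init__(self):
        self._keys = {}

    def add(self, kvno, key):
        if kvno in self._keys:
            raise ValueError(
                "KeyFile cannot hold two keys with kvno %d; give the two "
                "principals different kvnos" % kvno)
        self._keys[kvno] = key

    def get(self, kvno):
        return self._keys[kvno]
```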


I see no benefit to you in using the afs/cellname form, if you still
have clients that will work only with the old form.  There are as yet no
clients that do not support the old-style principal name.  We have
continued to use that name exclusively here, as we've done for as
long as AFS has used Kerberos-based authentication.


-- Jeff



Re: [OpenAFS] Re: aklog error: unknown RPC error (-1765328184) while getting AFS tickets allow_weak_enctypes may be required in the Kerberos configuration

2013-11-11 Thread Jeffrey Hutzelman
On Fri, 2013-11-08 at 10:19 -0600, Andrew Deason wrote:

 Part of the protocol that OpenAFS uses for authenticated communication
 over the network uses a short-term DES key. Semi-recently, Kerberos
 implementations started not allowing DES to be used by default, to
 encourage people to not use DES, and to make the usage of DES more
 visible. With OpenAFS, you currently do not have a choice, and we must
 get a DES key from Kerberos, since that is the only thing the rxkad
 protocol allows.

You mean, unless you've upgraded your servers to 1.6.5 or newer, have
provisioned them with an rxkad.keytab containing non-DES service keys,
and are using a sufficiently recent aklog, such as the one from 1.6.5.
When those conditions are satisfied, you still end up using fcrypt, but
you don't need Kerberos tickets with DES keys.  See OPENAFS-SA-2013-003
for more information.  Visit https://www.openafs.org/security/ for a
list of OpenAFS security advisories including, in this case, detailed
instructions on deploying OpenAFS with non-DES keys.

Note that this doesn't change the fact that you are and will be using a
relatively weak modified DES for data encryption until rxgk is ready.
However, the point of rxkad-kdf is to eliminate the need for the KDC or
any part of Kerberos to know or care that you are using DES, which is
the cause of the error in question.


-- Jeff



Re: [OpenAFS] Windows and Mac auto-update?

2013-10-14 Thread Jeffrey Hutzelman
On Sat, 2013-10-12 at 12:51 -0400, step...@physics.unc.edu wrote:
 What's the current thinking (plans?) regarding auto-update functionality 
 for the Windows and Mac OpenAFS client packages?

No thanks.  The infrastructure from which these software packages are
distributed is operated on a volunteer basis using donated equipment,
facilities, and network service.  Such an update mechanism would likely
result in a significant increase in load which we are not prepared to
handle.

Unlike major operating systems and web browsers, OpenAFS does not
release security-critical updates every few weeks.

-- Jeff



Re: [OpenAFS] Re: Questions about multihoming servers

2013-10-02 Thread Jeffrey Hutzelman
On Wed, 2013-09-25 at 11:33 -0400, Jeffrey Altman wrote:

 All that logic does is IP address aliasing for the purpose of elections.
  However, it does not permit the use of multiple addresses.  UBIK does
 not distribute RPCs across all of the DISK_UpdateInterfaceAddr() listed
 addresses.  It always uses the address in the CellServDB.  AND you
 cannot put multiple addresses for a server in the server's CellServDB.

No.  It always _starts_ by using the address in the CellServDB.  Once
addresses have been exchanged, Ubik will switch to a different address
if the first one fails.  This affects both elections (the VOTE service)
and replication (the DISK service).


 Go back to the original posting.  The reason for adding multiple
 interfaces was to increase throughput on the server.

The OP doesn't say that.  He asks for an opinion on link aggregation vs
multihoming, either of which may be intended to provide increased
throughput, redundancy, or both.


   If only one
 address is used for UBIK replication that is not multihomed support.

Only one address _at a time_ is used.  That certainly qualifies not only
as supporting multihomed servers but as taking advantage of them.


 For the DB clients (cache manager, pts, vos, etc) which use a different
 CellServDB from the server's CellServDB it is possible to list all of
 the public addresses.  The same is true if DNS SRV and AFSDB records are
 used.  However, each address appears to the client as a unique server.
 This is fine for most situations but it also wasteful.  The DB clients
 do not have access to the list of registered addresses.

Nor do they really need such access.  Sure, if a connection goes down
you might prefer to try a different server rather than an alternate
address of the same server.  Of course, if the failed component is the
router leading to the unreachable interface, that may be exactly the
wrong strategy.  Fortunately, cache managers regularly check on dbserver
availability and will generally not send real requests to a server
already known to be down.


All of that said, in general, link aggregation should be preferred over
having a server with multiple interfaces on the same subnet.  The latter
arrangement offers very few benefits and generally does not result in
improved throughput.

-- Jeff



Re: [OpenAFS] Re: Questions about multihoming servers

2013-10-02 Thread Jeffrey Hutzelman
On Wed, 2013-09-25 at 11:42 -0500, Andrew Deason wrote:

 if 15640 still occurs, that's a bug

15640 was not a bug in OpenAFS when it was submitted 9 years ago, and
it's still not a bug in OpenAFS.  If you want multi-homed dbservers to
work, then the primary addresses listed for each server in Ubik's
CellServDB must be the one your operating system will actually use when
sending packets to other servers' primary addresses.  Otherwise, they
will not be recognized as coming from a legitimate server.

There are certainly things we could do to improve this situation, such
as extending the CellServDB format to allow providing Ubik with a
complete list of each server's addresses, or extending the address
exchange that happens when a new server starts up such that the initial
RPC does not have to be made from the new server's primary address.
However, the absence of those enhancements is not a bug and does not
mean that OpenAFS does not support multihomed database servers or that it
cannot take advantage of multiple addresses on such servers.

-- Jeff



Re: [OpenAFS] Re: Questions about multihoming servers

2013-10-02 Thread Jeffrey Hutzelman
On Wed, 2013-10-02 at 11:07 -0500, Andrew Deason wrote:
 On Wed, 02 Oct 2013 11:43:42 -0400
 Jeffrey Hutzelman jh...@cmu.edu wrote:
 
  On Wed, 2013-09-25 at 11:42 -0500, Andrew Deason wrote:
  
   if 15640 still occurs, that's a bug
  
  15640 was not a bug in OpenAFS when it was submitted 9 years ago, and
  it's still not a bug in OpenAFS.  If you want multi-homed dbservers to
  work, then the primary addresses listed for each server in Ubik's
  CellServDB must be the one your operating system will actually use
  when sending packets to other servers' primary addresses.  Otherwise,
  they will not be recognized as coming from a legitimate server.
 
 I don't see how that could be the case currently, or why that would be
 necessary. SDISK_UpdateInterfaceAddr does not pay attention to the
 address where the packets are actually coming from, only the addresses
 given in the RPC arguments. Is there something else you're thinking of
 that creates a limitation like you describe?

Hrm.  I'd have to do some further digging -- that analysis was based in
no small part on what I wrote in the ticket back in 2004, and it doesn't
look like there was ever a reply to that.  I agree that
SDISK_UpdateInterfaceAddr doesn't use the address of the incoming
connection and never has.



Re: [OpenAFS] Re: vos shadow to backup user homes

2013-08-26 Thread Jeffrey Hutzelman
On Mon, 2013-08-26 at 10:28 -0500, Andrew Deason wrote:
 On Sun, 25 Aug 2013 21:05:41 +0530 (IST)
 Shouri Chatterjee sho...@ee.iitd.ac.in wrote:
 
  I wanted to ask about vos shadow and whether it is being used as a
  solution on production systems to back-up user home directories.
 
 I believe it is, but I'll let others speak if they are doing so.

I'll point out at this point that vos includes both high- and low-level
operations in one command-line tool, and sometimes it is hard to tell
the difference.  The high-level operations are things like create, move,
backup, release, and remove, which are intended to be used for everyday
administration.


Low-level operations like changeloc, clone, delentry, shadow, and zap
are intended for unusual situations or as building blocks for use in
more complex operations.

"vos shadow" by itself was not intended as a backup mechanism of any
kind.  Rather, it was intended to be used in constructing a system that
would create and track entire shadow servers, which could be easily
brought online if a real server failed.  I never built that larger
system, but perhaps others have done so.


 Yes, one of the downsides of shadow volumes is that using them is not
 documented as well as other features, and they aren't tested as much. 

Because there actually is no such feature as a "shadow volume".  Manual
pages notwithstanding, the "vos shadow" command does not create a
shadow volume; it merely provides a way to do the same sequence of
cloning and forwarding full and incremental dumps that is used by the
move and copy commands, but without deleting the source volume or making
any changes to the VLDB.

-- Jeff




Re: [OpenAFS] reading files from problem volume

2013-08-22 Thread Jeffrey Hutzelman
On Thu, 2013-08-22 at 14:24 +, sabah s. salih wrote:
 Dear All,
  We have the following case. Is there a way we could recover files 
 from this volume please.
 
 
 # vos exam 536873829
 vsu_ClientInit: Could not get afs tokens, running unauthenticated.
 Could not fetch the information about volume 536873829 from the server
 : No such device
 Volume does not exist on server afs21.hep.manchester.ac.uk as indicated by 
 the VLDB
 
 Dump only information from VLDB
 
 kiran
 RWrite: 536873829
 number of sites - 1
server afs21.hep.manchester.ac.uk partition /vicepc RW Site
 [root@afs21 vicepcc]#
 
 [root@afs21 vicepcc]# ls /vicepcc/V0536873829.vol
 /vicepcc/V0536873829.vol

So, the VLDB says the volume is on /vicepc, but you salvaged a volume
on /vicepcc.  The fileserver is not very clever about the same volume
appearing on multiple partitions; if there _is_ a volume header
on /vicepc, then the server is likely trying to online that one instead
of the one on /vicepcc.



Re: [OpenAFS] scan client version

2013-08-01 Thread Jeffrey Hutzelman
On Thu, 2013-08-01 at 12:30 -0400, Jeffrey Altman wrote:


 The rxkad-kdf change does not get rid of 1DES.  It simply permits the
 afs cell key to be a non-1DES key.  All wire encryption and the actual
 rxkad challenge/response is still performed using 1DES.

Actually, that's not strictly true.  Using rxkad-kdf effectively does
eliminate use of DES.  As always, wire encryption and challenge/response
are performed using fcrypt, not DES.  Not that this should make anyone
feel better...

-- Jeff



Re: [OpenAFS] Re: Heimdal KDC bug mentioned in rekeying document

2013-07-30 Thread Jeffrey Hutzelman
On Tue, 2013-07-30 at 19:44 -0400, Jeffrey Altman wrote:
 On 7/30/2013 7:32 PM, Benjamin Kaduk wrote:
  On Tue, 30 Jul 2013, Jeffrey Altman wrote:
  
  This is an incorrect description.  The explicit problem occurs when the
  following combination is true:
 
  1. user has one or more strong enctype keys with non-default
 password salts
 
  2. the only keys with default password salts are weak enctypes
 
  3. preauth is required
  
  A bit off-topic (and feel free to go off-list), but I'm curious if there
  is anything that can be said in general to be a cause for the presence
  of non-default salts.
  
  Thanks,
  
  Ben
 
 Realm or principal renaming without updating the keys.  This is not
 specific to Heimdal.

Also, some realms contain keys that date back to when they were running
krb4; these have non-default salts, according to krb5's way of thinking.



Re: [OpenAFS] Re: Heimdal KDC bug mentioned in rekeying document

2013-07-26 Thread Jeffrey Hutzelman
On Fri, 2013-07-26 at 10:57 +0200, Sergio Gelato wrote:

 Speaking of which, is anyone known to be working on rxkad-kdf support for
 Heimdal's libkafs? I'd like kinit --afslog to do the right thing.

It's on my todo list, but I won't complain if someone else gets there
first.

-- Jeff



Re: [OpenAFS] Heimdal KDC bug mentioned in rekeying document

2013-07-25 Thread Jeffrey Hutzelman
On Thu, 2013-07-25 at 09:11 -0400, step...@physics.unc.edu wrote:
 Hi,
 
 In the cell rekeying instructions found at 
 http://openafs.org/pages/security/how-to-rekey.txt, there is a note for 
 sites using Heimdal KDCs. It mentions a bug present in certain versions 
 of the Heimdal KDC software which completely disables DES on the AFS 
 service principal when following the document's instructions.
 
 Is more information available about specific versions of the Heimdal KDC 
 software which exhibits this bug? The document mentions experimentally 
 verifying ticket acquisition, which seems wise. But also knowing the KDC 
 versions which have the bug would be beneficial.
 
 Anyone have this info? Should I post to a heimdal list instead?

The bug in question essentially means that issued service tickets will
always have the same service and session key enctypes, so you must
choose between sticking with DES and breaking all existing
token-acquiring clients which do not have the new rxkad-kdf code
introduced in OpenAFS 1.6.5 and 1.4.15.  If I correctly remember my trip
through the git repositories on Tuesday evening, the problem was most
recently fixed prior to Heimdal 1.5.0, so if you are running that
version you should not have a problem.

To test, first perform the upgrade as described, but be careful that the
new key set includes DES keys.  A Heimdal KDC will not issue tickets
with DES session keys if the service does not have a DES key in the
Kerberos database.  Once you've installed the rxkad.keytab files on all
of your servers and made the new keys available in the Kerberos
database, get fresh tickets and run aklog to get AFS tokens.  Then run
'klist -v' and look at the entry for your AFS tickets.  If you have an
entry like the one below, showing both a non-DES "Ticket etype" and a
DES "Session key", then everything is working.  If it shows only a DES
"Ticket etype" and no separate "Session key" line, then your KDC has the
bug.


Example klist -v output (partial):
 Server: a...@cs.cmu.edu
 Client: jh...@cs.cmu.edu
 Ticket etype: des3-cbc-sha1, kvno 2
 Session key: des-cbc-crc
 Ticket length: 237
 Auth time:  Jul 25 11:55:20 2013
 Start time: Jul 25 11:55:21 2013
 End time:   Jul 26 13:21:41 2013
 Ticket flags: transited-policy-checked, pre-authent, proxiable, forwardable
 Addresses: addressless
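That check could be scripted against klist -v output like the sample above (hypothetical parsing; it assumes Heimdal's "Ticket etype:" and "Session key:" labels, so adjust for other klist variants):

```python
def kdc_has_session_key_bug(klist_entry):
    """Inspect one 'klist -v' entry (as text) for the Heimdal bug where
    issued tickets always share ticket and session key enctypes."""
    ticket = session = None
    for line in klist_entry.splitlines():
        line = line.strip()
        if line.startswith("Ticket etype:"):
            ticket = line.split(":", 1)[1].split(",")[0].strip()
        elif line.startswith("Session key:"):
            session = line.split(":", 1)[1].strip()
    if ticket is None:
        raise ValueError("no 'Ticket etype:' line found")
    # Buggy KDC: a single-DES ticket etype with no separate session key line.
    return session is None and ticket.startswith("des-")
```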



I'm afraid I can't say exactly which versions are affected.  Searching
through the tree I was able to find the bug fixed at least twice, once
in 1997 and once in 2011.  It was first reintroduced sometime in 1998 or
1999, but the comments on the 2011 commit lead me to believe that in the
interim, it was at one point fixed and then reintroduced again.  So,
there are likely at least three ranges of heimdal versions which contain
this bug, the most recent of which ends prior to version 1.5.0.

-- Jeff



Re: [OpenAFS] Re: OpenAFS 1.7.26 windows and not changed AFS service principle - OK?

2013-07-25 Thread Jeffrey Hutzelman
On Thu, 2013-07-25 at 11:38 -0500, Andrew Deason wrote:
 On Thu, 25 Jul 2013 11:36:52 -0400 (EDT)
 Benjamin Kaduk ka...@mit.edu wrote:
 
  and in the absence of other information, the KDC should not assume
  that a service supports an enctype for which it has no long-term key.
 
 After thinking about this, it seems like we could make this more robust,
 if the KDC doesn't do this. The behavior we're desiring is that a KDC
 just _prefers_ using session key enctypes where it has an associated
 long-term key, if the client doesn't specify an enctype.

Huh?  No, the client doesn't specify an enctype; it provides a list of
the enctypes it supports.  If the list is empty, the authentication will
fail.  At the API layer, Kerberos libraries generally offer the ability
for an application not to specify particular enctypes; what this means
is that the library sends a list of everything it supports (or, in some
circumstances, perhaps the intersection of everything it supports with
everything in its keytab).

The text in RFC4120 is unfortunately scattered and a bit vague, but the
intent is that the KDC must select an enctype from the client-provided
list.  Further, it must select an enctype which is supported by the
target service.  Both MIT and Heimdal determine this based on the list
of enctypes stored for that service in the Kerberos database.  So, the
selected session key must use an enctype that is both on the client's
list _and_ in the service's list of long-term keys.
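That selection rule can be sketched as follows (a deliberate simplification; real KDCs apply additional policy, such as the MIT DES-CBC-MD5 assumption mentioned below):

```python
def pick_session_enctype(client_enctypes, service_key_enctypes):
    """Choose a session key enctype: the first one the client offered
    that the service also has a long-term key for.  None means the
    authentication fails (empty intersection)."""
    for etype in client_enctypes:
        if etype in service_key_enctypes:
            return etype
    return None
```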



 if a client specifically requests e.g. a DES session key when the
 principal only has an AES long term key, we do get the DES session key
 (unless DES has been disabled kdc-wide or whatever).

This happens only with an MIT Kerberos KDC, which assumes that services
support DES-CBC-MD5 even when they have no keys of that type.  This is a
reasonable assumption because implementation of DES-CBC-MD5 is
mandatory.


However, this thread is about Windows, not MIT or Heimdal.




Re: [OpenAFS] Multi-homed server and NAT-ed client issues

2013-07-17 Thread Jeffrey Hutzelman
On Wed, 2013-07-17 at 17:43 +0300, Ciprian Dorin Craciun wrote:
 Hello all!  I've encountered quite a blocking issue in my OpenAFS
 setup...  I hope someone is able to help me... :)
 
 
 The setup is as follows:
 * multi-homed server with, say S-IP-1 (i.e. x.x.x.5) and S-IP-2
 (i.e. x.x.x.7), multiple IP addresses, all from the public range;

Things get much easier if you just use the actual names and addresses,
instead of making up placeholders.  Frequently, doing that sort of thing
hides critical information that may point to the source of the problem.
For example, in this case, Linux's choice of source IP address on an
outgoing UDP packet sent from an unbound socket (or one bound to
INADDR_ANY) will depend on the interface it chooses, which will depend
on the route taken, which depends on the server's actual addresses and
the network topology, particularly with respect to the client (or in
this case, to the public address of the NAT the client is behind).

You also haven't said what version of OpenAFS you're using, so I'll
assume it's some relatively recent 1.6.x.


 * the second IP, S-IP-2 (i.e. x.x.x.7), is the one listed in
 `NetInfo` and DNS record (and correctly listed when queried via `vos
 listaddrs`);
 * the first IP, S-IP-1 (i.e. x.x.x.5), is listed in
 `NetRestricted` (and doesn't appear in `vos listaddrs`);

So, the machine the fileserver runs on is multi-homed, but you're only
interested in actually using one of those interfaces to provide AFS
service?  In that case, you use the -rxbind option, which tells the
servers to bind to a specific address instead of INADDR_ANY.  That
option needs to be passed to each server process for which you want that
behavior.


 Thus my question is how can I resolve such an issue?

Besides -rxbind, there are a couple of other options, depending on which
components you control.  For example, if the NAT is your home router and
you only have one or two AFS clients behind it, you can assign those
clients static addresses on your inside network, and then configure your
router to remap the client-side addresses on both inbound and outbound
traffic, mapping each inside host's port 7001 to a different outside
port.  For example, my router (running OpenWRT) installs the following
rules:

### Static ports for AFS
for i in `seq 50 249` ; do
  iptables -t nat -A prerouting_wan  -p udp --dport $((7000+$i)) \
    -j DNAT --to 192.168.202.${i}:7001
  iptables -t nat -A postrouting_rule -o $WAN -p udp -s 192.168.202.$i \
    --sport 7001 -j MASQUERADE --to-ports $((7000+$i))
done
iptables -A forwarding_wan -p udp --dport 7001 -j ACCEPT

(in OpenWRT's default configuration, the 'forwarding_wan' and
'prerouting_wan' chains get called from the FORWARD and nat PREROUTING
chains, respectively, for traffic originating from the internet.  The
'postrouting_rule' chain gets called from the nat POSTROUTING chain for
all traffic).

So, when 192.168.202.142 sends traffic to a fileserver from port 7001,
it comes from the router's port 7142.  And inbound traffic to that port
gets sent back to 192.168.202.142 port 7001, regardless of where on the
Internet it came from or whether the router knows about the connection.
As you can see, I do this for a range of 200 addresses, which are the
ones my DHCP server hands out -- anyone who visits my house gets working
AFS, without keepalives, and even when talking to a multihomed server.
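The mapping in those rules is pure arithmetic on the inside host's last octet (assuming the 192.168.202.50-249 range from the script above):

```python
def outside_port(inside_host):
    """Stable outside UDP port for an inside client's AFS traffic:
    192.168.202.<i> port 7001 maps to outside port 7000+i, 50 <= i <= 249."""
    i = int(inside_host.rsplit(".", 1)[1])
    if not 50 <= i <= 249:
        raise ValueError("host outside the mapped DHCP range")
    return 7000 + i
```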






 
 I must say I've tried to `iptables -j SNAT ...` outgoing packets
 to the right S-IP-2, however this doesn't work because SNAT also
 changes the source port.  I've also tried to `-j NETMAP` these
 packets, but it doesn't work because NETMAP in the `OUTPUT` or
 `POSTROUTING` tables actually touch the destination...  Thus if
 someone knows of an `iptables`...

Well, you can give SNAT a specific port to use (its --to-source target
accepts an address:port pair).  Or, you can play games with routing
tables to give AFS traffic a routing table that doesn't include the
second interface.  But that's functionally equivalent to using -rxbind,
and a lot more work.

-- Jeff

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Re: [AFS3-std] Changing RXAFS_GetVolumeStatus access check to support volume lock down

2012-07-05 Thread Jeffrey Hutzelman
On Wed, 2012-07-04 at 11:14 -0400, Jeffrey Altman wrote:
 The RPC that is used to obtain the volume statistics from the file
 server is RXAFS_GetVolumeStatus.  This RPC returns a subset of the
 information displayed by vos examine volume but is intended for use
 by AFS clients.

Well, not entirely.  IIRC, it's quite a lot older than that, and
predates the concept of a volserver or volume location server.  In those
days, the fileserver was the _only_ server that touched volumes; doing
administrative volume operations required separate utilities on the
fileserver, and moving volumes must have been a royal pain.



 I believe that the permission check enforced by the file server is
 incorrect.  The correct permission check should be for PRSFS_LOOKUP and
 not PRSFS_READ.  If the client can enumerate the root directory of the
 volume it should be able to obtain the volume statistics.  Not that they
 are used any longer but what use is setting the Message of the Day and
 the Offline reason messages on a volume if a subset of the clients that
 are permitted to access the volume cannot read them?

In fact, the offline reason message is only ever set on an offline
volume, which the fileserver cannot even access.

I think it is fine to skip access control checks on this call entirely.
As you point out, the information available via this RPC is also
available to unauthenticated clients via the volserver.


I do not believe this is a standardization issue.  The meaning of some
access control bits _as they apply to vnodes_ must be standardized, as
clients rely on those bits when implementing access controls on cached
objects shared between users.  And of course, their representation on
the wire must be standardized in order for the tools and interfaces used
to manipulate vnode access controls to interoperate.

However, the precise application of access controls to non-cacheable
operations, volume-level operations, and administrative operations is
not standardized and does not need to be standardized in order to obtain
interoperability.  Thus, I believe the present question is entirely a
matter for the implementation and, perhaps, local policy.

As such, I've moved afs3-standardization to the Bcc line.  Please feel
free to move it back in replies, but only if you actively disagree with
my position that this is not a standardization issue.

-- Jeff

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Consensus Call - AFS3-Standardization Charter

2010-07-07 Thread Jeffrey Hutzelman

IMPORTANT:
This has gotten fairly lengthy, but please read through to the end.  This 
message contains important information on the future of AFS protocol 
standardization work, and a specific request for input from the AFS 
community (that is, YOUR input) within the next 2 weeks.


PLEASE send followups to afs3-standardizat...@openafs.org


Back in January of 2006, the afs3-standardizat...@openafs.org mailing list 
was created in order to provide a forum for discussion of the AFS protocols 
and particularly to coordinate extensions and changes to those protocols 
among the various implementations.  The first discussions in that vein 
started the following month, with Jeffrey Altman's proposal to define new 
GetCapabilities RPC's for each of the various RPC services.  Since then, 
there have been discussions on a wide variety of proposed extensions, some 
small and some much larger in scope.  Overall, I consider the mailing list 
to have been and continue to be a success.


Two years ago, at the AFS & Kerberos Best Practices Workshop at NJIT in 
Newark, NJ, there was some discussion about the prospect of establishing a 
more formal charter and process for the standardization group, and 
especially of insuring its independence from any one implementation.  After 
the workshop, Simon Wilkinson took a stab at writing such a charter, and 
sent his proposal to the afs3-standardization mailing list (see Simon's 
message to that list, dated 15-Jul-2008).  This prompted quite a lot of 
discussion and two additional drafts over the following couple of months. After 
the third draft, there was exactly one additional comment, and there has 
been no further discussion since.


It is my personal belief that there was general agreement within the 
community to move forward with Simon's draft as an initial charter for the 
standardization group.  However, there has been little progress in the last 
21 months.  Much of this is my fault -- I kept saying I was going to do 
something and then not getting around to it.  However, while the document 
hasn't been discussed much in the interim, my conversations during that 
time with various individuals, in person and online, lead me to believe 
that there is _still_ general agreement to proceed with Simon's draft.




So, here's what I'm going to do about it...

Simon's document calls for a bootstrapping process in which a registrar 
group is formed from the then-current registrar (myself) plus one 
representative from each current implementation (IBM, OpenAFS, kAFS, Arla) 
that cares to provide one.  The registrars would then serve as vote-takers 
in an initial election of two chairs as described in section 2.2.2 of the 
draft.


The initial bootstrapping of the registrars has already mostly taken place. 
Thomas Kula has agreed to serve as a registrar representing OpenAFS, and 
has held that position officially since the 2009 workshop.  Around that 
time, I asked IBM, kAFS, and Arla to nominate registrars, but I have yet to 
receive a response that resulted in an actual volunteer.  If any of those 
organizations wants to nominate someone, please contact me.  Otherwise, 
Thomas and I have already agreed that we will nonetheless increase the size 
of the registrar group to at least three and seek out a volunteer to fill 
the vacant position.  It is my hope that we can accomplish that by the end 
of the month.


The next step would seem to be the bootstrapping of the chairs.  However, 
we have a recursive-dependency problem here -- before we can use the 
election process defined in Simon's document with any confidence, we must 
be sure we have consensus among the community to use that document. 
However, lacking a chair, there is no formal means of determining consensus.

Chicken, meet Egg.

Simon's document itself proposes part of the solution to this problem, in 
the form of the last paragraph of section 3, which calls on the 
newly-formed group to develop, adopt, and publish its own charter.  To 
complete the solution, the registrars note that the first step (indeed, the 
first several steps) in electing new chairs rests in our hands.  Thus, we 
are taking the following actions:



(1) I have asked Simon to submit the latest version of his proposed charter
   in the form of an Internet-Draft.  That draft is now available at
   http://tools.ietf.org/html/draft-wilkinson-afs3-standardisation-00

(2) On behalf of the registrars, I am issuing this consensus call.  This
   is an attempt to elicit comments and to discover whether there is
   rough consensus in the AFS community to begin formalizing the protocol
   standards process as described in the draft named above.  I am asking
   everyone to review the proposed charter and send any comments to the
   mailing list, afs3-standardizat...@openafs.org, within the next 2
   weeks.

(3) On or shortly after Wednesday, July 21, 2010, the registrars will
   examine the comments received and make a determination as to whether
   we believe such a 

Re: [OpenAFS-devel] Re: [OpenAFS] Re: 1.6 and post-1.6 OpenAFS branch management and schedule

2010-06-21 Thread Jeffrey Hutzelman
--On Friday, June 18, 2010 04:17:19 PM -0400 Tom Keiser 
tkei...@sinenomine.net wrote:



On Fri, Jun 18, 2010 at 2:56 PM, Chas Williams (CONTRACTOR)
c...@cmf.nrl.navy.mil wrote:

In message 20100618093541.46bc13bc.adea...@sinenomine.net,Andrew
Deason writes:

It's pretty easy to make a supergroup if it's turned on; you may not
realize it's a specific feature to turn on. Once you have done so, your
ptdb is now incompatible with ptservers without supergroups enabled.


this might happen if you ran mis-matched servers.  but best practices
would tell you this is a bad idea.



I think it's considerably worse than that: let's suppose,
hypothetically, that it turns out there's a serious bug in the
supergroups code, and the easiest solution is to downgrade to a
non-supergroups enabled build.  Well, unless you know how to hex edit
your prdb to remove the group-in-group membership pointers, you're
effectively out of luck...


No; it's much worse than that.  Suppose you upgrade to a new version of 
OpenAFS and find there's a serious bug in the fileserver, or ubik, or rx. 
Or you missed some crucial process step and so it wasn't OK to upgrade. 
Or someone decides that your upgrade was the cause of their hard disk 
failing.  So now you want/have to downgrade.


Except the new version of AFS in question had supergroups enabled by 
default, and you didn't notice, and some user went and created a 
supergroup.  So now you can't back out, because your database is no longer 
compatible with what you were running before.  Perhaps you don't notice 
until you actually tried to run the old code, and it just didn't work.  You 
don't know why it's not working; you may not even notice right away that 
you don't have a ptserver -- maybe you only notice the next day when 
someone can't access any of their protected files.


OK, perhaps that's a bit extreme.  But maybe not.  It's not clear to me 
that we ever need to reach a point where existing cells upgrading to new 
code should automagically get supergroups support.  Sure, it should 
eventually be turned on by default in a newly-created prdb, but let's not 
unnecessarily break things for people who just want to keep their 
filesystem working, and especially for people who just want to not be 
forced by management to abandon AFS in favor of everyone just giving all of 
their files to Google.


-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Re: [OpenAFS-devel] 1.6 and post-1.6 OpenAFS branch management and schedule

2010-06-18 Thread Jeffrey Hutzelman
--On Thursday, June 17, 2010 11:59:29 AM -0700 Russ Allbery 
r...@stanford.edu wrote:



I'm quite sure that, after an unclean crash, your Windows server doesn't
remount the file system without doing a consistency check.  No operating
system treats its file systems that way.


MS-DOS did.  Of course, that hardly qualifies as an operating system.
Modern Windows definitely _does_ do a filesystem consistency check after a 
crash.  It's a bit better than most traditional UNIX systems about also 
automatically _repairing_ any problems it finds, so you have a fairly good 
chance of your system actually coming up afterward.



-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS-devel] Re: [OpenAFS] Re: 1.6 and post-1.6 OpenAFS branch management and schedule

2010-06-18 Thread Jeffrey Hutzelman
--On Thursday, June 17, 2010 04:12:48 PM -0500 Andrew Deason 
adea...@sinenomine.net wrote:



On Thu, 17 Jun 2010 15:54:25 -0500
Andrew Deason adea...@sinenomine.net wrote:


And as has been mentioned elsewhere in the thread, you need to wait for
the VG hierarchy summary scan to complete, no matter how fast salvaging
is or how many you do in parallel. That involves reading the headers of
all volumes on the partition, so it's not fast (but it is very fast if
you're comparing it to the recovery time of a 1.4 unclean shutdown)


Also, while I keep talking about this, what I haven't mentioned is that
it may be solvable. Although I've never seen any code or even a
complete plan for it yet, recording the VG hierarchy information on disk
would obviate the need for this scan. Doing this would allow you to
salvage essentially instantly in most cases, so you might be able to
recover from an unclean shutdown and salvage 100s of volumes in a few
seconds.


It's also worth noting that in a namei fileserver, each VG is actually 
wholly self-contained, so there is no reason in the world why you should 
have to scan every VG on the partition before you can start salvaging any 
of them.  The salvage server design really should take this property into 
account, as it seems likely that some future backends may also have this 
property.


-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Re: [OpenAFS-devel] 1.6 and post-1.6 OpenAFS branch management and schedule

2010-06-18 Thread Jeffrey Hutzelman
--On Thursday, June 17, 2010 01:45:14 PM -0500 Christopher D. Clausen 
cclau...@acm.org wrote:



I have heard that, but I have never experienced any problems myself in
many years of running that way.  In general the way I see it is that if
the power goes out, my server stays up for a little longer due to its UPS
but the network dies immediately so the AFS processes are not doing
anything when the power finally dies and the server goes down a few
minutes later.  (This is of course assuming no actual server crashes and
luckily I haven't had any of those.)


We're a bit more aggressive over here.  If the power goes out, my servers 
stay up for a little longer due to the UPS.  So does the machine room 
network.  And the rest of the machine room.  And the clients.  And _their_ 
network.  See, a few years ago a dean decided that it was unacceptable that 
a power outage had killed one of his desktop machines (the hardware, that 
is).  So, we raised the rates a bit, bought UPS's for every machine in 
every office, and after the first couple of years started a rotating 
replacement schedule.  It's really _very_ nice, but it does mean you can't 
count on the clients to die before the servers.


Really, I consider enable-fast-restart to be extremely dangerous.
It should have gone away long ago.

I realize some people believe that speed is more important than not losing 
data, but I don't agree, and I don't think it's an appropriate position for 
a filesystem to take.  Not losing your data is pretty much the defining 
difference between filesystems you can use and filesystems from which you 
should run away screaming as fast as you can.  I do not want people to run 
away screaming from OpenAFS, at any speed.


Bear in mind that enable-fast-restart doesn't mean "start the fileserver 
now and worry about checking the damaged volumes later."  It means "start 
the fileserver now and ignore the damaged volumes until someone complains," 
by which time it may be months later and too late to recover the lost data 
from backups.  It may also mean worse.



Also bear in mind that we're talking about a change after DAFS is good 
enough to be on by default, at which point restarts will _already_ be fast, 
even if you salvage everything that needs it up front, because not every 
volume will have been online at the time of the crash.





I guess I don't understand the particulars of what could happen, but if
one is really worried about sending corrupt data, wouldn't the best thing
to do be check the data as it is being sent and return errors then and
log that something is wrong, not require an ENTIRE VOLUME to be salvaged,
leaving all of the files inaccessible for a potentially long period of
time?  I assume that such a thing is not possible to do?


That's right; it's not possible to do.  We're not talking about verifying 
the (nonexistent) checksums we (don't) keep on data.  We're talking about 
verifying that the filesystem structure is self-consistent, so we don't 
have things like two unrelated directory entries pointing at the same 
vnode, or two vnodes pointing at the same underlying file, or whole volumes 
whose contents are unreachable because some directory entry is missing. 
And, we're talking about discovering cases where data has already been lost 
or destroyed, in time to maybe do something about it.


People often complain that the salvager destroys their data, or that fsck 
destroys their data.  This is almost never true.  What these programs do is 
discover that your data has already been destroyed, and repair the tear in 
the space-time continuum so that it is safe to keep using and changing 
what's left.



___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Re: [OpenAFS-devel] 1.6 and post-1.6 OpenAFS branch management and schedule

2010-06-18 Thread Jeffrey Hutzelman
--On Thursday, June 17, 2010 11:38:18 PM +0100 Simon Wilkinson 
s...@inf.ed.ac.uk wrote:




On 17 Jun 2010, at 21:40, Russ Allbery wrote:


There is that.  I intend to ship with DAFS enabled for Debian, but the
Debian packages have always taken a fairly aggressive approach to
enabling features.  (They have had supergroups enabled for quite some
time, for example, and also enable UNIX domain sockets for fssync, and I
intend to enable disconnected as well.)


This is one of the many problems with having too many knobs that can be
twiddled. There is no longer OpenAFS - you end up with Debian's build
of OpenAFS which behaves in a completely different way to the RPMS on
the OpenAFS website, which use completely different paths from the
RPMFusion RPMS, which behave differently again to the Solaris tarball
and so on. If you're a new user to OpenAFS, how on earth do you work out
which set of settings you should be using? Do you know that you're using
the Demand Attach Fileserver, or whether your package is build with
Transarc paths, or what the difference between inode and namei is? At
some point, you'll just give up in disgust.


People who have been at this a really long time, like the GNU project, are 
also very aggressive about not having this situation occur.  Absolutely 
anything that can be determined at runtime should be.  The exceptions tend 
to be things that only work if you have some external library to build 
against, cases where it's extremely important to be able to limit binary 
size (for example, monolithic kernels, or programs designed to be used in 
embedded devices), and cases where it's just not realistic to switch 
between two alternatives at runtime.


A good example of the last case, in AFS, is the fileserver storage backend. 
Currently we have somewhere between two and four such backends, depending 
on how you count (some of them are _very_ similar, and otherwise platform 
dependent).  It would be nice to be able to switch between them at runtime, 
or even support multiple backends in the same fileserver.  In fact, not 
having that capability is a major obstacle to transition on platforms that 
support both inode and namei fileservers.  However, it's Really Hard, 
because the code structure assumes there is just one backend and all sorts 
of related behaviors are determined at compile time.  Maybe someday.


At the moment, DAFS is also one of those cases.  Fixing this will require 
quite a bit of work, but I also think it's fairly important.  Hopefully 
someone will find the cycles to make it happen soon.





I'd really like us to standardise on a _small_ (ideally one) set of
supported configurations which we suggest for each release - and for the
binary packages that we point users at to use that set of configurations
across all platforms. It's the only way that we're ever going to manage
to produce a coherent documentation set, to provide meaningful advice on
lists and in chatrooms, and in general, retain our sanity.


+1

-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Re: [OpenAFS-devel] Re: Thinking about 1.6

2009-12-16 Thread Jeffrey Hutzelman
--On Wednesday, December 16, 2009 01:46:04 PM -0500 Derrick Brashear 
sha...@gmail.com wrote:



bos exec still works unless you give the restricted command line
switch. if you turn on random options without reading what you're
doing, you get what you paid for.


Perhaps you missed the part where Simon advocated making the new behavior 
the default?



Are we / how long are we keeping the inode fileserver backend around?


for sites with solaris 8, might as well let them upgrade to 1.6.
anyone else, well, i hope they aren't still using it.


We're still running it, on Solaris 9, and are in no hurry to change. 
Making this backend go away in 1.6, such that people are forced to change 
backends in order to upgrade their fileservers, seems too soon to me.


Making it the default behavior might be OK, provided we add code to make 
the fileserver recognize a vice partition containing existing inode volumes 
and refuse to start.


-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Re: [OpenAFS-devel] Re: Thinking about 1.6

2009-12-16 Thread Jeffrey Hutzelman

--On Wednesday, December 16, 2009 02:37:32 PM -0500 omall...@msu.edu wrote:


Solaris 8/9 hit the darn near unsupported list from Sun.
By the time 1.6 reaches production there won't be anyone running it at
least on production hardware.


HA HA HA you are so funny

You must think that people who run production services have nothing to do 
all day but buy new hardware and put new operating systems on it and 
migrate their services.


Also, please bear in mind that Solaris 8/9 is not one version, and 
support timelines for 8 and 9 are not the same.


-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Re: [OpenAFS-devel] Thinking about 1.6

2009-12-16 Thread Jeffrey Hutzelman
--On Wednesday, December 16, 2009 06:04:58 PM + Simon Wilkinson 
s...@inf.ed.ac.uk wrote:



*) Remove the --disable-afsdb switch, and associated #ifdefs, so AFSDB
comes as standard.


As long as we don't remove the ability to turn it off at runtime.  I just 
had a conversation today with someone who needs to run a client with a 
restricted set of configured cells, and part of the solution is turning off 
afsdb support on that client.




*) Remove the --enable-bos-restricted switch, and associated #ifdefs and
make this behaviour the default - it's still controllable from the
command line, and the
default case is safe.


I'm not convinced this should be the default.



*) Remove --enable-disconnected switch, and default the code to on. This
code has had a fair amount of testing, and there are currently no
performance issues with having it enabled by default. However, there are
still usability issues with the implementation.


If there are usability issues, why turn it on by default?


*) Make demand attach the default, but provide --disable-demand-attach-fs
to allow old-style fileservers to still be built


Uh...  I'm sure the people working on demand-attach would love this, but 
doing it requires making a decision that we won't release 1.6 until this 
feature is actually stable enough for _everyone_ to use on production 
servers, including people who don't know what they're getting themselves 
into.  I don't think we're there yet.


-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Re: [OpenAFS-devel] Thinking about 1.6

2009-12-16 Thread Jeffrey Hutzelman
--On Wednesday, December 16, 2009 12:40:18 PM -0800 Russ Allbery 
r...@stanford.edu wrote:



Buhrmaster, Gary g...@slac.stanford.edu writes:


Many (linux) packaging systems will just replace older versions without
a discussion with the installer about what else they need to change (it
is actually a pet peeve of mine that there is nothing equivalent to the
SMP/E HOLD(DOC) capability(*) in most packaging systems).


The package should recognize that an upgrade is happening and adjust the
bos configuration accordingly.


How do you propose to automate that, given that the existing configuration 
could provide arbitrary arguments or even use arbitrary binaries for the 
various fs bnode commands?
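For readers who haven't looked at one recently, bosserver configuration is
just a text file (BosConfig) of bnode stanzas.  The stanza below is a
hedged sketch of a typical fs bnode using Transarc-style paths; the
wrapper path and flags are invented for illustration, not taken from any
real system.  Because any parm line can name an arbitrary script or carry
arbitrary arguments, a package script has no reliable way to rewrite it:

```
bnode fs fs 1
parm /usr/afs/bin/fileserver -L
parm /usr/afs/bin/volserver
parm /usr/local/sbin/salvage-wrapper -datelogs
end
```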


-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Re: [OpenAFS-devel] Thinking about 1.6

2009-12-16 Thread Jeffrey Hutzelman
--On Wednesday, December 16, 2009 11:24:10 PM + Simon Wilkinson 
s...@inf.ed.ac.uk wrote:




On 16 Dec 2009, at 23:03, Jeffrey Hutzelman wrote:


How do you propose to automate that, given that the existing
configuration could provide arbitrary arguments or even use
arbitrary binaries for the various fs bnode commands?


If you're using my package, you'd better be using my binaries. If you've
changed things behind rpm's back, then you pretty much deserve what you
get - that's true of many, many RPMs.


Configuration of the bosserver does not belong to your package, and is 
legitimate for me to change.  For example, instead of running the salvager 
directly, I might run a wrapper that arranges for salvager logs to be dated 
(we did this for years before -datelogs appeared).

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Re: [OpenAFS-devel] Re: Thinking about 1.6

2009-12-16 Thread Jeffrey Hutzelman
--On Wednesday, December 16, 2009 11:25:05 PM + Simon Wilkinson 
s...@inf.ed.ac.uk wrote:




On 16 Dec 2009, at 23:03, Jeffrey Hutzelman wrote:


--On Wednesday, December 16, 2009 01:46:04 PM -0500 Derrick Brashear
sha...@gmail.com wrote:


bos exec still works unless you give the restricted command line
switch. if you turn on random options without reading what you're
doing, you get what you paid for.


Perhaps you missed the part where Simon advocated making the new
behavior the default?


Even if bos restricted is enabled at compile time, you don't see any
changes unless you run bos setrestricted against a bos server. That's
what I'm advocating - making all bosserver binaries (and all bos clients)
support restricted mode, but _not_ enabling it by default at run time.


OK, that's fine.  I must have misinterpreted your earlier message.

-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Re: [OpenAFS-devel] exposing RPC code-name mappings via rxgen extension, a library, and a new utility

2009-01-15 Thread Jeffrey Hutzelman
--On Thursday, January 15, 2009 02:00:09 PM -0500 Steven Jenkins 
steven.jenk...@gmail.com wrote:



I would like to expose RPC code-name mappings so that other programs
within OpenAFS can avoid hard-coding the mappings, as well as be able
to export them to the users (who might find them useful in debugging
network traces, for example, where their particular tool does not know
what a particular opcode corresponds to). From a user-level, it would
work as follows:

$ translate_rpc -name PR_INewEntry
500

It would accomplish this by extending rxgen to pull the procedure
identifier and opcode from the specification file: e.g., given the
following hunks of code:

package Package_ident
...
 Procedure description option:

[proc] [Procedure_ident] [ServerStub_ident]
Argument list [split | multi]
[= Opcode_ident] ;

would produce new tables which would automatically go into the .h file
for that specification file: e.g.,

Package_ident_name[Opcode_ident] = Procedure_ident
and
Package_ident_opcode[Procedure_ident] = Opcode_ident


Sorry, this is about as clear as mud, perhaps because the above isn't valid 
C and certainly isn't a declaration, and perhaps because all the extraneous 
_ident is confusing me.  You sound like you're proposing creating a pair 
of arrays to be used as lookup tables, but this has a couple of problems:


1) Translation of opcodes to names could be done by an array lookup, but it 
shouldn't be, because the required array will generally be quite large and 
very sparse.  Instead, you should emit a _function_ which uses the same 
logic as the ExecuteRequest function already emitted by rxgen, and which 
handles large gaps in the opcode space in a reasonably efficient way.


2) Translation of names to opcode cannot be done by an array lookup, 
because this is C, not awk or python, and strings cannot be used as array 
indices.  Again, I recommend actually emitting a function which does the 
required translation.  This won't be like anything currently in OpenAFS, 
but shouldn't be too hard to construct.  I recommend looking at using gperf 
to generate a perfect hash table for each set of procedures.



It should be possible to get rxgen to produce these functions for any 
interface, and preferably in a separate file from any of its other outputs, 
so that they may be collected together into a library that has no other 
dependencies.  I would also very much like to see a mode in which rxgen 
emits a simple table of opcode numbers and procedure names, one per line. 
This would be useful in constructing a standalone lookup tool that reads 
one table per RPC interface (similar to something I've already done for 
com_err tables), and may also be of use to the registrars in constructing 
some of the procedure number tables we currently don't have.


-- Jeff
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Re: [OpenAFS-devel] interface for vos split

2009-01-09 Thread Jeffrey Hutzelman
--On Thursday, January 08, 2009 12:32:20 PM -0800 Russ Allbery 
r...@stanford.edu wrote:



Steven Jenkins steven.jenk...@gmail.com writes:


fs getfid (like virtually all of the fs subcommands) is implemented by
marshalling arguments and then making a PIOCTL call into the kernel.
Without a cache manager, you can't get a response to that PIOCTL.  Even
with a cache manager running, you would need to marshal up the arguments
and make a PIOCTL, which means linking vos.c in with new libraries.  vos
is already huge; I think making it understand how to do PIOCTL calls
would be significant enough to where we would look at getting rid of fs
entirely (i.e., if vos can do one PIOCTL, adding the rest is relatively
straightforward).


These are all reasonable arguments from a code perspective


Except, of course, that vos already depends on libsys, and already contains 
the (relatively trivial) code required to make AFS system calls, including 
pioctl.  The additional code required to call VIOCGETFID is something like 
half a dozen lines:


struct ViceIoctl blob;
struct VenusFid fid;
blob.out = (char *) &fid;
blob.out_size = sizeof(fid);
blob.in_size = 0;
pioctl(path, VIOCGETFID, &blob, 1);



Thus it seems to me most straightforward from a user-experience
viewpoint to require the vnode.


Straightforward, but difficult to use.  Vnode numbers are an implementation 
detail that we should not be exposing to users.




I think
the above would be less confusing than an implementation that sort of
supports directory names but doesn't in a way that users expect.


Agree
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Re: Implicit A in fileserver

2007-04-13 Thread Jeffrey Hutzelman



On Friday, April 13, 2007 05:14:28 PM -0700 Adam Megacz 
[EMAIL PROTECTED] wrote:




Bill Stivers [EMAIL PROTECTED] writes:

I know that this discussion was beaten 7 ways from Sunday in the
recent past, but I thought it worth asking.  Did someone ever get
around to committing a patch that enabled switching behavior between
implicit 'a' for directory creators versus no implicit 'a' for
directory creators?


This patch adds a configure-time --disable-volume-owner-a which
has the desired effect.


No, it doesn't.  Bill was asking about implicit permissions granted to the 
owner (creator) of a directory, not a volume.


Also, please do not add configure-time options to control behavior.  New 
configure-time options are normally appropriate only when they affect 
things which must be decided at compile-time, such as which platform we're 
building for, what directory layout to use, or which external packages to 
use.  From time to time you will see a configure option to enable a feature 
which represents a substantial code change which is not yet ready to be 
included by default, but those cases are exceptions rather than the rule.



I've also updated the compendium of ways AFS cares about UNIX
owner/group/modebits after staring at afsfileprocs.c for a while:


It might help to start with the idea that AFS is not other filesystems, and 
has its own semantics, rather than being surprised every time AFS's 
semantics are not the same as those of some existing filesystem...


-- Jeff


Re: [OpenAFS] Add new fileserver

2007-04-12 Thread Jeffrey Hutzelman



On Thursday, April 12, 2007 10:41:28 AM -0500 Christopher D. Clausen 
[EMAIL PROTECTED] wrote:



chas williams - CONTRACTOR [EMAIL PROTECTED] wrote:

In message [EMAIL PROTECTED], Steve Simmons writes:

my servers dont start with afs.  is this a common thing?


Most cells I've seen do that. But that particular line is a umich-ism
that ought to be taken out.


looking at CellServDB i wouldnt say most.


Well, certain cells might have a different entry in the CellServDB
versus various CNAMES that are used locally.

[EMAIL PROTECTED] C:\vos listaddrs
cac.illigal.uiuc.edu
gcs.illigal.uiuc.edu
ial.illigal.uiuc.edu

[EMAIL PROTECTED] C:\@for /l %i in (1,1,3) do @vos partinfo
file%i.illigal.uiuc.edu
Free space on partition /vicepa: 6603356 K blocks out of total 6636464
Free space on partition /vicepa: 8994296 K blocks out of total 9125772
Free space on partition /vicepa: 278516032 K blocks out of total
279261540
Free space on partition /vicepb: 547315572 K blocks out of total
547446812
Free space on partition /vicepc: 575637588 K blocks out of total
602187124

(In this instance, I have AFS servers prefixed with file instead of
afs.)



One well-known cell uses 'vice'.

My fileservers are apple, cranberry, grape, tomato, orange, date, fig, 
plum, apricot, cherry, strawberry, pumpkin, kiwi.


Another cell I know has servers named ronald-ann, rosebud, and reynelda 
(with aliases ra, rb, and rc).


Then there's the grand.central.org cell, whose servers are named 
penn.central.org, blueberry.srv.cs.cmu.edu, grand-opening.mit.edu, and 
andrew.e.kth.se.


At yet another site I know, it is possible to tell by looking at a 
machine's hostname whether it is an AFS fileserver, but only if you know 
how to parse their rather cryptic hostnames.


-- Jeff


Re: [OpenAFS] Maximum # of users

2007-04-11 Thread Jeffrey Hutzelman



On Monday, April 09, 2007 06:51:33 PM -0400 Marcus Watts [EMAIL PROTECTED] 
wrote:



Max user id is 851087 and max group id is -19786.


It's always fun to watch you demonstrate to someone that they're not really 
as big as they think they are.  It helps the rest of us keep a sense of 
perspective. :-)




Older versions of ptserver had an option called CROSS_CELL,
which had some sort of split 16-bit assumption about viceIDs.
We never ran with this at umich.edu, and the code seems to be gone
in modern versions of openafs.


Actually, you're running that code now.  The ptserver had some notion of 
foreign users as far back as the end of 1988, but it wasn't fully developed 
then; in fact, that code wouldn't let anyone create an entry with an '@' in 
its name, ever, though you could apparently create entries with PRFOREIGN 
set and no '@' in the name, even if you were an ordinary user!


The modern form of cross-cell authentication first appeared in AFS 3.2, 
along with the CROSS_CELL macro which enabled it.  Starting in AFS 3.5, 
that functionality was turned on by default, and references to the 
CROSS_CELL macro went away.


Foreign users are identified by the presence of an '@' in their name, and 
can only be created if the corresponding system:authuser@cellname group
exists.  The group quota on that entry is used to control the number of 
users that can be created from that cell; when it runs out someone needs to 
add more in order to allow more users to be created.


ID's for foreign users are based on the ID of the corresponding 
system:authuser@cellname group.  The low-order 16 bits of a foreign user
ID are the same as those of the group; the high-order bits start at 1 and 
increment for each new user.  If you have two foreign-cell groups whose 
ID's are the same in the low-order 16 bits, then users from those cells 
will have ID's drawn from the same namespace.  In AFS 3.2, the allocation 
method was primitive; each foreign-cell group has a counter which records 
the next available ID for that cell; if that ID was not available, then no 
new users could be created in that cell until an admin created one with an 
explicit ID (drawn from the correct range).  Today, the counter is still 
used, but only as an optimization; the ptserver starts from there and 
searches for the next available value, similar to what is done for user and 
group ID's.


So, there is a limit of 2^15-1 foreign users per cell, and a nominal limit 
of 2^31-1 users in total.  However, before you reach 2^31-1 users, your 
PRDB will grow too large for Ubik to handle -- the DISK protocol can't 
handle file sizes or offsets larger than 2^31-1, and PTS entries are 192 
bytes, which means you'll max out at around 11 million entries (including 
the extension blocks used when a user or group has more than 10 
memberships).  Of course, that's assuming you don't start having problems 
with recovery first.





Older unix systems had a 16-bit uid limit


Except Ultrix, which had a limit of exactly 32000, above which setuid() 
would fail :-)



-- Jeff


Re: [OpenAFS] com_err hell (WAS: asetkey: failed to set key, code 70354694)

2007-04-11 Thread Jeffrey Hutzelman



On Tuesday, April 10, 2007 03:56:03 PM -0400 Marcus Watts [EMAIL PROTECTED] 
wrote:




Granted, it's not as pretty as it should be, and it would be good
for all those groups you named to come to a better consensus as to
how this should all work.


That is a discussion for comerrers.
The question of what OpenAFS should do is a discussion for openafs-devel.

Neither discussion belongs on openafs-info.

-- Jeff



Re: [OpenAFS] Maximum # of users

2007-04-11 Thread Jeffrey Hutzelman



On Wednesday, April 11, 2007 11:55:18 AM -0400 Dave Botsch 
[EMAIL PROTECTED] wrote:



Hmmm... interestingly enough, the group quota for my
system:authuser@cellname is set at 7 (I suppose 7 is the default?)
yet somehow 23 members have been automatically created in that group.


The default is 30.  The group quota is managed in exactly the same way as 
for users; it is decremented by one each time a foreign user is created.


-- Jeff


RE: [OpenAFS] REMINDER: LAST DAY [AFS Kerberos Best Practices Workshop 2007: CFP Extended]

2007-04-07 Thread Jeffrey Hutzelman



On Friday, April 06, 2007 08:12:12 PM -0700 ted creedon 
[EMAIL PROTECTED] wrote:



Depends who is smart enough..
tedc

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Derrick J Brashear
Sent: Friday, April 06, 2007 4:41 PM
To: 'OpenAFS-info'
Subject: RE: [OpenAFS] REMINDER: LAST DAY [AFS  Kerberos Best Practices
Workshop 2007: CFP Extended]

On Fri, 6 Apr 2007, ted creedon wrote:


I'd like to see an Rx protocol talk.


Do 8 mailing lists all care?



No; in fact, Derrick's point is that 8 mailing lists do _not_ all care...

- openafs-announce and arla-announce don't care, and you shouldn't be
 trying to post things to those lists that are not announcements.
 And by that I mean official announcements from people authorized to
 post to those lists.

- kerberos@mit.edu and heimdal-discuss don't care; those are Kerberos
 related lists, and Kerberos doesn't use Rx.

- Given what you asked, openafs-info doesn't care either -- that list is
 for discussions relevant to AFS users and administrators; topics of
 interest only to developers and hackers belong on -devel.

- I don't think info-afs cares about _anything_ any more -- it's a fairly
 dead list, and the workshop announcements are sent there only for
 completeness.


So, that leaves a grand total of two lists that might care.  Maybe.



The point is, wide-replying to an announcement is generally considered 
impolite, because such replies typically reach far more people than care 
about your question/comment/request/whatever.  So, please don't do it.


-- Jeff


Re: [OpenAFS] uw-imap tokens

2007-04-04 Thread Jeffrey Hutzelman



On Wednesday, April 04, 2007 06:07:46 PM +0100 David Howells 
[EMAIL PROTECTED] wrote:



How's the afs_pag key getting allocated?  Is it by a PAM module?


No; it gets allocated by AFS as part of the setpag operation.  Of course, 
the setpag may be being called by a PAM module, but that should be fairly 
irrelevant.


Without having looked at this in much detail, I'll hazard a guess as to 
what's going on.  I'll bet the PAG (and thus the key) are created while 
sshd is still UID 0, and thus are being charged against UID 0's quota.  If 
this is the case, I would suggest not applying keyring quotas to UID 0; if 
root wants to exhaust all the resources the machine has to offer, so be it.


-- Jeff


Re: [OpenAFS] uw-imap tokens

2007-04-04 Thread Jeffrey Hutzelman



On Wednesday, April 04, 2007 08:33:34 PM +0100 David Howells 
[EMAIL PROTECTED] wrote:



That'd be my bet too.  I suspect that the PAM module (if that's what it
is) that issued setpag occurs before the pam_keyinit PAM module also.


Oh, hm.  That's not good.  We may find ourselves back in exactly the same 
situation that made it necessary to trap setgroups in the first place - it 
doesn't work to track PAG's using something whose inheritance semantics are 
different from those of PAG's.




If this is the case, I would suggest not applying keyring quotas to UID
0; if root wants to exhaust all the resources the machine has to offer,
so be it.


That's not a good solution.  The afs_pag gets attached to the root user's
default session keyring, displacing any afs_pag that was previously there.


It shouldn't get attached to the default session keyring at all, because 
that would cause the PAG to be inherited by newly-created sessions for that 
UID, wouldn't it?  That's certainly not the right thing; a PAG should be 
part of the session's actual keyring (with one being instantiated, if 
necessary), not the user's default session keyring.




What does the setpag code look like?


See http://cvs.openafs.org/src/afs/LINUX/osi_groups.c, particularly 
setpag().


-- Jeff


Re: [OpenAFS] Re: unix owner/group of files in AFS

2007-03-30 Thread Jeffrey Hutzelman



On Friday, March 30, 2007 01:25:31 PM +0200 FB [EMAIL PROTECTED] wrote:


I'll bet you also haven't tried it with a fileserver down.


Yes. Actually, my test cell has some fileservers and one of 3 db-servers
down-by-default. The only impact is a short delay on bootup of the
afs-client until ptdbnssd marked the db-server down.

Did I mention that the nss-plugin is just a very small piece of software,
talking to a local server process (ptdbnssd) which does the real
PTDB work?


You did.  I was talking about the case where you get shells or other 
information from users' home directories, and one of the fileservers 
housing user volumes is down, so you get to wait while it times out.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Re: unix owner/group of files in AFS

2007-03-29 Thread Jeffrey Hutzelman



On Thursday, March 29, 2007 09:45:47 AM +0200 FB [EMAIL PROTECTED] wrote:


Bear in mind that when you do something like 'ls', your NSS module will
be called to do an id-to-name lookup for _every file_.


ls is a bad example because it doesn't ask once per file but once per UID
(see coreutils' idcache) ;-)


All the world is not a VAX^H^H^H^H^HLinux.
Not every ls has that optimization.



That can get real
slow if you don't cache results or have to go out and look at a user's
home directory, open files, etc for every lookup.  It makes nss_ldap
pretty much unbearable without nscd.  Bear in mind that you cannot tell
the difference between something like ls that just wants a name, and
something that needs some other field or the whole entry.


I got your point. However - it's working fine here. We've got ~ 150 linux
PCs here using it without nscd and it was quite an improvement over
nss-ldap which we used before.


OK; so you haven't yet tried it in an environment where scalability is an 
issue.  I have at least ten times that many clients, and my site is pretty 
small.  Ask the folks at UMich or Morgan Stanley how that would work for 
them.


I'll bet you also haven't tried it with a fileserver down.

-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Re: chown()

2007-03-29 Thread Jeffrey Hutzelman



On Wednesday, March 28, 2007 04:14:18 PM -0700 Adam Megacz 
[EMAIL PROTECTED] wrote:




Jeffrey Hutzelman [EMAIL PROTECTED] writes:

Not true.  There are a number of subtle uses of file owners in AFS,
particularly with regard to how directories work where you have 'i'
but not 'w'.


Hrm.  Are these documented anywhere (other than the source code)?


The published documentation explains how dropboxes work from the user's 
point of view.  I don't believe there is a good description of the 
mechanisms that make it work, but basically, if you have 'i' on a 
directory, the fileserver will let you write to files in that directory 
which you own.


-- Jeff


Re: [OpenAFS] Problems setting up an initial AFS cell...

2007-03-29 Thread Jeffrey Hutzelman



On Thursday, March 22, 2007 01:45:29 PM -0500 Marcus Watts [EMAIL PROTECTED] 
wrote:



The current openafs cvs repository does contain the documentation
from ibm - one of the things that needs doing is to update
this documentation to reflect whatever we want people to
be doing today.  There are also some improvements that can
be made to ptserver ('pts -localauth') that would improve
the install experience (avoiding -noauth).


Or one can seed one's PRDB using pt_util.  See the afs-newcell script in 
the Debian packages for an example of this.


-- Jeff


Re: [OpenAFS] Streaming windows media?

2007-03-29 Thread Jeffrey Hutzelman



On Friday, March 23, 2007 10:30:42 AM -0400 Jeffrey Altman 
[EMAIL PROTECTED] wrote:



Robbie Foust wrote:

Hi,

Has anyone ever set up a Windows Media Server to point to content in AFS
using the windows client?  Just wondering how well that would work or
how reliable it would be.  I know the windows client is *much* more
stable now than it has been in the past.  Our other option is to connect
the servers to a san (which we already have).


The question is how much bandwidth do you require and how large a cache?

The 32-bit Windows client is limited to about 1GB of cache.  For a media
server you probably want a much larger cache.  The 64-bit Windows client
can support much much larger caches provided you have enough RAM on the
machine.  I've tested with a 12GB cache on a machine with 1GB of RAM.
I know there are problems at 20GB on a 1GB machine because the swapping
is too great.  However, a 60GB cache on a machine with 8GB RAM should
work quite nicely.


Actually, cache size is probably not as much an issue here as you might 
think.  What is interesting is


(1) how fast you intend to send data to clients, and
(2) how much pre-buffering you're doing as compared to the AFS chunk size.

When you read a chunk that is not cached, you will have to wait for the 
entire chunk to be received from the fileserver before you will get to do 
any processing on it.  In addition, AFS does not do any sort of automatic 
read-ahead, so cold-cache reads will be rather bursty, requiring more 
pre-buffering and/or extra effort to achieve a continuous transfer.


-- Jeff


Re: [OpenAFS] inspect pid-to-pag mapping? pag-to-tokens-mapping?

2007-03-29 Thread Jeffrey Hutzelman



On Saturday, March 24, 2007 12:41:46 PM -0700 Russ Allbery 
[EMAIL PROTECTED] wrote:



Adam Megacz [EMAIL PROTECTED] writes:


Is it possible to find out what PAG a given PID belongs to (on linux,
with local root)?


grep Groups /proc/<pid>/status

if the PAG group still exists.


They do.  In very recent versions of OpenAFS, the PAG will be represented 
by a single group whose ID is 0x41000000 plus some 24-bit number.  In older 
versions, it's a pair of groups with a funny encoding; for details, see 
src/afs/afs_osi_pag.c:afs_get_pag_from_groups().



Given a PAG, is it possible for a (root) process to find out what
tokens that PAG holds without being part of the PAG?


There isn't in the traditional interface so far as I know.  Keyrings may
offer a way.


Nope.  We use keyrings to hold PAG membership information, not tokens.

-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Re: unix owner/group of files in AFS

2007-03-28 Thread Jeffrey Hutzelman



On Tuesday, March 20, 2007 08:58:41 PM +0100 FB [EMAIL PROTECTED] wrote:


No. The nss-plugin actually returns this:

('frank','x',1000,65534,'frank','/afs/alpha/user/frank','/bin/bash')

Nobody here uses a shell other than Bash, which is why I didn't really
care about making the login shell non-static.


How hard would it be to fake shell info as well, say by creating
shell.zsh, shell.bash, etc PTS groups and putting a pts user in one?


Shouldn't be complicated. But maybe it's a better idea to evaluate a file
or a symlink in the user's home-volume.


Something like this intended for heavy use should

(1) cache results
(2) not touch users' home directories

Bear in mind that when you do something like 'ls', your NSS module will be 
called to do an id-to-name lookup for _every file_.  That can get real slow 
if you don't cache results or have to go out and look at a user's home 
directory, open files, etc for every lookup.  It makes nss_ldap pretty much 
unbearable without nscd.  Bear in mind that you cannot tell the difference 
between something like ls that just wants a name, and something that needs 
some other field or the whole entry.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Re: [OpenAFS-announce] OpenAFS Security Advisory 2007-001: privilege escalation in Unix-based clients

2007-03-28 Thread Jeffrey Hutzelman



On Friday, March 23, 2007 10:04:28 AM -0400 Jeffrey Altman 
[EMAIL PROTECTED] wrote:





Kim Kimball wrote:

I'm still wondering if

a.  Removing system:anyuser from ACLs will prevent this privilege
escalation
b.  Removing system:anyuser from ACLs except system:anyuser l will
prevent the privilege escalation (i.e. the only occurrence of
system:anyuser is with l permission)

Any definitive conclusions?

Thanks!

Kim


As has been discussed on this list over the last few days, modifying the
contents of unprotected data retrieved via anonymous connections is just
one form of attack that can be executed.


What Jeff is trying to say is no.
Changing ACL's will not prevent this attack.
Changing servers will not prevent this attack.
Period.

The only way to prevent this attack is for clients not to honor suid bits 
from sources not trusted _by the client_.  Since the current AFS client has 
no way to obtain a secure connection to the fileserver with which the user 
cannot tamper, all sources must be considered untrusted.


-- Jeff


Re: [OpenAFS] Re: [OpenAFS-announce] OpenAFS Security Advisory 2007-001: privilege escalation in Unix-based clients

2007-03-28 Thread Jeffrey Hutzelman



On Wednesday, March 21, 2007 02:53:50 PM -0400 Jason Edgecombe 
[EMAIL PROTECTED] wrote:



Ok, so local access is required for OPENAFS-SA-2007-001 to be exploited?


No, but it's a lot easier.  Without local access, you not only need to 
convince the client that some file you can write to is suid; you also have 
to convince someone/something that _does_ have local access to run it.




Can a non-root user exploit it?


This is a privilege escalation on the client.  By definition, only a 
non-root user can exploit it; root users are already privileged.



-- Jeff


Re: [OpenAFS] Security Advisory 2007-001: privilege escalation in Unix-based clients

2007-03-28 Thread Jeffrey Hutzelman



On Wednesday, March 28, 2007 04:16:38 PM -0500 Christopher D. Clausen 
[EMAIL PROTECTED] wrote:



Jeffrey Hutzelman [EMAIL PROTECTED] wrote:

On Friday, March 23, 2007 10:04:28 AM -0400 Jeffrey Altman
[EMAIL PROTECTED] wrote:

Kim Kimball wrote:

I'm still wondering if

a.  Removing system:anyuser from ACLs will prevent this privilege
escalation
b.  Removing system:anyuser from ACLs except system:anyuser l will
prevent the privilege escalation (i.e. the only occurrence of
system:anyuser is with l permission)

Any definitive conclusions?


As has been discussed on this list over the last few days, modifying
the contents of unprotected data retrieved via anonymous connections
is just one form of attack that can be executed.


What Jeff is trying to say is no.
Changing ACL's will not prevent this attack.
Changing servers will not prevent this attack.
Period.

The only way to prevent this attack is for clients not to honor suid
bits from sources not trusted _by the client_.  Since the current AFS
client has no way to obtain a secure connection to the fileserver
with which the user cannot tamper, all sources must be considered
untrusted.


If there was a way to make the client only use encrypted connections
(force fs setcrypt on and ignore unencrypted connections) would that be
sufficient to prevent the privilege escalation?


No.  Even if you could do that, the connections are encrypted and 
authenticated using keys known to the user making the request.  So a user 
can spoof the response to his own (authenticated) request, indicating that 
a file is suid 0 when it really is not.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] jafs et al

2007-03-13 Thread Jeffrey Hutzelman



On Tuesday, March 13, 2007 08:07:42 PM -0500 Marcus Watts [EMAIL PROTECTED] 
wrote:



user vs kernel mode vs. user kernel mode


Actually, we don't really have this dimension.  No libraries are built for 
kernel-mode code; any code the kernel module requires from the rest of the 
tree is built separately and linked directly into the kernel module.  The 
situation is similar for libuafs, though the other dimensions certainly 
exist for that library as a whole.


Personally, I'd like to see a consistent set of libraries available in all 
six forms (lwp/pthread x shared/pic/nonpic).  However, note that for many 
of our libraries, building pthread versions is more complex than just 
changing a few compiler switches -- there is still a lot of code in AFS 
which assumes that context switches can only happen at very specific points.


-- Jeff


RE: [OpenAFS] OpenAFS Client Availability

2007-03-08 Thread Jeffrey Hutzelman



On Thursday, March 08, 2007 12:05:07 PM -0900 ted creedon 
[EMAIL PROTECTED] wrote:



This is true, but they are unset and I assume the default values are as
noted in the sources. The definitions of ip_ct_udp_timeout and
ip_ct_udp_timeout_stream are in seconds so I don't understand the
jiffies/seconds conversion requirement.


The values you set via the sysctl interface are expressed in seconds.
The values present in variables in the kernel are expressed in jiffies.
The code which handles the get and set operations for those sysctl values 
handles the conversion for you, so you don't need to worry about the 
difference unless you are looking at kernel code and assuming the same 
rules apply to it that apply to user code, which is not the case.


-- Jeff


Re: [security-discuss] Re: [OpenAFS] Hardware Grants from Sun

2007-02-26 Thread Jeffrey Hutzelman
On Mon, 26 Feb 2007, Nicolas Williams wrote:

 On Sun, Feb 25, 2007 at 06:47:38PM -0800, Henry B. Hotz wrote:
  On Feb 23, 2007, at 10:10 PM, Nicolas Williams wrote:
  BTW, a PAG facility that's faithful to the AFS notion of PAGs
  should be
  relatively easy to specify and implement for Solaris, but it will be
  more involved than you might have thought.  That's because we have
  proc(4), proc(1), truss(1) and ucred_get(3C) to worry about, plus
  libproc.  So we're talking about:
 
  Does it still need to be that involved if all it is is an index number?

Please move this discuission to openafs-devel, where it belongs.



Re: [security-discuss] Re: [OpenAFS] Hardware Grants from Sun

2007-02-26 Thread Jeffrey Hutzelman



On Sunday, February 25, 2007 04:21:45 PM -0600 Nicolas Williams 
[EMAIL PROTECTED] wrote:



A while back I designed such an API, which I called the generic
credential store API (GCS-API) that provides a way to get a handle to
the current credential store for a given thread, process, session or
user, a way to associate a credential store handle with a thread,
process, session or user, a way to list the credentials references in a
store, and so on.


Note that while you can do that, it doesn't actually answer AFS's need, 
which goes beyond merely storing credentials.  We also have to be able to 
associate a PAG(*) with cached connection state and access control data, 
which is threaded through other data structures in a way we can't easily 
change for each platform.  That means it's necessary for each PAG to 
actually have a unique, long-lived, unforgeable identifier.



(*) PAG is short for Process Authentication Group.  Some people are 
apparently confused about what this means, so I thought I'd try to clarify 
up front -- a PAG is a set of processes, not a place to store credentials. 
AFS does track credentials on a per-PAG basis, but the essential thing we 
need from an OS is not a credential store; it's a way to obtain the 
identifier for the PAG to which a given process belongs.


-- Jeff


Re: [OpenAFS] Quota, Openafs

2007-02-26 Thread Jeffrey Hutzelman



On Monday, February 26, 2007 01:28:10 PM +0100 Alexander Al 
[EMAIL PROTECTED] wrote:



Hello,

We have here a openAFS 1.4.x system on a FC5 server and the users have a
quota of 1GB. But the trick is how do you give the users a signal that
they almost through their quota?


If you feel a need to do that, you build a tool that checks periodically 
and sends them mail or something.  A user can determine the quota and usage 
on a volume using 'fs lq'; when the quota is exceeded, attempts to write to 
that volume will fail.



Re: [OpenAFS] Hardware Grants from Sun

2007-02-24 Thread Jeffrey Hutzelman
On Sat, 24 Feb 2007, Nicolas Williams wrote:

 I'm not sure how important it is to have per-session network
 credentials, but I do sympathize -- if nothing else it's what AFS users
 are accustomed to.  Issues surrounding how per-user network credentials
 are handled are a separate, but related concern.

If nothing else, it makes them easier to manage when a user has multiple
overlapping sessions on the same machine.  There are also plenty of people
for whom it's really important to be able to maintain multiple sets of
network credentials in one session, often simultaneously.  I use that
capability nearly every day to do things like install software without
giving the bits required to do so to my email client or web browser.  I'm
sure Jeff Altman can make plenty of arguments in this area.


  As for home directories; we've been putting users' home
  directories in AFS for O(15) years, though we only appear to have been
  supporting Solaris since 1995. If you have specific issues, please
  describe them instead of asking that Sun be willing to state a desire
  for things to work that already do.
 
  There are still issues with having to have an AFS token before any
  files in the home directory are accessed, even the .k5login. Since this
  is a general OS problem.
 
  The point is things don't work as well as they could, partly because the
  OS developers don't use AFS. This acceptance of a gift might be the
  time to get Sun to look a little closer at how things really work.

 I have no idea what gift you're talking about.  If Sun is donating
 equipment to the OpenAFS community, I think that'd be great.

So would we, but that's pretty much where it stands.  This thread started
with a proposal to apply for a grant, along with someone's guess as to
what we might be able to get.  The latter was not intended to be made
public, and I think it has confused some people into thinking the process
is rather further along than it is.

 BTW, a PAG facility that's faithful to the AFS notion of PAGs should be
 relatively easy to specify and implement for Solaris, but it will be
 more involved than you might have thought.  That's because we have
 proc(4), proc(1), truss(1) and ucred_get(3C) to worry about, plus
 libproc.  So we're talking about:


  - new getpag()/setpag() syscalls and library stubs
  - new cr*() functions
  - procfs (proc(4)) exts (say, PCSPAG operation to set a proc's PAG)
 - and libproc exts (Ppag(), Psetpag())
 - and proc(1) exts (say, a new option for pcred(1), or a new cmd?)
  - ucred_get(3C) exts (ucred_getpag(3C))
  - kinit(1) exts?
  - pam_unix_cred(5) exts? (set the caller's PAG!)
  - extensions to krb5_cc_default() and gssd(1M) to find per-session
ccaches instead of per-user ccaches

 The ARC will have to see a spec too.  It'd help to have OpenAFS folks
 helping us get a spec together and get through the ARC review.

OK; but that's a discussion for security-discuss and openafs-devel (gee,
that will be fun for the moderators); it's sort of off-topic for
openafs-info.

-- Jeff



Re: [OpenAFS] Hardware Grants from Sun

2007-02-23 Thread Jeffrey Hutzelman



On Friday, February 23, 2007 09:23:21 AM -0600 Douglas E. Engert 
[EMAIL PROTECTED] wrote:



So getting 100,000 in equipment is only part of it. If you are
willing to state a desire to target OpenSolaris, Sun should be willing
to state a desire to integrate AFS credential handling
in their products too, like ssh delegation of credentials to get
AFS tokens, and having home directories in AFS.


Doug, it's worth noting that the sorts of people who can give away 
equipment often have little or no control over things like operating system 
development, and asking for such things is at best useless.  On the other 
hand, we have plenty of contacts within Sun to help us with issues like 
this, and OpenSolaris, like OpenAFS, is an open-source software project in 
which any of us can participate.


Incidentally, it should be noted that Sun's ssh supports GSS-API userauth 
and key exchange out of the box, including credential delegation, and that 
its PAM support is considerably better than that of OpenSSH.  As for home 
directories; we've been putting users' home directories in AFS for O(15) 
years, though we only appear to have been supporting Solaris since 1995. 
If you have specific issues, please describe them instead of asking that 
Sun be willing to state a desire for things to work that already do.


-- Jeff


Re: [OpenAFS] Hardware Grants from Sun

2007-02-23 Thread Jeffrey Hutzelman



On Friday, February 23, 2007 12:03:58 PM -0600 Douglas E. Engert 
[EMAIL PROTECTED] wrote:



So to force sshd to use a session based cache we added a
pam_krb5_cache.so.1 cache=/tmp/krb5cc_%u_%p to set the cache name.


Hooray for extensibility!



Also, as you must already know, I have been bugging them to
release the Kerberos header files for Solaris 10, so one could
compile *aklog* using the Solaris Kerberos. (This is reported to be
in update 4; looks like this might be another 6 months!)
We have been using OpenSolaris Kerberos header files with Solaris 10,
and so far it works.


There are krb5 headers in /usr/include/kerberosV5 on my snv_56 box.


As for home directories; we've been putting users' home
directories in AFS for O(15) years, though we only appear to have been
supporting Solaris since 1995. If you have specific issues, please
describe them instead of asking that Sun be willing to state a desire
for things to work that already do.


There are still issues with having to have an AFS token before any
files in the home directory are accessed, even the .k5login. Since this
is a general OS problem.


That's hardly specific to Solaris, nor really something Sun can do anything 
about, short of using a different authorization model.  My usual 
recommended answer to this problem is to be less fascist about home 
directory ACL's, but of course that's not for everyone.




The point is things don't work as well as they could, partly because the
OS developers don't use AFS. This acceptance of a gift might be the
time to get Sun to look a little closer at how things really work.


Bear in mind that at the moment, we're not talking about whether we should 
accept a grant.  We're talking about whether we should ask for one.  (In 
fact, even that isn't really a topic for openafs-info, but it's too late to 
do anything about that now).


-- Jeff


Re: [OpenAFS] Hardware Grants from Sun

2007-02-23 Thread Jeffrey Hutzelman



On Friday, February 23, 2007 04:22:22 PM -0600 Douglas E. Engert 
[EMAIL PROTECTED] wrote:



Same here. Symlinks to a .Dotfile directory. Messy but works.
(My home directory has been in AFS since 1992.)
But until this general problem can be solved on *all* platforms,
one cannot tighten down the ACLs on the home directory. Maybe
Sun can do something about it on their systems. NFSv4 should
have the same problem, so maybe they will.


Exactly what solution should they apply, and why should each OS vendor do 
it unilaterally instead of the Kerberos implementors working something out?


-- Jeff


Re: [OpenAFS] Possible Kernel Memory leak, OpenAFS 1.4.2+, RH3 i686/amd64

2007-02-20 Thread Jeffrey Hutzelman



On Tuesday, February 20, 2007 11:25:56 AM -0500 chas williams - CONTRACTOR 
[EMAIL PROTECTED] wrote:



In message [EMAIL PROTECTED], Kevin Hildebrand writes:

Eureka...  I've found the problem, there is a missing 'crfree' in
'afs_linux_lookup'.  I will submit this as a bug report.

I'd still love to know the user-land path that ends up triggering this...


it means that you have the same volume mounted (at least) twice and the
volume (directory) information is already in the linux dentry cache.
if the afs client finds that you already have a reference to a volume
and it can't make that reference go away, it returns the existing reference.
as you noticed, this path fails to call crfree().


In fact, we just discovered this on Sunday.  For those watching from the 
sidelines, Kevin's bug report (with a patch) is #54549 in RT.


-- Jeff


Re: [OpenAFS] Re: unable to login via klog

2007-02-13 Thread Jeffrey Hutzelman



On Thursday, February 08, 2007 05:33:57 PM +0530 Srikanth Bhaskar 
[EMAIL PROTECTED] wrote:



[EMAIL PROTECTED] ~]# kas -cell linafs
Password for root:
kas:interactive: Auth. as root to AuthServer failed: user doesn't exist
Proceeding w/o authentication
ka list
Password for root:
list: Auth. as root to AuthServer failed: user doesn't exist
Proceeding w/o authentication
list: caller not authorized calling KAM_ListEntry

any idea what it means when it asks for Password for root:??


kas doesn't use your existing tokens; you have to authenticate when you run 
it.  If you don't use the -admin_username argument to tell it your 
username, it makes a guess based on your local username.  So if you are 
logged in as root, it assumes 'root' is also the name of the admin account 
you want to use.



Re: [OpenAFS] Possible complete brain failure

2007-02-13 Thread Jeffrey Hutzelman



On Thursday, February 08, 2007 08:21:12 PM -0500 Jeff Blaine 
[EMAIL PROTECTED] wrote:



Jeff Blaine wrote:

jblaine:cairo fs lq .
Volume Name   Quota  Used %Used   Partition
u.jblaine   5001855444%  9%
jblaine:cairo

So, fixed.

Looks like I have some reading up on orphans and attach/remove to do.

Thanks all


The AFS docs:

 Orphaned objects occupy space on the server partition, but
  do not count against the volume's quota.


That's someone's guess.  The reality is that the fileserver does not 
actually add up the space used by files in a volume, ever.  It keeps a 
counter in the volume header, which is adjusted as files are created and 
removed.  If something happens which causes damage that then has to be 
repaired (fsck removes a file, a file somehow becomes orphaned, etc.), then 
the recorded usage can differ from the actual usage.
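When the counter drifts, a per-volume salvage recomputes it. A sketch, with hypothetical server, partition, and volume names; as noted elsewhere in this archive, salvaging a single volume does not require shutting down the fileserver:

```shell
# Compare the recorded usage (diskused) against what you expect.
vos examine u.jblaine

# Recompute the volume's usage counter; server/partition/volume
# names here are examples only.
bos salvage -server fs1.example.org -partition /vicepa -volume u.jblaine
```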



Re: [OpenAFS] afs_NewVCache errors

2007-02-02 Thread Jeffrey Hutzelman



On Friday, February 02, 2007 01:01:47 PM +0100 Jasper Moeller 
[EMAIL PROTECTED] wrote:



Hi,

we recently migrated our AFS setup to version 1.4.2. Since then, we have
spurious problems on our linux clients (the windows clients are running
fine). Specifically, after some time, users only see strange permissions
(usually just a row of question marks instead of the normal output in ls
-l)


That's what recent versions of GNU ls do when they can't stat a file, for 
example because you have no access.




Sometimes, it heals itself, usually, users have to log in and log out.
Syslog has entries like:

Feb  2 12:56:46 brummer kernel: afs_NewVCache: warning none freed, using
3000 of 3000
Feb  2 12:56:46 brummer kernel: afs_NewVCache - none freed


This means there are no free vcache entries.  Either there is a leak, or 
you have more than 3000 AFS files in use.  Try giving afsd a larger value 
for the -stat switch.
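The -stat value is passed to afsd at client startup; where the options live varies by distribution. A hedged example (the 8000 figure and the flags shown alongside it are illustrative, not a recommendation):

```shell
# e.g. in the afsd startup options (often /etc/sysconfig/afs or the
# init script itself); raises the vcache/stat cache from the default.
afsd -stat 8000 -dynroot -fakestat
```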



We are using linux kernel 2.6.19 from Fedora Core 6, together with
OpenAFS 1.4.2 on the clients.


You got 1.4.2 to build against 2.6.19?  That's a neat trick.  For 2.6.19 
you should need at least 1.4.3rc1 (and you really want rc2, which fixes 
some build issues and a nasty problem with clients crashing when a vlserver 
is down, but that's not out yet).


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] refresh initial tokens

2007-02-02 Thread Jeffrey Hutzelman



On Friday, February 02, 2007 02:16:27 PM +0100 Ronny Blomme 
[EMAIL PROTECTED] wrote:



I am setting up openafs-1.4.2 client and server on FC4 with
heimdal-0.7.2. I replaced the kas-server with kdc. When I login to this
server with ssh, I get tickets/tokens (via /etc/pam.d/sshd). These
initial tokens can be refreshed once with kinit -R, but the new tickets
have no Flag=R and so these tokens cannot be refreshed:
# kinit -R
kinit: krb5_get_kdc_cred: KDC can't fulfill requested option

When I get renewable tokens with
# kinit --renewable
the Flag=R does not disappear, and I can kinit -R several times.


Not really an AFS question, but yes, this is how it works.
Only renewable tickets can be renewed; if you want the renewed ticket to 
itself be renewable, you will have to run 'kinit -R --renewable'.  Note 
that the KDC may choose not to allow this.
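With Heimdal (which the poster is using), the sequence looks like this; the principal name is a placeholder, and whether each renewal succeeds depends on KDC policy:

```shell
# Get an initial renewable ticket.
kinit --renewable user@EXAMPLE.ORG

# Renew it; without --renewable here, the renewed ticket is not
# itself renewable, which is the behavior the poster observed.
kinit -R --renewable
```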


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Re: obsolete volumes

2007-02-01 Thread Jeffrey Hutzelman



On Wednesday, January 31, 2007 03:44:35 PM -0800 Renata Maria Dart 
[EMAIL PROTECTED] wrote:



Hi Jeff,

Does -showsuid also imply -nowrite, or can it be used with -nowrite
to avoid taking the server out?


Yes, -showsuid also implies -nowrite.
In general, you can always use -nowrite to run the salvager in a read-only 
mode where it will only tell you what it would do, rather than actually 
doing anything.  This eliminates any need to shut down the fileserver; 
however, you only get that benefit if you run the salvager by hand - the 
'bos salvage' command only knows how to handle a limited number of salvager 
arguments.


Remember that if you run the salvager by hand and do not use one of 
-nowrite, -showmounts, or -showsuid, you must shut down the fileserver 
first.  Otherwise you risk damaging volumes.
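Run by hand, a read-only salvager pass over a partition might look like this; the binary path is the traditional Transarc-style location and may differ on your installation:

```shell
# Dry run over one partition: reports what would be done, changes
# nothing, so the fileserver can stay up.
/usr/afs/bin/salvager /vicepa -nowrite

# Report suid/sgid files (implies -nowrite, per the discussion above).
/usr/afs/bin/salvager /vicepa -showsuid
```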


-- Jeff


Re: [OpenAFS] Re: obsolete volumes

2007-02-01 Thread Jeffrey Hutzelman



On Thursday, February 01, 2007 01:55:08 PM -0700 Kim Kimball [EMAIL PROTECTED] 
wrote:





Jeffrey Hutzelman wrote:



On Wednesday, January 31, 2007 03:44:35 PM -0800 Renata Maria Dart
[EMAIL PROTECTED] wrote:


Hi Jeff,

Does -showsuid also imply -nowrite, or can it be used with -nowrite
to avoid taking the server out?


Yes, -showsuid also implies -nowrite.
In general, you can always use -nowrite to run the salvager in a
read-only mode where it will only tell you what it would do, rather
than actually doing anything.  This eliminates any need to shut down
the fileserver; however, you only get that benefit if you run the
salvager by hand - the 'bos salvage' command only knows how to handle
a limited number of salvager arguments.

Remember that if you run the salvager by hand and do not use one of
-nowrite, -showmounts, or -showsuid, you must shut down the fileserver
first.  Otherwise you risk damaging volumes.



Isn't this true iff salvaging one or more partitions?  Salvaging a single
volume doesn't cause 'bos salvage' to shut down the file server ...


Correct.  Salvaging a single volume does not require shutting down the 
fileserver, whether using 'bos salvage' or running the salvager directly.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Problems giving a daemon process permanent access to AFS

2007-02-01 Thread Jeffrey Hutzelman



On Thursday, February 01, 2007 03:57:47 PM -0500 Earl Shannon 
[EMAIL PROTECTED] wrote:



Hello,

I don't know what all your security considerations are, but I'd suggest
you create an IP ACL
in the filespace the daemon needs to access.


Don't do this.  IP-address-based ACL's are not only very insecure but also 
notoriously unreliable.




If the server doesn't have
other users on it
you should be ok.


Sorry, but this is terrible advice.  It is often quite easy for an attacker 
to hijack an IP address; assuming otherwise is asking for trouble.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Re: 1.4.2 client on RHEL5 beta 2

2007-01-31 Thread Jeffrey Hutzelman



On Tuesday, January 23, 2007 02:14:55 PM -0500 Derrick J Brashear 
[EMAIL PROTECTED] wrote:



On Tue, 23 Jan 2007, Rainer Laatsch wrote:


I circumvented the MODPOST issue by patching
/usr/src/kernels/2.6.18-1.2747.el5-i686/scripts/mod/modpost.c
around line 1103 ; replacing 'fatal' by 'warn'


We can't reasonably do that. The problem is the loose binding isn't loose
enough for this check.


No, but with the new AC_TRY_KBUILD test, we should be able to reliably 
determine at build time whether tasklist_lock is exported -- or at least, 
whether a weak reference will cause the build to fail.



Re: [OpenAFS] Re: fs setacl and permissions

2007-01-31 Thread Jeffrey Hutzelman



On Sunday, January 28, 2007 01:10:11 AM +0200 Juha Jäykkä [EMAIL PROTECTED] 
wrote:



So what it really comes down to is this: I claim that, if someone who
owns a directory (i.e. has explicit a privs) defines a subdirectory
and restricts someone else to non-a privs there, it is really a
security breach for that someone else to be able to get a privs
anywhere below it.  But that's exactly what this implicit a privs for
a directory's owner provides.


Good point, but one question immediately arises: why was the other
obvious solution discarded? The other one being as follows. Suppose your
scenario with a teacher, who owns and has a at dir1 plus a bunch of
students, who own dir1/student1, dir1/student2 etc and have a in their
respective directories. Suppose teacher also wants to have a on all
subdirectories of dir1. Now, your problem can be solved by allowing
anyone with a access to dir1 to alter the ACLs on all its subdirs. This
way, if a student removes the teacher from the ACL of dir1/student1, the
teacher can always grant oneself the access again. I fail to see which
security holes this would open, although I wouldn't be surprised if it
does since the regular unix filesystems and chmod/chown do not seem to
allow this either.


You can do that, if 'dir1' is a volume root and all of the student 
directories are part of the same volume.  A better solution is to create a 
separate volume for each student directory, and make all of those volumes 
be owned by the instructor instead of the students.
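A sketch of that layout, with hypothetical cell, server, and volume names: each student gets 'a' on their own directory via the ACL, while the instructor, as volume owner, retains the implicit rights described below:

```shell
# One volume per student, created (and thus owned) by the instructor.
vos create fs1.example.org /vicepa user.course.student1
fs mkmount /afs/example.org/course/student1 user.course.student1

# Grant the student full rights on the ACL; the instructor keeps
# implicit owner rights on every object in the volume regardless.
fs setacl /afs/example.org/course/student1 student1 all
```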


The behavior you're asking for, if I have 'a' on a directory I should be 
able to change the ACL of anything below it, is actually very hard to 
implement.  This is because directories are not containers; they are tables 
which map filenames to vnode numbers.  So, a file or subdirectory isn't 
actually _in_ a directory; it's just referred to by it.  The sort of check 
you want to do would require the fileserver to walk up the tree looking at 
the access rights on each directory above, and that's just not possible.


What we do provide is that the owner of a volume gets implicit rights on 
_every_ object in that volume, regardless of the ACL.  This is consistent 
both with the fileserver architecture and with the model that volumes 
represent the smallest unit of storage for administrative purposes.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] run GNU mailman from AFS?

2007-01-31 Thread Jeffrey Hutzelman



On Friday, January 26, 2007 11:18:18 AM -0600 Christopher D. Clausen 
[EMAIL PROTECTED] wrote:



Anyone have an hints on running GNU mailman (http://www.list.org/) out
of AFS?  Are any AFS specific changes required?  I attempted to search
for such info, but since the openafs lists are mailman lists this was
rather hard.


I think I'd recommend against doing that.  The question you should ask 
yourself is why you want mailman's data to live in the shared filesystem 
instead of on the server's local disk.  There are quite a few practical 
reasons why doing this is hard, not the least of which is arranging for all 
of mailman's components to run with the required credentials.


OpenAFS's lists are managed by mailman with storage on the mail server's 
disk.  The list configuration data and archives are rsync'd into AFS on a 
nightly basis; you can find the latter in 
/afs/.grand.central.org/archive/pipermail


-- Jeff


Re: [OpenAFS] How to use the diff patch

2007-01-31 Thread Jeffrey Hutzelman



On Sunday, January 28, 2007 12:43:10 PM +0100 Jörg P. Pfannmöller 
[EMAIL PROTECTED] wrote:



Hello, I want to compile openafs-1.4.2-src.tar.gz on my system (Ubuntu
6.06 Kernel 2.6.15). Therefore I need to patch the source code with
openafs-1.4.2-src.diff.gz.


Since this is the openafs-info list, I'm assuming you are referring to the 
files distributed by the OpenAFS project, and not from some other source. 
The openafs-1.4.2-src.diff.gz file located in the download directory is a 
patch that turns a 1.4.1 source tree into a 1.4.2 source tree; you don't 
need it if you have downloaded the 1.4.2 source tarball.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Re: obsolete volumes

2007-01-31 Thread Jeffrey Hutzelman



On Monday, January 29, 2007 10:04:30 AM -0800 Renata Maria Dart 
[EMAIL PROTECTED] wrote:



On Mon, 29 Jan 2007, Joe Buehler wrote:


Michael Robokoff wrote:


Is there a way to list out existing volumes that
are not mounted?


The salvager has an option to list mount points:

salvager -showmounts


Hi, I expect volumes are inaccessible while this runs, as during other
salvager operations?


-showmounts implies -nowrite, and so is safe to use on a running fileserver 
without taking the volume offline.  However, if you want to run this on a 
whole partition at once, you should run the salvager directly, as 'bos 
salvage' is not smart enough to know when it does not need to shut down the 
fileserver.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] 1.4.1 Linux client: callbacks on a directory fail to invalidate status info of files in it

2007-01-17 Thread Jeffrey Hutzelman
On Wed, 17 Jan 2007, Rainer Toebbicke wrote:

 When doing an 'rm xxx', the file server does not break callbacks for
 xxx, but only for the directory containing xxx.

Right; if the link count on the file goes to zero (the normal case), then
callbacks are not broken, because since there is no new data for clients
to fetch, there is no point.

 Now, at least in OpenAFS 1.4.1 on Linux this does not invalidate the
 cached information for xxx on another machine. Of course ls xxx*
 or something will fail since the directory is correctly re-read, but
 ls -l xxx and cat xxx still work if previously cached.

It doesn't invalidate the cached state for vnode #nnn, but it _does_
invalidate the directory contents, and the mapping from the name xxx to
that vnode number.

I have never seen the behavior you describe, but if it exists, it is
certainly a bug.



Re: [OpenAFS] Databases AFS (revisited)

2007-01-17 Thread Jeffrey Hutzelman



On Saturday, December 23, 2006 06:14:32 PM +0100 Davor Ocelic 
[EMAIL PROTECTED] wrote:



Looking at [2], which appears to be CMU's class assignment, the
students are supposed to create a Postgres database within their
AFS volumes, without a word of problems that might create.


A bit delayed, but...

That document is over 3 years old; AFAIK it does not represent a current 
assignment for any class.  It represents one assignment for one class, 
developed by the faculty teaching that class.  It should certainly not be 
taken as CMU's position on whether putting database files in AFS is a good 
idea.


Some applications, including database servers, use byte-range locking. 
Depending on your platform, byte-range locks may be handled locally but 
turned into whole-file locks on the server, handled locally but not 
reflected on the server at all, or they may be completely ignored.  UNIX 
applications which depend on working byte-range locks will generally not 
work when the same file is used by multiple AFS client systems at the same 
time; however, many of them will work fine if all programs using the file 
are on the _same_ AFS client, or if there is only one such program at a 
time.


Even without the potential locking problems and performance penalties, 
running a database server or other long-running service backed by data 
stored in AFS (or any non-local filesystem) is fraught with peril.  Such a 
service, running on a perfectly working machine, can unexpectedly lose 
access to its data due to network problems, a fileserver outage, or even 
simple things like loss of tokens.  This is not something I would recommend 
for a production service.



However, short-term, light-duty uses like the postgres assignment you 
mentioned will probably be OK.  In these situations, the user is running 
the database server using his own tokens, the database files are not 
accessed by anything else, and the server only runs as long as the user is 
logged in (in fact, the servers mentioned in this assignment are actually 
not servers at all, but public timesharing systems -- the users have only 
ordinary unprivileged access, and the machines reboot every night).  Since 
the database does not contain any critical data, a network or fileserver 
outage creates an inconvenience but no serious data loss.



-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS-devel] Re: [OpenAFS] Solaris 10 11/06 afs 1.4.2 pam module panic.

2006-12-20 Thread Jeffrey Hutzelman



On Tuesday, December 19, 2006 09:11:44 PM -0500 Dale Ghent [EMAIL PROTECTED] 
wrote:



Okay, I looked into this more and a kind soul at Sun pointed me to  the
new (as of Solaris 10) ddi_cred(9F) man page.

This page details public (yet evolving) interfaces to the otherwise
private cred_t struct. These interfaces seem to have been implemented
for the NFSv4 functionality in Solaris 10.


It should be noted, for those who weren't copied on the private mail and 
don't read the OpenSolaris networking-discuss list, that "evolving" in this 
case really means the same as "committed", which basically means we can 
count on that interface not to change except under very unusual 
circumstances...


-- Jeff


Re: [OpenAFS] How to replicate files on different machines

2006-12-19 Thread Jeffrey Hutzelman



On Tuesday, December 19, 2006 05:12:43 PM +0530 
[EMAIL PROTECTED] wrote:



I'm trying to use 'kinit' and 'aklog' to get admin tokens for accessing
the cell under /afs on my client machine. Though these are installed on
my machine, I'm not able to configure these, since I'm not able to find
the syntax for using 'aklog' in 1.4.2 documentation. As we use 'kas' tool
to create Authentication Database entries, which are later accessed by
'klog' command, is there any similar way to create entries for 'aklog'
and 'kinit'?


If you're setting up a new cell, don't use the kaserver; it's deprecated, 
and for good reason.  Set up a real Kerberos realm instead, and then use 
'kinit' to get Kerberos tickets followed by 'aklog' to get AFS tokens.


If you have an existing cell which uses the kaserver, then 'klog' is the 
correct command.  However, you still will not be able to see into a 
newly-created volume unless you obtain tokens for a user that exists in the 
PTS database and is a member of the system:administrators group.  Of 
course, if you have an existing cell, then you should already have at least 
one such user, and you should also have an existing root.cell volume which 
has a more permissive ACL.
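For a Kerberos-based cell, obtaining admin tokens looks roughly like this; the principal name is an example, and the pts step assumes the entry already exists:

```shell
# Get Kerberos tickets, then convert them to AFS tokens.
kinit admin@EXAMPLE.ORG
aklog

# The user must exist in PTS and be in system:administrators, e.g.:
pts adduser admin system:administrators

# Verify the tokens are in place.
tokens
```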


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Solaris 10 11/06 afs 1.4.2 pam module panic.

2006-12-19 Thread Jeffrey Hutzelman



On Tuesday, December 19, 2006 03:52:39 PM -0800 Carson Gaspar 
[EMAIL PROTECTED] wrote:



  meem wrote:
  Is there a reason they're not using crsetugid() (see ddi_cred(9F)) to
 do this? Seems like if they had, everything would've worked fine.


Well, that interface did not exist prior to Solaris 10, and AFS is quite a 
bit older than that.  We're not using it because so far, nothing has broken 
which has caused us to take notice of its existence.  Now that we know 
about it, we can use it, though of course only on AFS_SUN510_ENV.


Further discussion of the details of this work belongs on openafs-devel.

-- Jeff


Re: [OpenAFS] How to replicate files on different machines

2006-12-18 Thread Jeffrey Hutzelman



On Friday, December 15, 2006 11:56:07 AM +0530 
[EMAIL PROTECTED] wrote:



I'm using OpenAFS 1.4.2 on Fedora 5.
I want to replicate file(s) on 2 machines (both Fedora 5).
How could this be achieved?
Do I need to install OpenAFS server on both the machines, and if this is
the requirement, how could the servers be synchronized?


Replication applies to whole volumes, not individual files, and requires an 
explicit release operation to cause changes to the read/write volume to 
be propagated to the read-only replicas.  AFS does not provide replication 
of read/write data.
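The release-based replication described above, with hypothetical server and volume names:

```shell
# Define read-only replica sites for the volume.
vos addsite fs1.example.org /vicepa software.tools
vos addsite fs2.example.org /vicepa software.tools

# Propagate the current read/write contents to the replicas;
# clients see changes only after this release.
vos release software.tools
```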





Right now I'm facing one other issue.
I have installed server on 1st machine and client on 2nd machine (both
Fedora 5). I have given the cell information for the server on 2nd
machine in /usr/vice/etc/CellServDB, CellServDB.dist and ThisCell.

However, when I start the client, the cell under /afs/ is not displayed
as a directory.

# ls -l /afs/
total 0
?- 0 root root 0 Jan  1  1970 ps2750.pspl.co.in


That is what the output from recent versions of 'ls' looks like when you 
don't have permission to access the file in question.  Most likely that is 
indeed a directory (actually, an AFS mount point), but since you have just 
set up a new cell, its contents are visible only to AFS administrators, and 
you don't have AFS admin tokens.  You will need to acquire tokens using 
tools like 'kinit' and 'aklog' before you can access that directory.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA



Re: [OpenAFS] Undelete support feedback request

2006-12-11 Thread Jeffrey Hutzelman



On Friday, December 08, 2006 01:09:04 PM -0600 Christopher D. Clausen 
[EMAIL PROTECTED] wrote:



Jason Edgecombe [EMAIL PROTECTED] wrote:

Being able to have snapshots of a volume or multiple backups of a
volume from different times.

I think the simplest approach would be to clone a volume and give a
new but predictable name like vol.backup1 or vol.b20061207


Can't this be done right now with vos copy to the vol.b20061207 name?

Or am I missing something obvious?


'vos copy' does not produce a copy-on-write clone.


Re: [OpenAFS] Undelete support feedback request

2006-12-07 Thread Jeffrey Hutzelman



On Thursday, December 07, 2006 01:34:50 PM -0500 Jeffrey Altman 
[EMAIL PROTECTED] wrote:



I would believe that what would be desired is a non-permanent delete
in the r/w volume in which the file/directory would be marked with
a new attribute that means deleted but not reclaimed.

Files/directories would be automatically reclaimed as they bump up
against their quota.

New RPCs would be required to support the undelete operations:

 * purge all deleted but unclaimed files/dirs

 * undelete the specified file/dir

 * list files/dirs that can be undeleted


Actually, we're only talking about files here.  A directory can't be 
deleted in the first place unless it's empty, and the undelete operation 
for an empty directory is the same as the directory creation operation.


Life gets interesting when multiple files with the same name have been 
deleted, but maybe you don't care about that (I would).


The mechanism for representing this would not be trivial to design, given 
requirements like preserving backward-compatibility of the directory format 
and not allowing the vnode index to grow without bound.


Inventing a new volume type would also be problematic, in terms of tracking 
the relationship between the new volume and the RW, handling mount points 
correctly, and also because Matt is proposing a new type of clone with 
different and much more complex semantics.



I suspect that the improvement over traditional backup volumes is 
relatively small, and while it would be a cool feature, I think there are 
probably others on which the time would be better spent.  But hey, it's 
your time, not mine...


-- Jeff


Re: [OpenAFS] Undelete support feedback request

2006-12-07 Thread Jeffrey Hutzelman



On Thursday, December 07, 2006 02:21:00 PM -0500 Jeffrey Altman 
[EMAIL PROTECTED] wrote:



Actually, we're only talking about files here.  A directory can't be
deleted in the first place unless it's empty, and the undelete operation
for an empty directory is the same as the directory creation operation.


A delete operation on a directory filled with files that have been
deleted but not yet reclaimed needs to be marked with the new attribute.
Otherwise, you lose the ability to undelete the files stored within it.


Good point.




Life gets interesting when multiple files with the same name have been
deleted, but maybe you don't care about that (I would).


Not so interesting.  The function to list the entries reports multiple
files with the same name.


... and how do you pick which one you're undeleting?
I mean, I know how to do this at the RPC layer - you just undelete by FID. 
But what is the UI going to look like?





I suspect that the improvement over traditional backup volumes is
relatively small, and while it would be a cool feature, I think there
are probably others on which the time would be better spent.  But hey,
it's your time, not mine...


I completely agree that there are many more important things for time
to be spent on that are causing users real problems and not just
inconveniences.




Re: [OpenAFS] Undelete support feedback request

2006-12-07 Thread Jeffrey Hutzelman



On Thursday, December 07, 2006 02:23:07 PM -0500 Jeffrey Altman 
[EMAIL PROTECTED] wrote:



Jim Rees wrote:

Isn't undelete an application function?  I don't think it belongs in the
file system.  Are there any other file systems that implement it?


Microsoft Windows as an operating system implements it.  The file is
moved on disk to a system directory which indexes the names and
handles the auto-reclaim when space is required.


That works well for local filesystems, and presumably for CIFS.
With AFS, Windows can't possibly guess where to put the undelete directory.

-- Jeff


Re: [OpenAFS] Re: Undelete support feedback request

2006-12-07 Thread Jeffrey Hutzelman



On Thursday, December 07, 2006 05:38:07 PM -0500 Marcus Watts 
[EMAIL PROTECTED] wrote:



Sidney Cammeresi [EMAIL PROTECTED] posted the VMS way.

Not that I'm advocating this is the right way (let alone
have code that implements this), but here's how the same
things could look in Unix:

$ ls -F
foo.txt
$ ls foo.txt/*
foo.txt/1  foo.txt/2  foo.txt/3  foo.txt/4  foo.txt/5


That's not really tenable.  Some operating systems do have objects that 
look like files from some angles and directories from others, but others 
have VFS layers that don't really allow for this.  I very much doubt that 
the Linux dentry cache can handle an object which claims to be a file but 
has children, and even if it did, you'd have an interesting time dealing 
with files which have multiple links, since directories may not have 
multiple aliases in the dentry cache.


Now, replace 'ls foo.txt/*' with 'fs listdeleted foo.txt', and you're fine.
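To make the UI question concrete, a session with such an interface might look like this. Everything here is invented: neither 'fs listdeleted' nor 'fs undelete' exists, and the FIDs and output format are made up purely to illustrate how undelete-by-FID disambiguates multiple deleted files with the same name. The script just prints the imagined transcript:

```shell
#!/bin/sh
# Entirely hypothetical transcript; no such fs subcommands exist.
# Listing by FID lets the user pick which of several deleted "foo.txt"
# instances to restore.
SKETCH='$ fs listdeleted foo.txt
FID 536870918.42.1001   deleted 2006-12-01 11:02   foo.txt
FID 536870918.57.1019   deleted 2006-12-05 16:47   foo.txt
$ fs undelete -fid 536870918.57.1019 -name foo.txt.restored'
printf '%s\n' "$SKETCH"
```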



At least there's no upper-case this way.


There's nothing inherently upper-case about VMS.  It has a case-insensitive 
filesystem, and defaults to listing filenames in upper case.  You'd see 
lots of upper-case in your example, too, if the file were named FOO.TXT.


-- Jeff

