Re: [OpenAFS] Question for admins regarding pts membership output
On 7/14/22 5:49 AM, Dirk Heinrichs wrote:
>> Ed Rude: I think I prefer the new behavior you are suggesting as the default.
> I'd prefer to have the current behavior as default, so as not to break current scripts. Admins can then decide to enhance their scripts as needed instead of being forced to change them because they got broken.
On the other hand, I'd prefer a diminishing number of broken scripts vs. a future of less-than-ideal defaults, especially if some warning is issued ahead of the change. Backwards compatibility has its place: in the past, mostly.
-- Todd Lewis, Middleware Services, uto...@email.unc.edu
"We is confronted with insurmountable opportunities." - Walt Kelly, "Pogo"
Re: [OpenAFS] Does vos release/volume replication always initiates data transfer from RW site?
Given the two-site scenario below, and successful manual replication as outlined by Jeffrey further below, you then have two RO volumes at site B as desired. But if "vos release" is enough of a problem to warrant this manual intervention, won't subsequent releases from the RW site A now be twice as costly? If so, that would justify removing the 2nd replica prior to a release, then doing the release, followed by a repeat of the dump/restore/addsite process to recreate the 2nd replica. This can be scripted, but it's a balance between the extra work for the admin vs. that for the machines/network. -- Todd Lewis
On 08/06/2018 08:28 AM, Jeffrey Altman wrote:
> On 8/5/2018 11:58 PM, Ximeng Guan wrote:
>> Hello,
>>
>> We have one cell covering two sites. The WAN bandwidth between the two sites is relatively low, so we use volume replication to speed up the access.
>>
>> Those replicated volumes are often large in size. So replication to the remote site is an operation whose cost cannot be neglected.
>>
>> Now with RW volumes at site A and their RO replication on servers at site B, we want to bring up a new file server at site B to balance the load. In other words we would like to “offload” a majority of the RO volumes from one server to a different server at Site B, without touching their RW masters at Site A.
>>
>> [...]
>>
>> I wonder if there is a way to directly transfer those RO volumes btw servers at site B, without breaking the data integrity among the RO sites or affecting the atomicity of “vos release”.
> AuriStorFS supports the desired functionality including the ability to copy and move readonly sites between file servers or vice partitions attached to the same file server.
>
> https://www.auristor.com/openafs/migrate-to-auristor/
>
> OpenAFS does not contain explicit functionality but it is possible using
>
>     vos dump
>     vos restore -id -readonly
>     vos addsite -valid
>
> to achieve similar results.
From the source server use "vos dump" to > generate a dump stream of the readonly volume you wish to replicate. > Pipe the output to "vos restore" specifying the destination server, > partition, the readonly volume id and the -readonly flag to specify the > volume type. Finally, use "vos addsite" with the -valid flag to update > the location service entry for the volume. The -valid flag indicates > that the readonly volume data is known to be present and consistent with > other sites. Note that the -valid switch will not mark a site as "new" > if a "vos release" failed to update one or more sites. > > Be careful to use publicly visible addresses when executing these commands. > > Jeffrey Altman ___ OpenAFS-info mailing list OpenAFS-info@openafs.org https://lists.openafs.org/mailman/listinfo/openafs-info
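Jeffrey's three-step sequence can be sketched as a small script. This is a hedged sketch only: the server names (fs1/fs2), partition (a), volume name (pub.docs), and RO volume id (536870913) are placeholders, and you should verify the flags against your own vos version before relying on it.

```shell
#!/bin/sh
# Sketch of the dump | restore + addsite sequence described above.
# All names and ids below are placeholders -- adapt to your own cell.

copy_ro_site() {
    vol=$1 roid=$2 src=$3 dst=$4 part=$5

    # Dump the existing RO volume from the source server and pipe the
    # stream straight into a restore on the destination, keeping the
    # RO name and id and marking the volume type with -readonly.
    vos dump -id "$vol.readonly" -server "$src" -partition "$part" -localauth |
        vos restore -server "$dst" -partition "$part" \
            -name "$vol.readonly" -id "$roid" -readonly -localauth

    # Register the new site in the VLDB.  -valid marks the data as
    # already present and consistent, so no full release is needed.
    vos addsite -server "$dst" -partition "$part" -id "$vol" -valid -localauth
}

# Example invocation (would contact real servers):
# copy_ro_site pub.docs 536870913 fs1.example.edu fs2.example.edu a
```

Remember Jeffrey's caveats: use publicly visible addresses, and note that -valid will not mark a site as "new" if a "vos release" failed to update one or more sites.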
Re: [OpenAFS] convert 'vos dump' output to tar or zip?
Thanks, Jeffrey. In fact we are in the process (arguably have been for at least a decade, but there seems to be actual momentum this time) of shuttering our AFS service. The particular use case I have in mind is a web page users can hit (a) to retrieve a copy of their AFS home volumes as either tar balls or zip files, and (b) to confirm that they no longer need said volumes so that we can begin the deprovisioning process on them. My hope is that by making this part self-service we can focus our limited staff resources on dealing with departmental and other organizations' volumes. Most of our home volumes are relatively small by today's standards. There are ways to traverse a volume without dropping through mount points ('up' can do it, and I've probably got some perl code that does it too), feeding the list of found files to tar and/or zip, and that's probably what I'll end up using. But if somebody had a blob of vos-dump-handling code lying around I'd feel silly not to have asked. I understand that we'd be losing mount points, ACLs, etc., but for this case that's alright. Fortunately, we'll probably avoid the need to delve into a pile of vos dump backups. On a personal note, I'll really miss AFS. It's one of my personal top three technologies I've gotten to work with in my 34+ years here. The way data governance and service delivery are managed by location in a long-lived file system, instead of tied directly to fleets of always aging application hosts, is something we never clearly communicated to management. I can't tell you how many hours I've spent in meetings discussing problems that simply would not have existed had AFS been in play. The community has been extremely helpful and professional, and I'm grateful for the opportunities I've had due to their efforts. For years -- decades actually -- we've joked about C-level executives who've declared AFS dead yet didn't last as long here as AFS.
Lately I've been saying that when I retire I'm going to take AFS with me. Maybe I'll stand up a cell at home on some Raspberry Pies and make good on that. Alas, life, unlike this email, is short...
On 01/30/2018 05:07 PM, Jeffrey Altman wrote:
> Hi Todd,
>
> It's been a while ...
>
> On 1/30/2018 4:20 PM, Todd Lewis wrote:
>> Has anybody a tool to convert "vos dump" output to a tar or zip format?
> To the best of my knowledge there is not such a tool. Part of the reason a tool doesn't exist is that there would not be a lossless conversion from dump to either tar or zip. Assuming that the dump contains the full contents of a volume (not an incremental) it would be possible to transfer the directory tree, file data, and symlinks. However, mount points and AFS specific metadata including ACLs, policy bits, etc. would be lost.
>> Is the "vos dump" format usable by anything other than "vos restore"?
> Yes. dumpscan and restorevol can process dump files. Teradactyl's TiBS backup system can, I believe, import AFS dumps and restore to non-AFS file systems. There have also been several uncompleted projects that permit dumpfiles to be mounted as if they were ISOs on Linux and Windows.
>
> Perhaps it would be useful if you could discuss the end goal of the conversion. For example, you might describe the problem as: UNC is shutting down its cell and we have 20+ years of volume dumps as backups. We would like to be able to access the contents of these backups without deploying a new cell.
>
> Sincerely,
>
> Jeffrey Altman
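For what it's worth, the mountpoint-aware traversal mentioned above can be sketched in shell: walk the tree by hand, use "fs lsmount" to recognize mount points, and feed the surviving paths to tar. An untested sketch, under the assumption that "fs lsmount" exits zero only when its argument really is a mount point; the paths are hypothetical.

```shell
#!/bin/sh
# Sketch: list every file in one volume without descending through
# mount points, suitable for feeding to tar.  Assumes "fs lsmount"
# exits 0 only when its argument really is a mount point.

list_volume_files() {
    for entry in "$1"/* "$1"/.*; do
        # Skip . and .., and literal globs left by empty directories.
        case ${entry##*/} in .|..|'*'|'.*') continue ;; esac
        if [ -L "$entry" ]; then
            # Report symlinks as-is; don't follow them.
            printf '%s\n' "$entry"
        elif [ -d "$entry" ]; then
            # Don't cross into other volumes via mount points.
            fs lsmount "$entry" >/dev/null 2>&1 && continue
            # Recurse in a subshell so the loop variable isn't clobbered.
            ( list_volume_files "$entry" )
        else
            printf '%s\n' "$entry"
        fi
    done
}

# Usage (hypothetical home volume path; -T - is GNU tar):
# list_volume_files /afs/example.edu/home/someuser | tar -czf home.tgz -T -
```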
[OpenAFS] convert 'vos dump' output to tar or zip?
Has anybody a tool to convert "vos dump" output to a tar or zip format? Is the "vos dump" format usable by anything other than "vos restore"? -- +--+ / todd_le...@unc.edu 919-445-0091 http://www.unc.edu/~utoddl / / "I've had a perfectly wonderful evening. / / But this wasn't it." - Groucho Marx / +--+
Re: [OpenAFS] Re: IRIX support/documentation
On 11/03/2014 10:01 PM, Andrew Deason wrote:
> [...] Anything mentioning AIX could probably be moved around to a less prominent place, if that helps, but maybe not actually removed.
We're still running our servers on AIX.
[OpenAFS] tool to facilitate ACL cleanup
I'm looking for a tool or procedure that will look at ACLs in a directory tree in AFS and suggest (possibly new) groups, memberships, and permissions to help straighten out the mess that has grown there over the years. We have an aging cell, and as projects have come and gone, we've accumulated some rather ad hoc ACLs, some using groups, some not, some with users who are no longer around... I try to discourage putting individual users in ACLs in project group space, preferring instead to use groups in ACLs and put users in groups. But many times individuals were added directly to one or more directory ACLs because it was easier at the time. Some of these have become all but unmanageable. If anyone has suggestions for strategies or specific tools to facilitate cleaning up small to medium sized nightmare forests of ACLs, I'd love to hear about them. -- +--+ / todd_le...@unc.edu 919-445-0091 http://www.unc.edu/~utoddl / / Acupuncture is a jab well done. / +--+
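One low-tech starting point -- offered as a sketch, not the requested polished tool -- is to inventory every (principal, rights) pair in the tree with "fs listacl" and sort by frequency; rare entries and departed-user names float to the top as cleanup candidates. The awk filter assumes the usual "fs listacl" layout of one "name rights" pair per entry line.

```shell
#!/bin/sh
# Sketch: count how often each (principal, rights) pair occurs on the
# ACLs of every directory under a starting point.  Assumes "fs listacl"
# prints entry lines as two fields: a name and a rights string.

acl_inventory() {
    find "$1" -type d -print | while IFS= read -r d; do
        # Keep only two-field lines whose second field looks like an
        # AFS rights string (some subset of rlidwka).
        fs listacl "$d" 2>/dev/null |
            awk 'NF == 2 && $2 ~ /^[rlidwka]+$/ { print $1, $2 }'
    done | sort | uniq -c | sort -rn
}

# Usage (hypothetical project tree):
# acl_inventory /afs/example.edu/dept/project
```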
[OpenAFS] File locking
We have a commercial application we've been running for years on an openafs-1.4.14 client on a RedHat Linux box. This week we upgraded the client to 1.6.9, but quickly had to revert. The difference has to do with file locking. The section of strace output below shows the behavior in the old client:
open(/afs/[OurCell]/pkg/[application_path]/msgdir.msg, O_RDONLY) = 8
fcntl(8, F_SETFD, FD_CLOEXEC) = 0
fcntl(8, F_GETLK, {type=F_UNLCK, whence=SEEK_SET, start=0, len=0, pid=0}) = 0
fcntl(8, F_SETLK, {type=F_RDLCK, whence=SEEK_SET, start=0, len=0}) = 0
after which the application proceeds with total success. The corresponding section of the 1.6.9 client's strace of the same application on the same machine shows:
open(/afs/[OurCell]/pkg/[application_path]/msgdir.msg, O_RDONLY) = 8
fcntl(8, F_SETFD, FD_CLOEXEC) = 0
fcntl(8, F_GETLK, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0, pid=0}) = 9
close(8) = 0
write(1, \n, 1) = 1
write(1, UNABLE TO OPEN/READ MESSAGE FILE..., 53) = 53
After the somewhat misleading message, the application closes. The difference starts in the 3rd line, where the old client reports no extant locks on the file, but the 1.6.9 client reports an existing exclusive lock. For what it's worth, other 1.6.9 clients exhibit the same behavior, so it's not as though this one client had an outstanding lock on the file. Was there some change in file locking semantics that would make sense of this? Does this application tickle a corner-case error in openafs's file locking, or does more rigorous lock handling in the newer client expose a bug in the application? I have tried to replicate the failure with a very simple C program with no success, but that mostly indicates I'm not sure what I'm testing for. Thoughts? Suggestions? Thanks, -- todd_le...@unc.edu
Re: [OpenAFS] Re: File locking
Andrew, The suggested patch was applied, installed, and our application now works correctly again with the new client. Thank you very much for your timely response. Looking forward to this patch making its way to release soon. -- todd_le...@unc.edu
On 07/17/2014 11:59 AM, Andrew Deason sent:
> On Thu, 17 Jul 2014 10:07:07 -0400 Todd Lewis todd_le...@unc.edu wrote:
>> Was there some change in file locking semantics that would make sense of this? Does this application tickle a corner case error in openafs's file locking, or does more rigorous lock handling in the newer client expose a bug in the application? I have tried to replicate the failure with a very simple C program with no success, but that mostly indicates I'm not sure what I'm testing for.
> I assume the path you gave was for an RO volume, yes? In that case, no, it's just a mistake; specifically, my mistake (sorry). It's trying to return the error EBADF (9) to you, which is wrong, and it's returning that code incorrectly, which makes this a bit more confusing. There was some code added a while back to skip some processing for locks on RO volumes. But for some reason I added a Linux-specific check much earlier, before the platform-agnostic check, which makes this fail for F_GETLK requests in addition to F_SETLK ones. (For F_SETLK, we should indeed fail with EBADF here.) I'm not sure why I did that, since this should have been handled by the platform-agnostic code; none of my notes or comments about this seem to say anything about it. And in that Linux-specific check, the error is just plain returned incorrectly (which just happens from time to time, since Linux has a very specific different way of doing this). I think I just didn't notice this when running it originally (and almost missed it just now), because I saw a '9' get returned and assumed that's what was supposed to happen. But it's not, of course; fcntl is supposed to return -1 and set errno to 9.
A proposed fix is here (gerrit 11316): http://git.openafs.org/?p=openafs.git;a=commitdiff_plain;h=93451d74474bcd8ed9e0f59cf690d3d61c3022f9 If you are able to test code changes in your client, give that a try. -- +--+ / todd_le...@unc.edu 919-445-0091 http://www.unc.edu/~utoddl / / A picture is worth a thousand words, or in the case of / / modern art, the same word repeated a thousand times. / +--+
Re: [OpenAFS] Re: Moving Magic Trio to another domain
On 10/02/2013 01:42 PM, Jukka Tuominen sent:
> servers, and a libnss-afs package to pass on (afs?) metadata (+ other
Surprised you got libnss-afs to build against 1.6.*. The openafs headers are missing some key stuff.
> nsswitch.conf BTW
> passwd: afs files
> group: afs files afspag
> shadow: files
Probably not the problem, or even related to the problem, but I'd put files before afs in nsswitch.conf. Also, are you running nscd on both hosts? libnss-afs is not thread safe otherwise. -- +--+ / todd_le...@unc.edu 919-445-0091 http://www.unc.edu/~utoddl / / He has no enemies, but is intensely disliked / / by his friends. - Oscar Wilde / +--+
Re: [OpenAFS] Re: Naming of backup and up commands
Andrew Deason adea...@sinenomine.net writes:
> Either way, sure, makes sense to me. But the people that actually use those commands really do need to say something, even if it's just yes, sounds good.
We use 'up' in one of our administrative scripts. Dealing with a rename would be trivial to the point of entertaining. A one-time inconvenience for long-term overall improvement is not a problem for us. (I.e., we don't use 'up' in our Entry, Descent, and Landing sequence.)
Re: [OpenAFS] Re: Moving Magic Trio to another domain
On 09/24/2013 03:23 PM, Andrew Deason sent:
> If you want to copy the data from a 'source' cell to a 'destination' cell and you can have both available at the same time, you can use the 'up' tool to copy the directory tree while preserving all of the afs-specific information and avoiding endless loops.
'up' is only partially helpful here. You can either have 'up' stop copying at mountpoints, so you're only copying the part of the tree that's in an individual volume, or you can have it copy the entire tree below some point, in which case mountpoints in the source become regular directories in the target, and everything gets pulled up into the target volume. I rather imagine that's where the name up comes from, but I don't really know. Its saving grace, of course, is that it copies ACLs as it goes. I don't think 'up' will detect or avoid endless loops, but it's been a while since I've been in that part of the code. -- +--+ / todd_le...@unc.edu 919-445-0091 http://www.unc.edu/~utoddl / / With her marriage she got a new name and a dress. / +--+
Re: [OpenAFS] Re: Major synchronization error
On 11/13/2012 11:53 AM, Jack Hill sent:
> Please ignore this; it was user error. I can create users if I specify their id. I cannot let the ptserver choose it since our current maxid is 2147483647.
Back before Linux could handle uids over 64k, we recycled pts ids. We had a simple Perl daemon that would look at the allocated pts ids on startup, then in response to requests would hand back numbers in order corresponding to the holes between some arbitrary low (I think 1000) and 64k-2. This allowed us to keep our uids below 64k until Linux grew up. You're welcome to that code if you want it. Second point: I'm pretty sure that the pts maxid is just a number that would be better called nextid. It isn't necessarily the highest id number in your pt db. I think we ran into this at one point, though you are in a better position to try it than I am, as your maxid is basically useless at this point. Anyway, I believe you can set your maxid (pts setmax ###) to some lower number, and createuser will attempt to use unused numbers starting at that point going upward. Try setting it to, say, your pts id, then attempt to create a dummy pts entry. You may be pleasantly surprised. Or you may find I'm wrong. Either way, let us know what you find! Good luck, -- +--+ / todd_le...@unc.edu 919-445-0091 http://www.unc.edu/~utoddl / / They never open their mouths without subtracting from / / the sum of human knowledge - Thomas Brackett Reed / +--+
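The setmax experiment suggested above might look like the sketch below. All names and numbers are made up, and you should check the "pts setmax" syntax (modern pts takes -user/-group flags) and your cell's actual id usage before trying it against a real ptdb.

```shell
#!/bin/sh
# Sketch of "lower the maxid, then probe with a dummy entry".
# The id 20000 and the probe name are examples only.

probe_low_ids() {
    # Current allocator state (better thought of as "nextid").
    pts listmax

    # Point the user-id allocator back at a lower, mostly-unused region.
    pts setmax -user 20000

    # pts should now pick the first unused id above the new max.
    pts createuser -name dummy.probe
    pts examine -nameorid dummy.probe

    # Clean up the probe entry.
    pts delete -nameorid dummy.probe
}

# probe_low_ids    # would modify a real ptdb -- run with care
```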
Re: [OpenAFS] Question about directory names
If you do
fs lsm /afs/.Cell-Name /afs/Cell-Name
you'll see something like
'/afs/.Cell-Name' is a mount point for volume '%root.cell'
'/afs/Cell-Name' is a mount point for volume '#root.cell'
The difference is '%root.cell' will hit the read-write copy of root.cell, while '#root.cell' will hit a read-only copy if one is available. Once you hit a read-write, you'll get read-write volumes from there on down. It looks as though your cell's root.cell volume has a couple of entries that are not in the read-only replicas yet. Once somebody does a "vos release root.cell" you will see the other entries in the normal path. In short, it's an easy way to see the all-writable view of your cell's volumes.
On 11/29/2011 01:44 PM, Hunter McMillen sent:
> Hi everyone, This is probably a real beginner question, but why do some cellnames begin with a period? I couldn't find anything about this in the docs, or perhaps I missed it, but I was hoping someone could clear up my confusion. example: There are two entries for the cell name, one with a '.' in front of it and one without.
> ls /afs/Cell-Name
> ls /afs/.Cell-Name
> The cell without the '.' has folders named: service and users inside it. The cell with the '.' has folders named: service, users, bin, and share. Thanks, Hunter McMillen
-- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / A plateau is a high form of flattery. / +--+
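The '%' vs. '#' distinction can also be created deliberately with "fs mkmount": the -rw flag produces a '%' (read-write) mount point, while omitting it gives the regular '#' kind. A sketch with a made-up cell name:

```shell
#!/bin/sh
# Sketch: create and inspect the two mount-point flavors.
# The cell name and paths are examples only.

demo_mount_flavors() {
    # Regular mount point ('#'): prefers an RO replica when available.
    fs mkmount -dir /afs/.example.edu/demo -vol root.cell

    # Read-write mount point ('%'): always resolves to the RW volume.
    fs mkmount -dir /afs/.example.edu/demo.rw -vol root.cell -rw

    # lsmount reports each volume with its '#' or '%' prefix.
    fs lsmount /afs/.example.edu/demo
    fs lsmount /afs/.example.edu/demo.rw
}
```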
Re: [OpenAFS] OpenAFS building for RHEL6
>>> What is your desired end product, and what instructions are you following?
>> A functioning afs client. The README that was in the source bundle.
> The README really just provides information on how to build the source package. Information on how to go from a built source tree to a running client is contained in the Quick Start Guide, which is available from docs.openafs.org
Those two sentences would be a great addition to the src tarball README.
> However, if you just want a running client, rather than a development tree, I would strongly encourage you to use the SRPM. The packaging in the SRPM is more complete and better tested than what you get from a make install. In particular, it will install and configure a proper RC script so that the client is correctly started and stopped. There should be an SRPM available for 1.6.0pre4 from the OpenAFS website; you can build this by simply using the 'rpmbuild' tool.
Should be, but alas, http://dl.openafs.org/dl/openafs/candidate/1.6.0pre4 shows none. In fact, finding 1.6.0pre4 is quite a trick. I'm not sure you can do it starting from http://openafs.org/ without manual URL tweaking. Maybe I'm click deficient, but I still haven't found the rpm to configure yum. Finally did it by hand. -- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / He often broke into song because he couldn't find the key. / +--+
Re: [OpenAFS] OpenAFS building for RHEL6
On 04/05/2011 10:37 AM, Todd Lewis sent:
> Maybe I'm click deficient, but I still haven't found the rpm to configure yum. Finally did it by hand.
Just found it. D'oh! (It's called openafs-repository in http://dl.openafs.org/dl/openafs/1.4.14/.) -- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / If you don't pay your exorcist you get repossessed. / +--+
Re: [OpenAFS] OpenAFS building for RHEL6
On 04/05/2011 10:48 AM, Simon Wilkinson sent:
> On 5 Apr 2011, at 15:37, Todd Lewis wrote:
>> In fact, finding 1.6.0pre4 is quite a trick. I'm not sure you can do it starting from http://openafs.org/ without manual URL tweaking.
> It's the first item in Recent OpenAFS News on the right hand frame on that page.
So it is. For some strange reason I kept fumbling around in the Downloads section on the left side. -- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / In democracy it's your vote that counts; / / In feudalism it's your count that votes. / +--+
Re: [OpenAFS] Re: Supergroups and ACL inheritance
On 02/25/2011 05:54 PM, Andrew Deason wrote:
> A more accurate way to think about it is to realize that the root directory of a volume does not have a parent directory in the usual sense. Since you can create many mountpoints to the same volume from anywhere in /afs (and even from different cells), the alleged parent directory may be different, depending on which mountpoint you are arriving from. Also realize that the root directory of a volume (and its ACL) is created when you create the volume, not when you create the mountpoint. So, at that time, it certainly has no 'parent directory' to get an ACL from.
For completeness, this seems a good time to mention that the owner of a volume's root directory has an implicit administer (a) right on all directories in the volume. This is largely so that users can dig their way back out of inadvertently dug ACL black holes without having to track down an otherwise happy system:administrators member. Also, the owner of a file has implicit read (r) and write (w) rights on a file if that user has insert (i) rights on its parent directory. (from http://wiki.openafs.org/AFSLore/UsageFAQ/#2.21%20What%20meaning%20do%20the%20owner%2c)
Re: [OpenAFS] Re: Supergroups and ACL inheritance
On 02/25/2011 12:11 AM, Andrew Deason wrote:
> On Thu, 24 Feb 2011 18:01:10 -0700 Thomas Smith theitsm...@gmail.com wrote:
>> Groups: * group0 - The primary group for office location group0. * group0:admins - Office administrations for group0.
One more point of confusion to consider: Your group memberships are checked when you get your AFS token. If you're making changes to group memberships, I believe you won't pick up the changes until you get new tokens.
Re: [OpenAFS] Package Management in AFS
On 12/20/2010 12:46 PM, Dirk Heinrichs sent:
> Hi, I'm currently thinking about a good way to deploy software packages in (eventually replicated) AFS volumes.
Probably not what you're looking for, but we have developed a tool in Perl to help with the AFS-specific bits of building and deploying traditional untar/configure/build/install packages into replicated AFS volumes across multiple architectures. We use it to maintain a couple of hundred pieces of software from source. Invocation with no parameters shows this:
$ pkg
Usage: pkg create|delete|help|link|sys|split|unsplit|replicate|unreplicate [pkgname-ver]
where:
  create       create a package
  delete       delete a package
  help         print this message
  link         link to package source
  sys          setup @sys dirs/links
  split        split package volume
  unsplit      reverse of split
  replicate    replicate a package
  unreplicate  unreplicate a package
The idea here is that each package has a src tree, build trees linked back to the source, and install trees for each architecture we support. It also sets up appropriate PTS groups for maintainers of each package. It undoubtedly has a few hard-coded bits specific to our site (build machines for various architectures, file servers, etc.), but as it's one file they shouldn't be hard to find/fix. If you want more detail, drop me a line or read the source; it's here: /afs/isis.unc.edu/pkg/admin/bin/pkg Cheers, -- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / Does the name Pavlov ring a bell? / +--+
Re: [OpenAFS] GiveUpAllCallBacks callers
That was my thought too, but I'd hard-code a sunset on using the technique. Pick a date beyond which we just skip the test so clients quit doing it, and that wart can be removed from the source any time thereafter.
On 12/16/2010 11:51 AM, Steve Simmons sent:
> Having read the entire discussion up to this point, I find the alternate version 3404 to be an acceptable workaround. There are lots of things in various bits of software which look around for condition X to see if Y is acceptable. Lacking a proper solution (more on that in a separate note), this seems quite reliable. Derrick notes it's dicey software release and engineering; yeah, I agree. So let's comment the crap out of it in the code. Summary: go for it. Steve
-- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / Those who jump off a Paris bridge are in Seine. / +--+
Re: [OpenAFS] GiveUpAllCallBacks callers
On 12/13/2010 02:52 PM, Derrick Brashear wrote:
> An alternate version of 3404, patchset one, includes an implementation which cheats, namely, within a few months of GiveUpAllCallBacks being fixed, the GetStatistics64 RPC was introduced. Its existence would be used as a sentinel that GiveUpAllCallBacks was safe to issue. However, this was objected to as it ties unrelated RPCs together.
Indeed it does, but it's also a workable pragmatic solution to a real problem. I could see putting a sunset date on the technique. You might also log a warning based on it so vulnerable admins could get a clue. This seems like a great way to move forward to me, in spite of the inelegance of tying unrelated RPCs together. -- Todd Lewis
Re: [OpenAFS] maximum number of files per directory??
On 08/11/2010 05:39 AM, Simon Wilkinson sent:
> But that really depends on your real world - we have a number of directories here that have 60,000 or so objects in them. In fact, to hit a limit as low as 16,000 entries, every filename would need to take up 4 slots, which means every name would have to be more than 80 characters long.
I once worked on a project that had ~800,000 files with names between 120 and 199 characters each. (Not in AFS, and not in the same directory!) The first 8 characters of the files' names were held in common between large groups of files, and performance was horrific. It wasn't until after the project was over that we discovered that the hierarchical storage system we were using employed a hash table to manage file names. It didn't hash the entire file name, though; to improve the performance of the hashing operation itself, it only hashed the first 6 characters. That gave us about 200 hash buckets for 800,000 files. Not that you should care about any of this. It's just to illustrate that knowing (or not knowing, in our case) the characteristics of your storage system may have huge impacts on how effectively you use it. Good luck with that. -- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / I have never killed a man, but I have read many / / obituaries with great pleasure. - Clarence Darrow / +--+
Re: [OpenAFS] user home directory replication
On 07/14/2010 01:43 PM, Jonathan Nilsson wrote:
> I would like to replicate home directories (and other AFS volumes that are primarily accessed read/write) for the purpose of faster disaster recovery in certain common cases, such as local hardware failure on an AFS File Server.
Others have offered good advice based on their years of experience that should be helpful in making this work. By this I mean attempting to take advantage of volume replication in a disaster recovery strategy as a proxy for high availability. My personal opinion here: Don't do this. At least not until you have more experience with normal AFS. AFS doesn't provide high availability, but you can approach it by using more robust hardware. Whether such a disaster recovery scheme is feasible depends largely on the hardware resources available to you vs. the number and size of home and other volumes you intend to support this way. However, very few sites actually do this, which should tell you something... In general, confusing these three different things -- volume replication, disaster recovery, and high availability -- combined with insufficient experience is likely to _increase_ your need for the latter two. Everything you've posted so far seems to indicate you take your system administration seriously, so I've no reason to doubt your abilities or the effort you are willing to put into making this work. However, AFS was not designed to work this way. I think your efforts would be better spent in putting up a normal AFS cell and building your experience before attempting to bend it this way. However you eventually decide to go, you have my full support (for what that's worth), but I think we (the OpenAFS community) would be remiss for not at least showing the other side of this coin. I'll go away now. Good luck, -- Todd Lewis uto...@email.unc.edu
Re: [OpenAFS] hard link behavior
On 07/07/2010 12:13 PM, Jaap Winius sent:
> That's interesting for sure, but it sounds as though you're not familiar with disk-based backups, because that's no substitute for what I have in mind.
Sounds like you're thinking about the hardlink-fest that is the heart of BackupPC. You're right: OpenAFS isn't a good place to put such storage.
> All I'm suggesting is that a specific switch be added so that one limitation can be traded for another and new things can become possible. [...] Certainly a good idea -- that's something that I've wondered about before -- but I see no reason why both of our ideas should not be made possible.
Per-file ACLs would make hardlinks within a volume much simpler, trivial even. In fact, it's probably a prerequisite for any sane implementation. Not seeing the reason something is hard doesn't make it not hard. This touches many layers and breaks quiet assumptions in lots of code, and probably on the wire too. I'd personally like to see per-file ACLs (with the side effect you describe), but I'd like to see other things first, like server-enforced encryption options, etc. It's not an unreasonable wish-list item. Look at the road map and figure out where it fits in with the other items and the coding cycles available. My guess is, don't hold your breath waiting for this to show up. -- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / A hangover is the wrath of grapes. / +--+
Re: [OpenAFS] Getting started with OpenAFS
On 04/30/2010 07:07 AM, Mike Pliskin wrote: - speed is important as well as a way to access any file from anywhere (but ok to degrade speed if replicas aren't synced yet) Your "replicas aren't synced yet" phrase indicates you're still under the assumption that so many people start with -- that the ro replicas are somehow automatically synced in near real time from the single rw volume. They are not. It's a manual (or scripted) process which pushes the volume image from the rw server to the ro replicas. Think of it more as a publishing operation. Everybody else is reading today's edition of some ro volume, while a few people are preparing the next edition. It could involve lots of changes across the volume that don't make sense separately, but only as a complete set of changes. Once you've got the contents of the rw volume consistent and the changes are complete, then you release that volume. The ro replicas get the changes in one go. If your users are geographically distributed, then the hope is that the ro volume they are using is on a server that is relatively close to them, so the data has only to cross slow/long network links once -- during the release operation -- rather than as they read data. Writes/updates still have to get to the single rw volume across whatever network hops it takes. Note also that the path to a rw volume will differ from the path to its ro replicas (unless you've configured the client to use only rw volumes throughout your cell). Todd
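The publishing model described above boils down to a short command sequence. A minimal sketch with hypothetical volume, server, and partition names -- the `run` helper only prints each command, so the sequence can be reviewed as documentation rather than executed against a real cell:

```shell
#!/bin/sh
# Print-only helper: echoes each command instead of running it.
run() { printf '+ %s\n' "$*"; }

VOLUME=home.alice            # hypothetical RW volume name
ROSERVER=afs2.example.com    # hypothetical replica server
PARTITION=/vicepa

# One-time setup: register a read-only site for the volume.
run vos addsite "$ROSERVER" "$PARTITION" "$VOLUME"

# "Publish": once the RW contents are consistent, push a complete
# snapshot to every registered RO site in a single release.
run vos release "$VOLUME"
```

Until that `vos release` is issued, readers of the RO replicas keep seeing yesterday's edition, which is exactly the point.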
Re: [OpenAFS] Modifying the output of vos commands to include server UUIDs
On 04/13/2010 09:26 AM, Jeffrey Altman wrote: An example of vos examine -printuuid output: [...] An example of vos listvldb -printuuid output: [...] One alternative output format that could be used when the -printuuid option is specified is found below. vos examine -printuuid: [...] vos listvldb -printuuid: Please offer your opinions. Clearly the multi-line form is easier for humans to read, and the related-data-on-one-line form is far simpler for scripts to parse. By far. In both cases. Is there a place on the ballot to vote for... both, with a switch? Otherwise, I don't care. I'm screwed sooner or later either way. -- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / In democracy it's your vote that counts; / / In feudalism it's your count that votes. / +--+
Re: [OpenAFS] Shared r/w access to numerous sqlite databases: an appropriate application for AFS?
On 04/07/2010 10:10 PM, Brandon Simmons wrote: I have a web application in which I would like many client web-servers to be able to read and write to many separate and modestly-sized sqlite databases, exported by a master server. Each database corresponds to an account, so we might have several concurrent users accessing an individual DB every few seconds. I have been testing with NFS and haven't had any problems, but I'm concerned with issues of file-locking and caching problems, which the whole internet seems to be warning about. Would AFS be appropriate for this? In a word, no. If your multiple clients were on the same host, then that host could enforce the locking sqlite attempts, but from multiple hosts you lose. I happen to be facing exactly that same problem at the moment, so I'm hopeful (doubtful, but hopeful) someone will step up and prove me wrong. Thanks for any advice, Brandon Same here, only not the Brandon part. -- +--+ / todd_le...@unc.edu 919-445-9302 http://www.unc.edu/~utoddl / / He had delusions of adequacy. - Walter Kerr / +--+
Re: [OpenAFS] Shared r/w access to numerous sqlite databases: an appropriate application for AFS?
On 04/08/2010 04:06 PM, Brandon Simmons wrote: For instance I envision a handful of clients on different machines each writing to a single sqlite DB every few seconds; would this defeat AFS's caching scheme? Thanks for the thoughtful responses. Every few seconds your cached data is going to be invalidated, which will make sure your server stays thoroughly utilized. Sqlite shines when you don't need a database daemon running on some central server and access requirements fit file ACLs. You don't have that. You need to distribute at a higher level. Bite the bullet and set up a real db you can connect to and write from multiple clients, and let it do what it's designed to do: arbitrate writes to maintain data integrity. AFS doesn't solve this problem (nor does NFS). Sorry. -- todd_le...@unc.edu
Re: [OpenAFS] user-visible change suggestion for fs setacl
On 12/17/2008 04:02 AM, Felix Frank wrote: On Wed, 17 Dec 2008, Erik Dalén wrote: On Wed, Dec 17, 2008 at 03:09, Stephen Joyce step...@physics.unc.edu wrote: On Tue, 16 Dec 2008, Tom Maher wrote: What's the semantics for negative ACLs? For example, fs sa . system:authuser rl fs sa . badguy +rl -negative I'm guessing that'll give badguy negative rl bits. Makes sense to me. Should 'fs sa . badguy -rl' implicitly give him negative rl bits, if he doesn't have anything already? That doesn't make sense to me. I'd suggest that -perm should never add permissions, only remove. So it should just clear the perms if they're set and do nothing if not. To add the negative flags, do what you suggested above. My $0.02. Sounds very reasonable to me. My vote for implementing it like this. Still doesn't feel devoid of ambiguity, though:

fs sa . user +rl -negative   # sets negative bits
fs sa . user -rl -negative   # takes away negative bits?
fs sa . user -rl             # takes away both negative and positive bits?
                             # or positive only? what about neg. then?

To add more confusion, I find another model conceivable:

fs sa . user +a   # always removes negative bit, adds positive bit
fs sa . user -a   # always sets negative bit, removes positive bit

the drawbacks being painfully obvious. In all, with ACLs having one degree of higher complexity than unix permissions, there probably is no way to make this syntax 100% intuitively akin to chmod's. Thus, the original proposal to use postfix +/- might communicate the distinction? Regards Felix Doesn't seem ambiguous to me at all. If you don't say -negative, you aren't messing with the negative ACLs; if you do, you're leaving the positive ACLs alone. I'm pretty sure most folks are not even aware of negative ACLs anyway, and those who use them intentionally are (I'm guessing) extremely rare creatures. My two cents. -- todd_le...@unc.edu
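For readers who haven't met negative ACLs: the existing `fs setacl` already accepts a `-negative` switch, kept separate from the positive list exactly as argued above. A print-only sketch with a hypothetical directory and user names (the `run` helper echoes commands rather than executing them, since this needs a live cell to do anything real):

```shell
#!/bin/sh
# Print-only helper: echoes each command instead of running it.
run() { printf '+ %s\n' "$*"; }

DIR=/afs/example.com/proj    # hypothetical directory

# Grant read+lookup on the normal (positive) ACL.
run fs setacl -dir "$DIR" -acl system:authuser rl

# Deny badguy read+lookup via the negative ACL; the positive
# entries are left untouched.
run fs setacl -dir "$DIR" -acl badguy rl -negative

# Inspect both the normal and negative rights lists.
run fs listacl "$DIR"
```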
Re: [OpenAFS] Openafs broken on Ubuntu Hardy ?
On 10/14/2008 04:24 PM, Madhusudan Singh wrote: On Tue, Oct 14, 2008 at 12:12 PM, Jeffrey Altman [EMAIL PROTECTED] wrote: Madhusudan Singh wrote: $ cd /afs/YYY.edu/users/X/Y/Z/XYZABC bash: cd: /afs/YYY.edu/users/X/Y/Z/XYZABC: Permission denied This looks like the user you authenticate as simply doesn't have the required permissions to access the directory. Impossible. I can ssh into the server with the same username and password without any issues. Does your local client have a token for the XYZABC user? I was dearly hoping to avoid such questions by mentioning this rather prominently (along with o/p's of tokens and klist) in my initial post on the thread. Please refer to that. Patience, friend. We're only trying to help. Do you want your problem fixed or what? Jeffrey's a busy guy; whether he'll have the time to refer back to old messages is not to be taken for granted. I'm not such a busy guy, so I did refer to your initial post. It contains much detail that many initial reports/requests for help often lack. However, crucial information is also obviously obfuscated. You're clearly aware of many issues that could be related to the problem, and no doubt you've taken great care in providing relevant info while covering your privacy issues. However, given that you don't know what's causing the problem, it's possible that the key piece of missing info was destroyed in the obfuscation process. (Granted, I didn't see any likely candidates while reviewing your initial post again.) So, I have no solution to offer, no theory to explain the phenomenon you're seeing. But I will offer this advice: When people are offering to help, and ask a simple question, it may well be worth the time for you, a single individual, to repeat the relevant test and provide the new output rather than expecting N-1 list subscribers to go back and review your initial post.
What you've described can be explained exactly by having the wrong tokens (which is what I'm betting on still, even having reviewed your initial post). Double checking never hurts. [Almost relevant aside: This afternoon, my reasonably intelligent neighbor asked me to come over and see if I could coax her new monitor into working again. She'd tried everything. Turns out both ends of the power cable have to be firmly plugged in. When you finally crack this problem, you're likely to find it's something like that.] I'll stop preaching and ask the only other thing that comes to mind. Is your home volume on a different server from the volumes above it? If so, how far off are the clocks on the parent server, your home volume server, and your client? (Probably not relevant, maybe not even a possible source of the problem, but if it's not wrong tokens, you gotta look elsewhere...) -- [EMAIL PROTECTED]
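The wrong-tokens theory is cheap to test. A minimal check sequence, again print-only so it reads as documentation (the cell name is a stand-in, matching the obfuscated YYY.edu above):

```shell
#!/bin/sh
# Print-only helper: echoes each command instead of running it.
run() { printf '+ %s\n' "$*"; }

CELL=yyy.edu                 # stand-in for the real cell name

# What Kerberos tickets does the client currently hold?
run klist

# What AFS tokens were derived from them, and for which cell?
run tokens

# Re-derive a token for the cell, then retry the failing cd.
run aklog -c "$CELL"
```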
Re: [OpenAFS] AFS and Apache Virtual Directory
If you are running with SELinux enabled, try disabling SELinux and see what happens. I know Apache on Fedora Core 4 out of the box won't follow symlinks in AFS. You can make it do that, but you have to convince it you really want to. Disabling it is just for testing to see if that's the issue. -- [EMAIL PROTECTED] Suman Kansakar wrote: Thank you all for your suggestions. After fiddling around with it a little bit and checking for all the suggested fixes, I realize that my problem is more of my Apache 2.0 installation problem than AFS authentication at this point. I do have the FollowSymLinks option set in my configuration, however Apache refuses to do that. I even created a symlink to a directory on the same filesystem and Apache still gives me a permission denied error. I'll keep working on it this weekend and see what gives. Happy Thanksgiving! Suman On 11/22/05, Russ Allbery [EMAIL PROTECTED] wrote: zeroguy [EMAIL PROTECTED] wrote: Frank Burkhardt [EMAIL PROTECTED] wrote: Currently I use a script refreshing the token for apache's UID on a regular basis ( su - www-data make_token_from_keytab apache.keytab ) but I'm going to put Apache in a PAG ASAP. We just altered our apache init script to run in a PAG (start the script with #!/usr/bin/pagsh), and run another script in the background that just refreshes the credentials every so often (using a keytab). I.e. sleep 21600, kinit, aklog, repeat. http://www.eyrie.org/~eagle/software/kstart/ may be of interest. -- Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/
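The sleep/kinit/aklog loop described above (the one Russ's kstart tools replace) can be sketched in a few lines. The keytab path and principal are illustrative assumptions; one iteration is shown print-only via the `run` helper, since a real deployment would loop forever inside a PAG started by pagsh:

```shell
#!/bin/sh
# Print-only helper: echoes each command instead of running it.
run() { printf '+ %s\n' "$*"; }

# Illustrative assumptions -- adjust for your cell and realm.
KEYTAB=/etc/apache.keytab
PRINCIPAL=apache/www.example.com

refresh_tokens() {
    run kinit -k -t "$KEYTAB" "$PRINCIPAL"   # get a Kerberos TGT
    run aklog                                # convert it to an AFS token
}

# In production this runs inside Apache's PAG (#!/usr/bin/pagsh) as:
#   while :; do refresh_tokens; sleep 21600; done
refresh_tokens
```

k5start from the kstart package does this same job more robustly (keytab handling, re-running on failure), which is presumably why Russ pointed at it.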
Re: [OpenAFS] Pro's Con's of /usr/local on AFS....
Norman, There's nothing wrong with a short name symlink to your cell's fully qualified path per se, but it's incredibly difficult to keep it from creeping into places it shouldn't be: hard-coded paths in package builds, scripts, documentation and services that might eventually be available to -- but broken for -- people coming in from foreign cells. If you don't believe me, and your cell has been up for any length of time (like since before breakfast), try deleting that link for a day and see how much stuff quits working. That's what you're going to run into when you visit another site and say to your new friends there, "Let me show you how we do thus-n-such in our cell," authenticate to your cell, cd there, and try to run something. Been there, got the t-shirt. It doesn't make for a good demo. -- [EMAIL PROTECTED] (who first posted the line below that Jim Rees replied to, not that I'm proud of it. Ugh, on the contrary.) Norman P. B. Joseph wrote: Excuse me for being dense, (and I was in one of those Transarc training classes back in the day), but what's the harm in that symbolic link? -norm On Wed, 2004-10-27 at 11:34, Mitch Collinsworth wrote: On Wed, 27 Oct 2004, Jim Rees wrote: '/afs/isis' is a symbolic link, leading to a mount point for volume 'root.cell'. So you broke one of the most important features of afs, the global name space. Why? Huh? Transarc trainers specifically taught to do exactly this 15 years ago. The recommendation was there to still use the FQDN in all symlinks, scripts, etc. But it was recommended to have it for typing ease. Remember /afs/tr ? -Mitch
Re: [OpenAFS] 'split' a tree of directory into volumes?
Michael, You may have to wrap a script around it to handle the details, but you may end up using the 'up' utility to get the data into the volumes. 'up' is a recursive 'cp' that knows about and copies ACLs as well as files. Hardly anybody knows about it though, and fewer use it. It also knows about mount points (at least the recent OpenAFS version does on UNIX; the Windows version probably doesn't, and the Transarc AFS 'up' doesn't), so if you happen to have volumes mounted in your tree they don't get copied, just the mount points. It shouldn't be hard to come up with a script that creates a volume based on the directory name, mv the old 20020101 dir, for example, to .20020101, mount the new volume at 20020101, use 'up' to copy the data w/ ACLs, and if everything looks good, remove the .20020101 directory and its contents -- watching out for mount points of course, else you might clobber more than you mean to. 'up' can only copy files that you have read access to, so if ACLs are restrictive you might have to become a system:administrator, but that doesn't sound like your problem. On the other hand, if neither ACLs nor mount points in your subtree are issues, then you can just use 'mv' instead of 'up'. You'll still have to deal with creating and mounting appropriately named volumes in either case. Hope this helps a little. If you create something beautiful, share it! I'm sure this problem will come up again, if not for you then for somebody else. Good Luck, -- [EMAIL PROTECTED] Michael Loftis wrote: OK I have a large number of directories, in this case a dated archive, it's all been in one volume for quite some time, but now that volume is unmanageably large. Going forward I can easily vos create and fs mkmount, but for the existing stuff I was wondering if anyone wrote a script to take a tree of directories and split them into volumes?
If not I'll hack something together, just seems like someone already did this. Currently it looks something like this:

volume    dirs
archvol --+ 20020101
          + 20020102
          + 20020103
          ...

I want to split all those dirs into their own volumes then fs mkmount them back into their original places so I end up with a number of much smaller volumes. Thanks in advance all!!! -- Undocumented Features quote of the moment... It's not the one bullet with your name on it that you have to worry about; it's the twenty thousand-odd rounds labeled `occupant.' --Murphy's Laws of Combat
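The create/mv/mkmount/up recipe Todd outlines can be sketched as a loop. This is print-only (the `run` helper echoes each command) with hypothetical server, partition, and volume-naming conventions; anything like this should be reviewed line by line before being pointed at real data, especially the final rm:

```shell
#!/bin/sh
# Print-only helper: echoes each command instead of running it.
run() { printf '+ %s\n' "$*"; }

SERVER=afs1.example.com      # hypothetical file server
PART=/vicepa                 # hypothetical vice partition

# Carve each dated directory into its own volume,
# following the steps in the message above.
for dir in 20020101 20020102 20020103; do
    vol="arch.$dir"                     # assumed naming convention
    run vos create "$SERVER" "$PART" "$vol"
    run mv "$dir" ".$dir"               # stash the original out of the way
    run fs mkmount "$dir" "$vol"        # mount the new volume in its place
    run up ".$dir" "$dir"               # copy files *and* ACLs
    # Only after verifying the copy (and checking for mount points!):
    run rm -rf ".$dir"
done
```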