also, there's no reason to think that the current set of atomic operations is set in stone as the one true set. we're free to add, extend or modify these. the atomic instructions are not widely used in the code yet.
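for example, if a fetch-and-store (xchg) turned out to be useful for queue manipulation, it could be layered on a compare-and-swap. a minimal sketch, assuming a cmpswap-style primitive that returns non-zero on success (the names and signature below are illustrative, not the kernel's actual interface):

int	cmpswap(long *addr, long old, long new);	/* assumed primitive, non-zero on success */

/* hypothetical: atomically store new into *addr and return the old value */
long
fetchandstore(long *addr, long new)
{
	long old;

	do{
		old = *addr;
	}while(!cmpswap(addr, old, new));
	return old;
}

this is only a sketch of the shape of such an operation; a real one would be a few instructions of assembly per architecture, like the existing atomics.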
if you do not have some way of logging into plan 9, i can provide you with a login. contact me off list for that.

- erik

On Friday, March 7, 2014 4:10:48 PM UTC-5, Jessica Yu wrote:
>
> Thank you erik!
>
> One characteristic of MCS locks is that their implementations are "lock-free," using atomic operations to manipulate the queue. After a cursory look through the src, I see that test-and-set is used in the kernel implementation of Locks, and variants of compare-and-swap are used in a few other places. What other atomic op interfaces are available in the kernel? Is there an xchg/fetch-and-store sort of operation I can use to atomically add links to the queue? Otherwise I think I could probably use cmpswap to achieve the same thing.
>
> Pointers to places in the source code (regarding atomic op usage) I should look at would be tremendously helpful! Two-letter directory names, albeit short and sweet, are sometimes confusing :-)
>
> Jessica
>
> On Wednesday, March 5, 2014 6:00:12 PM UTC-5, quanstro wrote:
>>
>> first, thanks for your interest. i'm encouraged by the good questions.
>>
>> yes, i think this description of qlocks is right. qlocks, and their friend the rqlock, interact with the scheduler through ready, and thus cannot be used to implement the scheduler.
>>
>> your description of mcs locks is correct.
>>
>> locks (spinlocks) do contain a pointer to a Mach, the per-processor structure. this is used to ensure consistency and to check for deadlock: a process hoping to acquire the lock may check l->m.
>>
>> the ticket lock implementation (http://sources.9atom.org/sys/src/nix/port/tiklock.c) for the 64-bit kernel makes more extensive use of l->m to check for common lock errors, though it uses a slightly different lock structure.
>>
>> the reason to keep them processor-local is that (a) the assumption is useful, and (b) there is no (plan 9) mechanism for passing a pure spin lock between processors.
>>
>> - erik
>>
>> On Wednesday, March 5, 2014 3:20:17 AM UTC-5, Jessica Yu wrote:
>>>
>>> Hello everyone!
>>>
>>> Happy to hear that Plan 9 was accepted into GSoC this year. I had forgotten to introduce myself in my first email, so I'll do so here.
>>>
>>> I'm currently an undergraduate at UC Berkeley studying Computer Science, and one of my core interests is the design and implementation of operating systems. Coming primarily from a Linux background, I find the ideas in Plan 9 refreshing and in many ways simpler and more elegant than their *nix counterparts.
>>>
>>> I've been looking at a handful of gsoc ideas, but at the moment I'd like to clarify some points regarding the MCS locks project.
>>>
>>> It appears that QLocks in the plan 9 kernel are themselves a kind of queueing lock, but are *not* to be confused with MCS locks (which also happen to use queues). Although the two aren't really related, I was initially confused when I looked at the source, so I'll attempt to distinguish the two lock types below just to keep them straight in my head:
>>>
>>> QLocks:
>>> -- are associated with a single lock value/flag
>>> -- are blocking (cause a process to block if the lock is unavailable)
>>> -- queue would be made of blocked processes
>>>
>>> MCS locks:
>>> -- are spinlocks
>>> -- locally spin on a value per node in the queue
>>> -- queue is made of nodes, each with its own local lock flag and a next pointer to the next node/process waiting for the lock
>>>
>>> Sound about right? Way off? Did I distinguish the two correctly? Anything missing I should take note of?
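In C, the node-per-waiter scheme described above might look roughly like the sketch below. The atomic helpers xchgptr (fetch-and-store on a pointer) and cmpswapptr (compare-and-swap on a pointer) are made-up names standing in for whatever pointer-sized atomics a port would provide; this is the textbook MCS shape, not Plan 9 kernel code, and memory-ordering details are omitted.

/* assumed pointer-sized atomics; names are hypothetical */
void*	xchgptr(void **addr, void *new);		/* fetch-and-store: returns the old value */
int	cmpswapptr(void **addr, void *old, void *new);	/* non-zero if the swap happened */

typedef struct MCSnode MCSnode;
typedef struct MCSlock MCSlock;

struct MCSnode {
	MCSnode	*next;		/* next waiter in the queue, nil if none */
	int	locked;		/* each waiter spins only on its own flag */
};

struct MCSlock {
	MCSnode	*tail;		/* most recent waiter, nil when the lock is free */
};

void
mcslock(MCSlock *l, MCSnode *n)
{
	MCSnode *prev;

	n->next = nil;
	n->locked = 1;
	prev = xchgptr((void**)&l->tail, n);	/* atomically append ourselves */
	if(prev == nil)
		return;				/* queue was empty: we own the lock */
	prev->next = n;				/* link in behind the previous waiter */
	while(n->locked)
		;				/* local spin on our own flag */
}

void
mcsunlock(MCSlock *l, MCSnode *n)
{
	if(n->next == nil){
		/* no visible successor: if we are still the tail, the lock becomes free */
		if(cmpswapptr((void**)&l->tail, n, nil))
			return;
		/* a successor is between its xchg and setting prev->next; wait for it */
		while(n->next == nil)
			;
	}
	n->next->locked = 0;			/* hand the lock directly to the next waiter */
}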
>>>
>>> The project blurb also notes that this requires "exploiting the fact that plan 9 locks must be acquired and released on the same processor." I notice that the Lock struct contains a field that is a pointer to a Mach, but I'm not sure how this field gets used in the existing spin lock implementations in the kernel (I noticed that it gets set in ilock, but not in lock, and set to nil in their respective unlock functions). How does this requirement get enforced, exactly? (And also, this might be a dumb question, but why must they be acquired/released on the same processor?)
>>>
>>> Many thanks!
>>>
>>> Jessica
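To make the Mach-pointer idea concrete, the sketch below shows one way an owner field can be used for error checking: record the acquiring processor's Mach on acquire, and complain if the lock is re-taken by its own holder or released elsewhere. It assumes the kernel's per-processor Mach pointer m, the Lock fields key and m discussed in the thread, and a test-and-set primitive tas; it illustrates the idea erik describes, and is not the actual code in the kernel (where, as noted above, only ilock records the owner).

/* assumed to exist in the surrounding kernel: the per-processor Mach
 * pointer m, a test-and-set primitive tas returning the previous value,
 * the existing Lock struct with key and m fields, and print/panic. */

void
examplelock(Lock *l)	/* hypothetical variant of lock(), for illustration */
{
	while(tas(&l->key) != 0){
		/* because a spin lock is acquired and released on one
		 * processor, finding our own Mach in l->m while spinning
		 * means we would spin forever. */
		if(l->m == m)
			panic("examplelock: lock loop, already held on this processor");
	}
	l->m = m;		/* remember which processor took the lock */
}

void
exampleunlock(Lock *l)
{
	/* the same-processor rule is what makes this check meaningful:
	 * an unlock from a different processor is almost certainly a bug. */
	if(l->m != nil && l->m != m)
		print("exampleunlock: released on a different processor\n");
	l->m = nil;
	l->key = 0;
}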