On Thu, Dec 4, 2008 at 5:38 PM, David Teigland <[EMAIL PROTECTED]> wrote:
> On Thu, Dec 04, 2008 at 04:59:23PM -0500, david m. richter wrote:
>> ah, so just to make sure i'm with you here: (1) gfs_controld is
>> generating this "id"-which-is-the-mountgroup-id, and (2) gfs_kernel
>> will no longer receive this in the hostdata string, so (3) i can just
>> rip out my in-kernel hostdata-parsing gunk and instead send in the
>> mountgroup id on my own (i have my own up/downcall channel)?  if i've
>> got it right, then everything's a cinch and i'll shut up :)
>
> Yep.  Generally, the best way to uniquely identify and refer to a gfs
> filesystem is using the fsname string (specified during mkfs with -t and
> saved in the superblock).  But sometimes it's just a lot easier to have a
> numerical identifier instead.  I expect this is why you're using the id,
> and it's why we were using it for communicating about plocks.

yes, the numerical id gets used a lot in my pNFS stuff, where the
kernel needs to make upcalls, some of which then get relayed over
multicast -- so, I've just been stashing that in the superblock.
thanks for clearing up my questions.


> In cluster1 and cluster2 the cluster infrastructure dynamically selected a
> unique id when needed, and it never worked great.  In cluster3 the id is
> just a crc of the fsname string.
>
> Now that I think about this a bit more, there may be a reason to keep the
> id in the string.  There was some interest on linux-kernel about better
> using the statfs fsid field, and this id is what gfs should be putting
> there.

interesting; that'd be cool.  i've been meaning to look at statfs more
often in my stuff anyway.
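
for concreteness, here's a minimal userspace sketch of both ideas --
deriving the id as a crc of the fsname (the cluster3 scheme) and
reading the statfs fsid field.  the crc32() here is zlib's and the
fsname and mount point are made-up values, so the exact crc variant
gfs_controld uses may differ:

    /* build with: cc -o fsid fsid.c -lz  (Linux/glibc assumed) */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/vfs.h>    /* statfs(2) */
    #include <zlib.h>       /* crc32() */

    int main(void)
    {
            /* fsname as given to mkfs with -t; illustrative value */
            const char *fsname = "mycluster:myfs";
            uint32_t id = crc32(0L, (const Bytef *)fsname, strlen(fsname));
            printf("fsname %s -> id %08x\n", fsname, id);

            /* hypothetical mount point; f_fsid is where gfs could
             * publish the same id */
            struct statfs buf;
            if (statfs("/mnt/gfs", &buf) == 0)
                    printf("f_fsid: %08x %08x\n",
                           (unsigned)buf.f_fsid.__val[0],
                           (unsigned)buf.f_fsid.__val[1]);
            return 0;
    }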


>> say, one tangential question (i won't be offended if you skip it -
>> heh): is there a particular reason that you folks went with the uevent
>> mechanism for doing upcalls?  i'm just curious, given the
>> seeming complexity and possible overhead of using the whole layered
>> netlink apparatus vs. something like Trond Myklebust's rpc_pipefs
>> (don't let the "rpc" fool you; it's a barebones, dead-simple pipe).
>> -- and no, i'm not selling anything :)  my boss was asking for a list
>> of differences between rpc_pipefs and uevents and the best i could
>> come up with is that the former is bidirectional.  Trond mentioned the
>> netlink overhead and i wondered if that was actually a significant
>> factor or just lost in the noise in most cases.
>
> The uevents looked pretty simple when I was initially designing how the
> kernel/user interactions would work, and they fit well with sysfs files
> which I was using too.  I don't think the overhead of using uevents is too
> bad.  Sysfs files and uevents definitely don't work great if you need any
> kind of sophisticated bi-directional interface.

great, thanks -- always good to get folks' anecdotal advice and keep
it in my toolbag for later.
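
for reference, the kernel side of a uevent upcall is roughly this
small, which is probably part of the appeal.  a sketch only, assuming
an already-registered kobject; the MOUNTGROUP variable name is
invented for illustration:

    #include <linux/kernel.h>
    #include <linux/kobject.h>
    #include <linux/string.h>

    static int notify_userspace(struct kobject *kobj, u32 mg_id)
    {
            char id_str[32];
            char *envp[] = { id_str, NULL };

            snprintf(id_str, sizeof(id_str), "MOUNTGROUP=%08x", mg_id);

            /* userspace sees a KOBJ_CHANGE uevent carrying the extra
             * environment variable, delivered over the kobject-uevent
             * netlink socket (or picked up via udev rules) */
            return kobject_uevent_env(kobj, KOBJ_CHANGE, envp);
    }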

cheers,

  d.

> Dave
