> Thanks a lot Frank. Pretty much all my doubts are cleared. I will let you
> know if I get any more :).
> 
> 
>  >> 9) We need to add support (at least for leases) wherein we pass the
>  >> lockowner/clientowner to the back-end server. Currently we pass
>  >> lockowner/clientowner pointers to the FSAL while acquiring
>  >> locks/delegations respectively.
>  >> If the nfs-ganesha server restarts, we will have different
>  >> lockowner/clientowner structures... how can we reclaim lock state then?
>  >> Do we need to wait till the back-end (gluster) server flushes all the
>  >> existing lock state and then re-acquire using fresh lock/client owners?
>  >
>  > Hmm, let me think on this one. I think there is a larger question here.
> 
> 
> Do you think we should take this up in upstream?

Sure, wouldn't hurt.

Frank

> On 03/23/2016 11:56 PM, Frank Filz wrote:
> >> 1) Why do we need this multi-fd support? (IIUC, to be able to maintain
> >> state of OPENs separately... correct?)
> >
> > It allows the FSAL to keep track of "stuff" relating to a given client
> > state (NFS v4 open stateid, lock stateid, v3 share reservation, or v3
> > lock pseudo stateid).
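> >
> > For a file-based FSAL, that "stuff" is typically a file descriptor plus
> > the mode it was opened with, something like this (a rough sketch, not
> > the actual Ganesha code; the struct and field names are illustrative):
> >
> >     struct my_fd {
> >             fsal_openflags_t openflags; /* FSAL_O_READ etc., the mode
> >                                            the fd was opened with */
> >             int fd;                     /* descriptor backing this
> >                                            client state */
> >     };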
> >
> >> 2) Where are we allocating my_fd? From that code it is state+1... but
> >> where exactly is it allocated?
> >
> > The FSAL is responsible for allocating sizeof(state_t)+sizeof(my_fd).
> > See the alloc_state() method.
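> >
> > A minimal sketch of such an alloc_state() (assuming Ganesha's
> > gsh_calloc() allocator and the fsal_api.h types; the exact method
> > prototype may differ):
> >
> >     struct state_t *my_alloc_state(struct fsal_export *exp_hdl,
> >                                    enum state_type state_type,
> >                                    struct state_t *related_state)
> >     {
> >             struct state_t *state;
> >             struct my_fd *my_fd;
> >
> >             /* One allocation covers the generic state_t plus our fd */
> >             state = gsh_calloc(1, sizeof(struct state_t) +
> >                                   sizeof(struct my_fd));
> >
> >             /* my_fd sits right after the state_t, hence "state + 1" */
> >             my_fd = (struct my_fd *)(state + 1);
> >             my_fd->fd = -1; /* nothing opened yet */
> >
> >             return state;
> >     }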
> >
> >> 3) For NFSv3, we use the common fd like before... but for NFSv4 we
> >> allocate a unique fd for each state, which is unique to each open?
> >> We seem to have an fd allocated only if there is state associated with
> >> the open, i.e. NFSv4... but NFSv3 would then use only the global fd.
> >> What about NLM?
> >
> > V3 I/O currently will just use the global fd. I have a thought that we
> > could look for a pseudo stateid and use that instead (share reservation
> > or lock stateid).
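> >
> > At I/O time the fd selection then boils down to something like this
> > (illustrative only; my_fsal_obj and choose_fd are made-up names, and
> > the real helper is more involved):
> >
> >     struct my_fsal_obj {
> >             struct fsal_obj_handle obj_handle;
> >             struct my_fd global_fd; /* shared fd for stateless v3 I/O */
> >     };
> >
> >     static int choose_fd(struct my_fsal_obj *obj, struct state_t *state)
> >     {
> >             if (state != NULL) {
> >                     /* v4 open/lock stateid (or a v3/NLM pseudo
> >                        stateid): use the per-state fd */
> >                     struct my_fd *my_fd = (struct my_fd *)(state + 1);
> >
> >                     return my_fd->fd;
> >             }
> >             /* stateless v3 I/O: fall back to the global fd */
> >             return obj->global_fd.fd;
> >     }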
> >
> >> 4) When do we close this fd? Only in case of CLOSE() op / during errors?
> >
> > The FSAL actually could open and close file descriptors whenever it
> > wants. The "stuff" the FSAL associates with a state_t is released when
> > the state_t is disposed of (close, for example).
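> >
> > So the teardown side is usually just closing the embedded fd when the
> > state_t is freed, roughly like this (a sketch; check fsal_api.h for the
> > real free_state prototype, error handling omitted):
> >
> >     void my_free_state(struct fsal_export *exp_hdl,
> >                        struct state_t *state)
> >     {
> >             struct my_fd *my_fd = (struct my_fd *)(state + 1);
> >
> >             if (my_fd->fd >= 0)
> >                     close(my_fd->fd); /* release per-state descriptor */
> >
> >             gsh_free(state);
> >     }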
> >
> >> 5) If a single NFSv4 client does multiple opens, there shall be
> >> multiple fds open, one for each of those opens... correct? Each of
> >> these fds will have a corresponding open owner?
> >
> > Yes (assuming different open owners and therefore not an open upgrade).
> >
> >> What if there is a single NFS client but with different mounts (one via
> >> v3 and the other via v4) trying to access the same file? If the request
> >> comes via v4, it shall use the unique fd, and for v3 access it will use
> >> the global fd.
> >
> > The separate v3 and v4 mounts will result in separate state_t on the
> > server (or use of the global fd for v3 I/O).
> >
> >> 6) What about lock state... each lock state is associated with an
> >> owner? For a single open fd, if there are multiple locks taken by the
> >> client, we shall take those locks on the same fd with the lockowner as
> >> the one sent by the client.
> >>
> >> Typically all those lock owners should be the same (client-string)?
> >> Otherwise wouldn't they conflict with each other?
> >
> > Yes, the stateid associates a lock owner with a file, and there could
> > be multiple locks on that stateid.
> >
> > Note that the Linux client shares an open stateid for the whole system,
> > but there would be a lock stateid per process (POSIX locks). Note that
> > Open File Description locks work over NFS and therefore each open file
> > description with locks on it would have its own lock stateid. The
> > multilock tool in Ganesha will utilize OFD locks if you want to see
> > this in action.
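> >
> > (For reference, OFD locks are taken with fcntl() and F_OFD_SETLK; a
> > minimal Linux example, independent of Ganesha:)
> >
> >     #define _GNU_SOURCE /* F_OFD_SETLK is a Linux extension */
> >     #include <fcntl.h>
> >     #include <unistd.h>
> >
> >     int take_ofd_write_lock(int fd)
> >     {
> >             struct flock fl = {
> >                     .l_type = F_WRLCK,
> >                     .l_whence = SEEK_SET,
> >                     .l_start = 0,
> >                     .l_len = 0, /* 0 means "to end of file" */
> >                     .l_pid = 0, /* must be 0 for OFD locks */
> >             };
> >
> >             /* The lock is owned by this open file description, not by
> >                the process, so each open() gets its own lock owner. */
> >             return fcntl(fd, F_OFD_SETLK, &fl);
> >     }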
> >
> >> 7) Apart from passing the new state field, how different are the
> >> fops2*() compared to the fops() (like open2 vs open, etc.)?
> >
> > The biggest difference is that open2 supports open/create/setattr;
> > basically, in a single FSAL call it can do everything an NFS v4 OPEN op
> > can do.
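> >
> > The method prototype looks approximately like this (paraphrased from
> > memory; fsal_api.h is authoritative and the exact parameters may
> > differ):
> >
> >     fsal_status_t (*open2)(struct fsal_obj_handle *obj_hdl,
> >                            struct state_t *state,
> >                            fsal_openflags_t openflags,
> >                            enum fsal_create_mode createmode,
> >                            const char *name,          /* create by name */
> >                            struct attrlist *attrs_in, /* setattr part */
> >                            fsal_verifier_t verifier,  /* excl. create */
> >                            struct fsal_obj_handle **new_obj,
> >                            struct attrlist *attrs_out,
> >                            bool *caller_perm_check);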
> >
> >> 8) On a different note, is the NFS client-id unique across servers? Can
> >> they ever clash?
> >
> > The stateid is unique between server and client. In a cluster with IP
> > failover, we want the stateid to be unique across the cluster, and thus
> > the ability to get the nodeid into the epoch (a Ganesha stateid
> > consists of epoch:clientid counter:stateid counter).
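> >
> > Conceptually, the 12-byte "other" field of a v4 stateid is packed like
> > this in Ganesha (illustrative struct only; see the stateid code for the
> > actual encoding):
> >
> >     struct stateid_other {
> >             uint32_t epoch;            /* server boot instance; a
> >                                           cluster nodeid can be folded
> >                                           in here */
> >             uint32_t clientid_counter; /* per-epoch client counter */
> >             uint32_t stateid_counter;  /* per-client state counter */
> >     };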
> >
> >> 9) We need to add support (at least for leases) wherein we pass the
> >> lockowner/clientowner to the back-end server. Currently we pass
> >> lockowner/clientowner pointers to the FSAL while acquiring
> >> locks/delegations respectively.
> >> If the nfs-ganesha server restarts, we will have different
> >> lockowner/clientowner structures... how can we reclaim lock state then?
> >> Do we need to wait till the back-end (gluster) server flushes all the
> >> existing lock state and then re-acquire using fresh lock/client owners?
> >
> > Hmm, let me think on this one. I think there is a larger question here.
> >
> > Frank
> >

