The way AFS has extended data structures in the past is by introducing a
new RPC with a new data structure. It would be reasonable for the new RPC
to be a multi-roundtrip interface: the reply would include the total number
of servers available, and the client would ask for up to NMAXNSERVERS
entries starting at a particular index value. Old clients would continue to
use the existing RPC; new clients would attempt the new RPC and fall back
to the old RPC if it were not available.
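
To make that concrete, something along the lines of the sketch below
(the struct xvldbentry and the RPC VL_GetEntryByNameX are invented names
for illustration; no such RPC exists today):

    /* Illustrative only: a paged variant of the VLDB lookup. */
    struct xvldbentry {
        char      name[VL_MAXNAMELEN];
        afs_int32 totalServers;                  /* replicas known to the vldb */
        afs_int32 startIndex;                    /* index of first slot below  */
        afs_int32 nServers;                      /* slots filled in this reply */
        afs_int32 serverNumber[NMAXNSERVERS];    /* window of <= NMAXNSERVERS  */
        afs_int32 serverPartition[NMAXNSERVERS];
        afs_int32 serverFlags[NMAXNSERVERS];
        /* remaining nvldbentry fields unchanged */
    };

    /* Client side, assuming the usual rxgen stubs: try the new RPC and
     * fall back to the old one if the server does not implement it. */
    code = VL_GetEntryByNameX(conn, volname, 0 /* startIndex */, &xentry);
    if (code == RXGEN_OPCODE) {
        code = VL_GetEntryByNameN(conn, volname, &nentry);   /* old path */
    } else if (code == 0) {
        afs_int32 index;
        for (index = NMAXNSERVERS; index < xentry.totalServers;
             index += NMAXNSERVERS) {
            code = VL_GetEntryByNameX(conn, volname, index, &xentry);
            /* append xentry.nServers more sites to the local list */
        }
    }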
Jeffrey Altman
Matthew Andrews wrote:
> Do you really need to send a flag to the client? How about this: if the
> client gets a response with 13 entries, it just retries the call expecting
> additional servers. If the client gets back the same list as before,
> then it knows there are only 13 servers. Repeat until you get a list
> with fewer than 13 entries, or the same list twice. It wastes a little
> bandwidth, but only in the case where a "many replicas"-aware client is
> getting info about a volume that actually has exactly 13 replicas (or a
> multiple thereof).
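>
> Roughly, the client loop I have in mind would look something like this
> (VL_GetMoreServers() and the batch handling are just placeholders for
> whatever the real call ends up being):
>
>     /* Very rough sketch.  VL_GetMoreServers() stands in for whatever
>      * call ends up handing back the next batch of replica sites (it
>      * does not exist today); struct nvldbentry is reused as the
>      * per-batch container. */
>     struct nvldbentry prev, cur;
>     afs_int32 code;
>     int done = 0;
>
>     code = VL_GetMoreServers(conn, volname, &cur);       /* first 13 */
>     while (code == 0 && !done) {
>         /* ... append cur.nServers sites to the cm's local list ... */
>         if (cur.nServers < NMAXNSERVERS) {
>             done = 1;                   /* short batch: nothing left */
>         } else {
>             prev = cur;
>             code = VL_GetMoreServers(conn, volname, &cur);
>             if (code == 0 &&
>                 memcmp(prev.serverNumber, cur.serverNumber,
>                        sizeof(prev.serverNumber)) == 0)
>                 done = 1;               /* same list twice: done */
>         }
>     }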
>
> -Matt
>
> Harald Barth wrote:
>
>> Looks to me that the number of volume copies is limited to 13 by
>>
>> struct nvldbentry {
>>     ...
>>     afs_int32 serverNumber[NMAXNSERVERS];    /* Server # for each server
>>                                                 that holds volume */
>>     afs_int32 serverPartition[NMAXNSERVERS]; /* Server Partition number */
>>     afs_int32 serverFlags[NMAXNSERVERS];     /* Server flags */
>>     ...
>> }
>>
>> and the same in struct uvldbentry. The question is what to do if I
>> want to extend this to more. A colleague of mine has a vague memory
>> that back when IBM still gave AFS courses, some IBM person said
>> that it was possible to have more than 13 copies anyway, but as I read
>> the source this must be either a misunderstanding or some other
>> source ;-) Or do you have more info than I do?
>>
>> Anyway, if a company is big enough, they want to have an RO copy in
>> every "site" on the continent or the world or whatever, and if they
>> then have more than 13 "sites", this is the situation they are in.
>>
>> I have some speculations here on how to extend this, and I'd be happy
>> if you would help me with that.
>>
>> The first idea is to make a bigger array, but that is not very
>> compatible as the array sits "in the middle" of the struct. It also
>> wastes a lot of space if you have only, say, 2 servers. So I think it
>> would be better to be able to flag to the cache manager that there are
>> more servers available, so that the client in the next call will ask
>> for servers 14-26 and so on. This has to be flagged somehow, either by
>> a special last entry or a flag field.
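>>
>> Just to make the idea concrete (the flag bit and the continuation call
>> below are invented names, nothing like this exists today):
>>
>>     /* Hypothetical flag bit in serverFlags[]: "there are more sites". */
>>     #define VLSF_MORESERVERS 0x8000        /* value picked arbitrarily */
>>
>>     /* Cache manager side, after a normal VL_GetEntryByNameN(): */
>>     if (entry.nServers == NMAXNSERVERS &&
>>         (entry.serverFlags[NMAXNSERVERS - 1] & VLSF_MORESERVERS)) {
>>         /* ask the vlserver for sites 14-26, 27-39, ... through some
>>          * new continuation RPC (invented name, does not exist): */
>>         code = VL_GetEntryByNameMore(conn, volname, NMAXNSERVERS, &entry2);
>>     }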
>>
>> Backwards compat: As always, there is the question of "how backwards
>> compatible is this?" People always astonish me by using really old
>> clients. I astonished myself one month ago by booting a Sun4/280
>> (SunOS4) and the original AFS "just worked".
>> My idea here is to keep the first struct {n,u}vldbentry sent "as is",
>> so that old clients just work as usual with the first 13 copies. They
>> don't need to know about copies 14, 15... This can even be enhanced by
>> presenting the "right" 13 hosts to a client first, for example by
>> choosing network-wise near hosts depending on which cache manager is
>> asking.
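>>
>> As a sketch of what presenting the "right" 13 could mean on the
>> vlserver side (everything here is invented, just to show the idea):
>>
>>     /* Hypothetical vlserver-side selection: sites[] holds all replica
>>      * sites for the volume, already ordered by some nearness metric to
>>      * the calling cache manager (nearest first).  An old client simply
>>      * gets the first NMAXNSERVERS of them in the unchanged nvldbentry. */
>>     n = nsites > NMAXNSERVERS ? NMAXNSERVERS : nsites;
>>     for (i = 0; i < n; i++) {
>>         entry->serverNumber[i]    = sites[i].server;
>>         entry->serverPartition[i] = sites[i].partition;
>>         entry->serverFlags[i]     = sites[i].flags;
>>     }
>>     entry->nServers = n;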
>>
>> Is this a feasible idea?
>>
>> Is this feature wanted for OpenAFS?
>>
>> Would such a patch/enhancement be incorporated into OpenAFS?
>>
>> I see the following steps:
>>
>> * Make the vldb handle more than 13 hosts in the database.
>> * Make the vldb present a choice of 13 hosts to the cm.
>> * Make the cm ask for more than 13 hosts from the vldb.
>>
>> How hard do you think these steps will be to implement?
>>
>> Ehm, if anyone says "it has already been done" and tells me how, I
>> would not be unhappy at all.
>>
>> Harald.
>>
