
This new version addresses all my comments, and is really better at explaining
how ReDiR works.

Thanks.

On 02/23/2013 12:59 AM, Jouni Mäenpää wrote:
> Hi,
> 
> The comments below have now been addressed in version -08 of the draft. The
> changes in the new version include:
> 
> - The temporary local caching mechanism for RedirServiceProvider records
>   fetched during a service lookup operation that is described below was
>   added to the draft
> - The text was clarified further
> - New examples were added to clarify how ReDiR tree nodes are numbered and
>   how intervals are assigned to tree nodes
> 
> Regards, Jouni
> 
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of Jouni Mäenpää
> Sent: 18 February 2013 21:25
> To: Joscha Schneider
> Cc: [email protected]; [email protected]
> Subject: Re: [P2PSIP] WGLC for draft-ietf-p2psip-service-discovery-06
> 
> Hi Joscha,
> 
> Thanks for the comments! Answering the first comment inline, need some more
> time to check the second comment.
> 
> Regards, Jouni
> 
> -----Original Message-----
> From: Joscha Schneider [mailto:[email protected]]
> Sent: 18 February 2013 13:21
> To: Jouni Mäenpää
> Cc: Marc Petit-Huguenin; [email protected]; [email protected]
> Subject: Re: [P2PSIP] WGLC for draft-ietf-p2psip-service-discovery-06
> 
> two comments inline
> 
> regards joscha
> 
> 
> On 16.02.2013 18:55, Jouni Mäenpää wrote:
>> Hi Marc and Joscha,
>> 
>> Thanks for the comments! Answers inline.
>> 
>> Regards, Jouni
>> 
>> -----Original Message-----
>> From: [email protected] [mailto:[email protected]] On Behalf Of Joscha Schneider
>> Sent: 15 February 2013 12:30
>> To: Marc Petit-Huguenin
>> Cc: [email protected]; [email protected]
>> Subject: Re: [P2PSIP] WGLC for draft-ietf-p2psip-service-discovery-06
>> 
>> I can confirm that the draft might need some improvements to make the
>> implementation easier. I did a basic implementation, but I'm not quite
>> sure that I handled everything correctly.
>> 
>> [Jouni]: I'll try to improve the text in the next version of the draft.
>> 
>> Especially the definition of the successor seems a bit unclear to me. A
>> simple example: only a single service provider with Node-ID 2 provides a
>> service. A node with ID 7 performs a lookup... How should this be handled?
>> I implemented it as follows: in case a lookup reveals only a single
>> service provider, it must be the direct successor.
>> 
>> [Jouni]: What would happen in this case is that the upward walk of the
>> service lookup reaches the root level because no successor can be found
>> at the lower levels of the tree. In my implementation, when this happens,
>> I select either the closest successor at the root level or, if there is
>> no successor, one of the available service providers at random (or the
>> only service provider if there is just one, as you are doing in your
>> implementation). Anyway, I'll add text clarifying this to the next
>> version of the draft.
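>>
>> In rough Python terms, that root-level fallback looks something like the
>> sketch below (purely illustrative; the names are mine, not from the
>> draft):
>>
>>     import random
>>
>>     def root_level_fallback(search_key, root_node_ids):
>>         # root_node_ids: Node-IDs of the RedirServiceProvider records
>>         # stored in the root tree node (level 0).
>>         if not root_node_ids:
>>             return None  # no service provider registered at all
>>         successors = [n for n in root_node_ids if n >= search_key]
>>         if successors:
>>             return min(successors)  # closest successor at the root level
>>         return random.choice(root_node_ids)  # no successor: pick at random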
>> 
>> A further notice: due to the periodically triggered re-registrations, the
>> consistency of the ReDiR tree cannot always be ensured. Theoretically,
>> this can lead to failed lookups. This derives from the fact that each new
>> service provider registration might affect the re-registrations of the
>> former service providers, which in turn might affect the re-registrations
>> of other service providers.
>> 
>> [Jouni]: Not sure about this. I could be wrong, but why would the
>> re-registration of a service provider affect the re-registrations of
>> other service providers? I mean, when a service provider X re-registers
>> or registers, it simply stores its own record at different levels in the
>> ReDiR tree as a part of the upward and downward walks. This
>> re-registration process does not influence the (re-)registrations of
>> other service providers. Or did I understand your comment incorrectly?
> 
>> I'll try to make an example: two service providers share the same
>> interval even at level 4. The first service provider (X) stops its
>> registration downward walk at level 2 because it is the only one there.
>> The second service provider (Y) goes down to level 3 to finish its
>> downward walk. X starts re-registration; now its downward walk needs to
>> go down to level 4, as levels 2 and 3 are shared with Y. Now Y starts
>> re-registration and goes down to level 5, as levels 3 and 4 are now
>> shared. X re-registers again and goes down to level 5. Finally, both
>> service providers have found their leaves and the tree is consistent. In
>> between, service lookups might go down to a level at which no service
>> provider information is stored (yet).
> 
> [Jouni]: Ok, I think you're right, that can happen. How often do you think
> it would occur? The default branching factor of the ReDiR tree is 10. I
> guess that the Node-IDs of the service providers that are (re-)registering
> at the same time would need to be very close to each other in order for the
> service providers to end up in the same interval at two levels of the tree
> (with the default branching factor of 10, level 2 has 1,000 intervals,
> level 3 has 10,000, level 4 has 100,000, and so on).
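>
> As a back-of-the-envelope sketch in Python (purely illustrative): level l
> of the tree is divided into b^(l+1) intervals, so two uniformly random
> Node-IDs fall into the same interval at level l with a probability of
> roughly 1/b^(l+1).
>
>     def intervals_at_level(level, b=10):
>         # b**level tree nodes at this level, b intervals per tree node
>         return b ** (level + 1)
>
>     for level in range(2, 6):
>         n = intervals_at_level(level)
>         print("level %d: %d intervals, collision probability ~ 1/%d"
>               % (level, n, n))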
> 
> I guess there might also be other similar situations, such as when a
> RedirServiceProvider record expires within a given interval at some level
> just before a service lookup for which that record is the closest successor
> reaches that level and interval, and before the re-registration stores a
> new RedirServiceProvider record in that interval. This situation should
> also be quite rare, though.
> 
> One potential strategy for dealing with the situations above is to fail the
> service lookup procedure and retry it - if the temporary inconsistency has
> been fixed by a re-registration between the old and new service lookup, the
> new service lookup will succeed.
> 
> Another strategy, which we are using in our ReDiR implementation, is to
> temporarily (for the duration of a service lookup) cache the
> RedirServiceProvider entries fetched during that specific service lookup at
> the peer that is carrying out the lookup. Thus, if for whatever reason the
> service lookup happens to go down to a level at which no service provider
> information is stored, the peer carrying out the search can go through the
> locally cached RedirServiceProvider entries to find the closest successor
> of the search key among them. This strategy would allow one to recover from
> the scenarios described above. Do you think it would help if we described
> this strategy in the draft? Do you have some other potential solutions in
> mind?
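>
> A minimal sketch of that cache-based fallback in Python (illustrative
> only; the names are mine, not from the draft):
>
>     def closest_successor_from_cache(search_key, cached_node_ids):
>         # cached_node_ids: Node-IDs of all RedirServiceProvider entries
>         # fetched earlier during this same service lookup.
>         successors = [n for n in cached_node_ids if n >= search_key]
>         if successors:
>             return min(successors)
>         return None  # nothing usable in the cache; e.g. fail and retry
>
>     # During the lookup itself, every successful Fetch simply extends the
>     # cache, e.g.: cached_node_ids.extend(fetched_node_ids)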
> 
>> more inline...
>> 
>> Regards Joscha
>> 
>> On 14.02.2013 19:59, Marc Petit-Huguenin wrote:
> I did a review of this draft, and I have some concerns.
> 
> First of all, some parts of the I-D are a verbatim copy of the text in the
> original paper. Is that OK?
>>> [Jouni]: I guess the main reason for that is that ReDiR is pretty
>>> complex to describe. So we took a safe bet and tried to reuse some of
>>> the text (the algorithm description) from the paper. But since there
>>> are also comments that the text is difficult to follow, I'll make an
>>> attempt to reformulate it in the next version of the draft.
>>> 
> Probably because some of the text comes from a research paper, it was very
> difficult to understand for me, and I am not sure that I have yet understood
> everything - I still have to write an implementation of this, and
> unfortunately I do not have enough time to do so before the end of the WGLC.
> On the other hand, I know that RELOAD.NET has an implementation, so I guess
> it is implementable.
>>> [Jouni]: I have also implemented the draft and think I got the
>>> implementation right. So if you have any further questions about things
>>> that are unclear, let me know and I can check the code to see how that
>>> specific thing was implemented (and clarify the same issue in the draft
>>> if necessary).
>>> 
> But I was not able to make sense of something in the example in Section 7:
> Why is the 4th peer added to level 0? Bullet 4 in Section 4.3 says "Node N
> MUST continue [repeating steps 2 and 3] until it reaches either the root or
> a level at which n.id is not the lowest or highest Node-ID in the interval
> I(level, n.id)". In this case 4 is not the lowest or highest Node-ID in
> the interval (lowest is 2, highest is 7), so why is it added to this node?
>>> I think the example is simply following the rules. At level 1, peer 4 is
>>> the lowest, so it goes up, fetches, and stores.
>>> 
>>> [Jouni]: That's correct. Node 4 starts from the starting level, which
>>> is level 2. It stores its record at level 2. Since Node-ID 4 is the
>>> lowest (only) Node-ID in its interval, the upward walk continues to
>>> level 1. At level 1, Node-ID 4 is also the lowest Node-ID in its
>>> interval, and thus the upward walk continues all the way to the root
>>> level. Node 4 stores its record at that level as well. Since Node-ID 4
>>> is neither the lowest nor the highest Node-ID there, the upward walk
>>> stops at level 0 (although it would stop at level 0 anyway, since it is
>>> of course not possible to go further up in the tree).
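>>>
>>> In rough pseudo-Python, the upward walk that the example follows looks
>>> something like the sketch below (illustrative only; fetch_interval and
>>> store stand for the RELOAD Fetch and Store operations on the tree node
>>> covering I(level, n_id), and the names are mine, not from the draft):
>>>
>>>     def register_upward_walk(n_id, start_level=2):
>>>         level = start_level
>>>         while True:
>>>             others = fetch_interval(level, n_id)  # Node-IDs already in I(level, n_id)
>>>             store(level, n_id)                    # store own record at this level
>>>             entries = others + [n_id]
>>>             # Stop at the root, or once n_id is sandwiched in its interval.
>>>             if level == 0 or (min(entries) != n_id and max(entries) != n_id):
>>>                 break
>>>             level -= 1                            # otherwise continue upward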
>>> 
> But if I recall correctly, that fact caused a few headaches for me too. Why
> does the store not depend on the information that was fetched before?
>>> [Jouni]: That is how it goes - if there has been a decision that the
>>> upward walk shall continue to the next level, a record is stored at
>>> that level 'automatically', regardless of the contents of the tree
>>> node. The contents of the tree node (i.e., whether n.id is sandwiched
>>> or not) will influence the decision on whether to stop the upward walk
>>> or continue it. The idea in the upward and downward walks is to ensure
>>> that the tree is populated densely enough so that service lookups will
>>> finish without requiring too many Fetch operations.
>> But does this make sense? If I recall correctly, the registration
>> information of nodes 3 and 4 at level 0 in the draft example will never be
>> used in lookups. As far as I understood, at most two service provider
>> entries (the lowest and the highest) are needed in each tree node to make
>> the algorithm work. None of the sandwiched service provider information is
>> needed. Most of it will also expire and not be renewed in the
>> re-registration process in case other service providers have registered in
>> the meantime. I think this 'automatic' store process just makes the tree a
>> bit more weird and the algorithm more difficult to understand. Or do you
>> see a reasonable reason for this behaviour that I don't see?
> BTW a similar example for the service lookup would be useful.
>>> [Jouni]: Ok, I'll add an example in the next version of the draft.
>>> 
> More comments:
> 
> - Section 3, 3rd paragraph: "contains a list of Node-IDs"
> 
> Technically each node is a Dictionary whose keys are Node-IDs and whose
> values contain a list of Destinations.
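>
> In Python-ish terms, I read that structure as something like this
> (placeholder types only; these names are not from the draft):
>
>     from typing import Dict, List
>
>     NodeId = int         # a RELOAD Node-ID
>     Destination = bytes  # placeholder for an encoded RELOAD Destination
>
>     # One ReDiR tree node: a Dictionary keyed by Node-ID, each value
>     # holding the list of Destinations for that service provider.
>     TreeNode = Dict[NodeId, List[Destination]]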
>>> [Jouni]: Ok, I'll modify this in the next version of the draft.
>>> 
> - Section 4.1
> 
> s/detination_list/destination_list/
>>> [Jouni]: Ok, will change this one also in the next version of the
>>> draft.
>>> 
> - Section 4.1
> 
> namespace is an opaque value, but the charset and the conversion between
> character string and byte string for the namespace are not defined.
>>> confirm
>>> 
>>> [Jouni]: Would specifying that it is an opaque UTF-8 encoded string be
>>> enough?
>>> 
> - Section 8
> 
> The document says that the redir namespace is added to the
> <mandatory-extension> element, meaning that all nodes MUST understand
> ReDiR, but isn't that against one of the goals of ReDiR, which is that, by
> using standard Store/Fetch, only a node wishing to store or fetch has to
> implement ReDiR?
>>> I think at least the RedirServiceProvider Data Structure must be
>>> supported, and also the Access Control Rules. But the algorithm itself
>>> might not need to be mandatory.
>>> 
>>> The RedirServiceProvider Data Structure does not need to be understood
>>> by the peer storing it, but you are right about the Access Control
>>> rule. My own draft about storing the Access Control rule solves this
>>> problem but, even if it is accepted as a WG item, we do not want to add
>>> a normative reference to it.
>>> 
>>> So I withdraw what I said - redir needs to be a mandatory extension
>>> at least until new access control policies no longer have to be
>>> hardcoded.
>>> 
>>> [Jouni]: Ok, so if I understood correctly, it is ok to leave the text
>>> as it is.
>>> 
> - Section 10.3
> 
> I think that we need a bit more explanation on what the turn-server and
> voice-mail service providers are.
>>> [Jouni]: I could remove the voice-mail service provider from the next
>>> version of the draft as I don't have a good explanation for that. For
>>> turn-server I will add some text.
>>> 
> On 01/31/2013 03:11 PM, Carlos Jesús Bernardos Cano wrote:
>>>>> Hi,
>>>>> 
>>>>> Hereby we are issuing a WGLC for
>>>>> draft-ietf-p2psip-service-discovery-06.
>>>>> 
>>>>> The WGLC will be open till the 15th of February. We kindly ask the
>>>>>  WG to review the document and provide comments.
>>>>> 
>>>>> If you have no comments and think the document is ready to be 
>>>>> submitted to IESG, please do send a note stating that to the WG
>>>>> ML.
>>>>> 
>>>>> Additional information about the document is below:
>>>>> 
>>>>> Title           : Service Discovery Usage for REsource LOcation
>>>>>                   And Discovery (RELOAD)
>>>>> Author(s)       : Jouni Maenpaa, Gonzalo Camarillo
>>>>> Filename        : draft-ietf-p2psip-service-discovery-06.txt
>>>>> Pages           : 15
>>>>> Date            : 2012-10-01
>>>>> 
>>>>> Abstract: REsource LOcation and Discovery (RELOAD) does not define
>>>>> a generic service discovery mechanism as part of the base
>>>>> protocol. This document defines how the Recursive Distributed
>>>>> Rendezvous (ReDiR) service discovery mechanism used in OpenDHT can
>>>>> be applied to RELOAD overlays to provide a generic service
>>>>> discovery mechanism.
>>>>> 
>>>>> 
>>>>> The IETF datatracker status page for this draft is:
>>>>> https://datatracker.ietf.org/doc/draft-ietf-p2psip-service-discovery
>>>>> 
>>>>> There's also a htmlized version available at:
>>>>> http://tools.ietf.org/html/draft-ietf-p2psip-service-discovery-06

-- 
Marc Petit-Huguenin
Email: [email protected]
Blog: http://blog.marc.petit-huguenin.org
Profile: http://www.linkedin.com/in/petithug