I can't see how this would really work. For example, say a robot makes a change, then a user edits that blip. The document-changed event comes from the user, not the robot, but the content of the blip is from the robot.
Or, for example, a gadget using the data API could change the blip and you get your infinite cycle. All it really does is top-level filtering of events; it doesn't really protect the wave. Surely a better solution is to have this handled entirely by the robot, and to have server-side anti-spam protection that blocks users for a time period if they spam (e.g. cycle) and removes/bans them from the wave if they are repeat offenders. The goal is really that no matter what combination of gadgets and robots gets added to a wave, the changes-per-time-period shouldn't exceed a certain threshold. I mean, do we really care if there's an infinite loop in a wave which gets one update a minute? It's easy to handle and clean up yourself. The problem is when robots go out of control and start posting hundreds of blips a second.

~ Doug.

On Nov 3, 2:16 am, Nathanael Abbotts <[email protected]> wrote:
> If, in the capabilities.xml of the robot (or equivalent), a robot could
> specify whether it wants to receive events triggered by robots for each
> event, then this problem could be fixed.
>
> For instance, a translation robot wouldn't want to receive BLIP_SUBMITTED
> events from other robots, but a table-of-contents robot definitely would.
>
> If even more detail is needed or wanted, a robot could specify that it only
> gets events from specific robots. Here is an extract from an example XML
> file:
>
> <w:capabilities>
>   <w:capability name="ANNOTATED_TEXT_CHANGED" context="ROOT,SELF"
>       filter="myrobot/annotationname"
>       robots="[email protected], [email protected]" />
>   <w:capability name="BLIP_SUBMITTED" context="ROOT,SELF" robots="" />
>   <w:capability name="WAVELET_SELF_ADDED" context="ROOT" robots="*" />
> </w:capabilities>
>
> Here we have three events. The first one is ANNOTATED_TEXT_CHANGED, which
> is used by the robot to track specific selections within a blip. The robot
> only cares if a human, or one of two specific robots, changes this.
>
> Next, BLIP_SUBMITTED. For this event, the robot is not interested in
> events from other robots, so an empty string is provided.
>
> Finally, WAVELET_SELF_ADDED. Because this robot happens to perform an
> important action which, if not done, will cause errors in future, the
> robot wants this to happen regardless of who added it.
>
> --
> Nathanael Abbotts
>
> Email: [email protected]
> Wave: [email protected]
> Twitter: @natabbotts (http://twitter.com/natabbotts)
>
> On Tue, Nov 2, 2010 at 07:32, Vega <[email protected]> wrote:
> > If it is possible to handle bursts of traffic from arbitrary endpoints
> > without falling over, that is great. But what about robot providers?
> > Let's take, for example, a robot running on App Engine. It only takes a
> > child to create a new wave, add some text, then add two translating
> > robots and let them translate each other until the wave explodes (or
> > the wave server somehow discovers the abuse and... closes the wave?).
> > These robots will burn a lot of their quota. And it only takes a child
> > to cause such abuse. I think a translating robot will be interested in
> > reacting only to human events, or to non-human events that it trusts.
> > Failing to provide such a mechanism will totally expose robots to
> > abuse, IMHO. And wave server providers will also pay the price in the
> > form of handling spam traffic.
> >
> > On Nov 2, 6:50 am, Wim <[email protected]> wrote:
> > > Why not? If the robot's aim is best served by responding to all
> > > messages, then why shouldn't it respond to all messages? Imagine a
> > > translation bot that embeds replies in a blip translating that blip
> > > into selected languages; why should other robots' blips be ignored
> > > by this bot?
> > >
> > > It should be up to the robot to determine what messages it is
> > > interested in; the first case for almost all robots would be
> > > ignoring messages from itself.
> > > After that, it would have to filter based on what it is set up to
> > > do: whether that is checking the content to determine if its control
> > > commands are there, or if the author matches the author that added
> > > it to the conversation, or .... This is all part of the logic of the
> > > robot, based on how it is to behave.
> > >
> > > Allowing robots to respond to other robots definitely does have the
> > > problem of infinite recursion. Something as simple as two echoey
> > > robots in the same wave from different servers could cause this
> > > problem, or two spell-checking robots battling over whether it is
> > > spelt "color" or "colour". As Alex said, this issue should be dealt
> > > with at the server level; maybe servers should have some method to
> > > provide both clients and robots with a 'warning' that they are close
> > > to being cut off, and then remove them from the wave if they
> > > continue spamming it.
> > >
> > > This problem should also be dealt with at the robot level: something
> > > like a spell-checking robot should store a list of words it has
> > > changed in a private wavelet and not try to change a word a second
> > > time. E.g. if you are commenting on the "Color" class of some code
> > > and the spell checker changes it to "Colour", you should be able to
> > > change it back and the spell checker should ignore the word.
> > >
> > > On Nov 2, 3:27 pm, "Gamer_Z." <[email protected]> wrote:
> > > > IMHO this should not be a necessary part of the protocol. Robots
> > > > should not be programmed to respond to every message. Doing that
> > > > would not have any benefit to the developer, because users would
> > > > very quickly get annoyed with them.
> > > >
> > > > On Nov 1, 10:19 pm, Alex North <[email protected]> wrote:
> > > > > In fact, it was an architectural flaw in Google Wave that
> > > > > robots could not talk to each other. It was never desired
> > > > > behaviour.
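Wim's robot-level defence above (a spell checker remembering which words it has already corrected, so a human revert sticks) might look like this minimal sketch. A Google Wave robot could persist such state in a private wavelet; a plain set stands in for that here, and the correction table is invented.

```python
# Sketch of a spell-checking robot that corrects a word only once.
# In a real robot, `already_changed` would be persisted (e.g. in a
# private wavelet) rather than held in memory.

CORRECTIONS = {"color": "colour"}  # illustrative correction table

class SpellChecker:
    def __init__(self):
        self.already_changed = set()   # words we corrected once before

    def check(self, word):
        key = word.lower()
        if key in self.already_changed:
            return word                # user reverted it once: leave it alone
        if key in CORRECTIONS:
            self.already_changed.add(key)
            return CORRECTIONS[key]
        return word

bot = SpellChecker()
print(bot.check("color"))   # "colour" -- first sighting gets corrected
print(bot.check("color"))   # "color"  -- second sighting is ignored
```

This also breaks the two-spell-checkers deadlock Wim mentions: each bot makes its change at most once, so the "color"/"colour" battle terminates.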
> > > > > Our vision was that robots could do "anything a human can do".
> > > > > Of course we didn't quite get there, but we were working towards
> > > > > it.
> > > > >
> > > > > In the current federation protocol there is no means to identify
> > > > > a participant as being automated or not. Even if there were,
> > > > > that would require trusting arbitrary federated wave servers to
> > > > > correctly identify their participants. Apart from there being
> > > > > many valid reasons that the distinction is unnecessary, this
> > > > > would be somewhat like trusting mail servers not to send spam.
> > > > > Protection against abuse needs to be done elsewhere, possibly
> > > > > imperfectly.
> > > > >
> > > > > There is some concern that two talking robots could enter an
> > > > > infinite loop. I'm not convinced that this is something we need
> > > > > to design the protocol to protect against. We should instead
> > > > > implement wave servers such that they can handle bursts of
> > > > > traffic from arbitrary endpoints without falling over, perhaps
> > > > > with some kind of throttle. If they detect that some (possibly
> > > > > federated) participant or server is abusive, it's the receiving
> > > > > server's call whether to cut them off. No loop is infinite,
> > > > > because waves have size limits: right now there is a size limit
> > > > > built right into the protocol, but even if there wasn't, there
> > > > > would be an effective (if unpredictable) size limit when some
> > > > > server was no longer able to hold the wave in memory. Again, a
> > > > > robust server should be able to evict such a wave without going
> > > > > down in flames.
> > > > >
> > > > > There are some legitimate user-experience reasons that it would
> > > > > be useful to identify robots on some kind of best-effort basis.
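The server-side throttle Alex suggests (and Doug's "changes-per-time-period shouldn't exceed a threshold") is commonly implemented as a token bucket per participant. A minimal sketch, with rates and names chosen only for illustration:

```python
# Sketch of a per-participant token bucket: a sustained rate of deltas
# is allowed, plus a short burst; anything beyond that is denied, and a
# repeatedly denied participant could be warned and then evicted.

import time

class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # sustained deltas allowed per second
        self.capacity = burst          # short bursts tolerated
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                   # deny this delta

bucket = TokenBucket(rate_per_sec=1.0, burst=5)
results = [bucket.allow() for _ in range(10)]  # 10 back-to-back deltas
print(results.count(True))  # 5 -- only the burst allowance gets through
```

Note this matches Doug's point: a one-update-a-minute loop sails under the threshold and is harmless, while a hundreds-of-blips-a-second runaway is cut off regardless of which robot or gadget caused it.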
> > > > > Wave providers may wish to provide a bunch of standard robots or
> > > > > something, and display some indication to their users of the
> > > > > automated nature of these participants to set the right
> > > > > expectations. But that could only ever be a best-effort service;
> > > > > they couldn't reliably classify arbitrary participants as
> > > > > automated or not. I'm ignoring these use cases for now; such a
> > > > > mechanism doesn't need to go into the core protocol but would
> > > > > appear in a server's profile implementation.
> > > > >
> > > > > My 2 cents (OK, maybe a bit more than that),
> > > > > Alex
> > > > >
> > > > > On 2 November 2010 05:05, Vega <[email protected]> wrote:
> > > > > > I personally think that the solution should be like this:
> > > > > >
> > > > > > 1) Wave servers should be able to mark users as humans or
> > > > > > non-humans.
> > > > > > 2) The deltas exchanged by wave servers should include, for
> > > > > > each participant, its type: human/non-human.
> > > > > > 3) Robots should be allowed to receive only events caused by
> > > > > > humans, unless
> > > > > > 4) robot (A) specified in its capabilities that it is
> > > > > > interested in receiving events from another robot (B), and
> > > > > > robot (B) specified in its capabilities that it is interested
> > > > > > in sending events to (A).
> > > > > >
> > > > > > Of course, such a solution will not prevent DoS attacks, but
> > > > > > at least it will totally prevent scenarios where two robots
> > > > > > enter an infinite loop due to bad design or a bug.
> > > > > >
> > > > > > On Nov 1, 7:58 pm, Vega <[email protected]> wrote:
> > > > > > > Wave allows robots to be first-order participants in waves.
> > > > > > > This is a really great feature with huge potential; however,
> > > > > > > it also might lead to unintentional infinite loops caused by
> > > > > > > robots responding to events caused by other robots.
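The mutual opt-in in Vega's points 3 and 4 amounts to a simple delivery rule: a robot-to-robot event goes through only if the receiver listens for the sender AND the sender agrees to feed the receiver. A sketch, with invented capability tables and participant IDs:

```python
# Sketch of Vega's mutual opt-in rule for robot-to-robot events.
# Both tables would come from each robot's declared capabilities; the
# entries here are made up for illustration.

# receiver -> set of robots it accepts events from
LISTENS_TO = {
    "toc-bot@a.example": {"translator@b.example"},
    "translator@b.example": set(),        # accepts human events only
}
# sender -> set of robots it agrees to send events to
SENDS_TO = {
    "translator@b.example": {"toc-bot@a.example"},
}

def deliver_robot_event(sender, receiver):
    return (sender in LISTENS_TO.get(receiver, set())
            and receiver in SENDS_TO.get(sender, set()))

print(deliver_robot_event("translator@b.example", "toc-bot@a.example"))
# True  -- both sides opted in
print(deliver_robot_event("toc-bot@a.example", "translator@b.example"))
# False -- no return path, so no two-robot loop
```

Because delivery requires consent on both ends, an accidental loop needs both robots to have explicitly opted in to each other, which turns the infinite-loop failure mode from a default into a deliberate (and debuggable) configuration.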
> > > > > > > The Google Wave implementation attempted to solve this
> > > > > > > issue by preventing robots from receiving non-human events.
> > > > > > > It seems that this solution was effective enough in the
> > > > > > > Google Wave implementation.
> > > > > > > However, for a federated system such as Wave in a Box, such
> > > > > > > a solution might not be possible even in principle, as
> > > > > > > there's no way to track whether a participant from another
> > > > > > > federated server is human or robot.
> > > > > > > Moreover, Google Wave's solution is too restrictive, as it
> > > > > > > makes robot-robot communication nearly impossible to
> > > > > > > implement and thus limits robot functionality.
> > > > > > > Let us discuss the issue and see what could be a possible
> > > > > > > solution viable for Wave in a Box.
> > > > > > > Please also take a look at [0].
> > > > > > >
> > > > > > > [0] http://code.google.com/p/wave-protocol/issues/list?cursor=131

--
You received this message because you are subscribed to the Google Groups
"Wave Protocol" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/wave-protocol?hl=en.
