On 6/23/11 12:08 AM, Sebastian Schaffert wrote:
On 22.06.2011 at 23:01, Lin Clark wrote:
On Wed, Jun 22, 2011 at 9:33 PM, Sebastian
Schaffert<[email protected]> wrote:
Your complaint sounds to me a bit like "help, too many clients access my data".
I'm sure that Martin is really tired of saying this, so I will reiterate for
him: It wasn't his data, they weren't his servers. He's speaking on behalf of
people who aren't part of our insular community... people who don't have a
compelling reason to subsidize a PhD student's Best Paper award with their own
dollars and bandwidth.
And what about those companies subsidizing PhD students who write crawlers for
the normal Web? Like Larry Page in 1998?
Agents can use Linked Data just fine without firing 150 requests per second at
a server. There are TONS of use cases that do not require that kind of server
load.
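To make the point concrete, here is a minimal sketch of how an agent could throttle itself per host so it never exceeds a polite request rate. This is an illustrative example, not any particular crawler's implementation; the class name and rate are assumptions.

```python
import time

class Throttle:
    """Per-host rate limiter: allow at most `rate` requests per second to each host."""

    def __init__(self, rate):
        self.min_interval = 1.0 / rate      # minimum seconds between requests to one host
        self.last_request = {}              # host -> timestamp of the last request

    def wait(self, host):
        """Block until it is polite to hit `host` again, then record the request."""
        now = time.monotonic()
        elapsed = now - self.last_request.get(host, 0.0)
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request[host] = time.monotonic()

# Usage: call throttle.wait("example.org") before each HTTP request to that host.
throttle = Throttle(rate=1.0)
```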
And what if, in the future, 100,000 software agents access servers? We will
face the scalability issue eventually even without crawlers, so let's try to
solve it. The eyeball Web also has crawlers without too much of a problem, and
if Linked Data is to be successful we need to do the same.
Yep!
We have a problem, and it isn't necessarily widespread yet, so let's use the
very technology we are supporting to solve it.
WebID is 100% AWWW DNA; that's why it's such a natural and unobtrusive
solution to this inevitable Linked Data exploitation problem.
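The core of WebID is that an identity claim is verified by dereferencing Linked Data: the client presents a certificate carrying a WebID URI, and the server fetches the profile at that URI to check that it publishes the certificate's public key. Below is a heavily simplified sketch of that check; the function names and the dict standing in for fetched RDF profiles are illustrative assumptions, not the actual protocol machinery.

```python
def verify_webid(webid_uri, cert_public_key, dereference):
    """Return True if the profile published at `webid_uri` lists `cert_public_key`.

    `dereference` is a stand-in for an HTTP GET of the WebID URI plus RDF
    parsing; here it just maps URIs to already-parsed profile dicts.
    """
    profile = dereference(webid_uri)
    return cert_public_key in profile.get("keys", set())

# Toy "Web": a dict mapping WebID URIs to parsed profile documents (assumed data).
web = {"https://example.org/alice#me": {"keys": {"AB:CD:EF"}}}

# A client whose certificate key matches the published profile is accepted.
verify_webid("https://example.org/alice#me", "AB:CD:EF",
             lambda uri: web.get(uri, {}))
```

Because the server now knows *who* is making the requests, it can rate-limit or block abusive agents by identity rather than by IP address.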
Greetings,
Sebastian
--
Regards,
Kingsley Idehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen