-----Original Message-----
From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
Behalf Of bugzi...@jessica.w3.org
Sent: Friday, November 19, 2010 4:16 AM

>> Just looking at this list, I guess I'm leaning towards _not_ limiting the
>> maximum key size and instead pushing it onto implementations to do the hard
>> work here.  If so, we should probably have some normative text about how
>> bigger keys will probably not be handled very efficiently.

I was trying to make up my mind on this, and I'm not sure it's a good idea. 
What would the options be for an implementation? Hashing keys into smaller 
values is pretty painful because of the sorting requirements: we'd have to 
index the data twice, once on the key prefix that fits within limits, and a 
second time on a hash plus some sort of discriminator for collisions. Just 
storing a prefix of the key under the covers obviously won't fly... am I 
missing some other option?
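
Just to make the two-index idea concrete, here's a rough sketch; the names, 
the prefix budget, and the hashing scheme are all made up for illustration, 
not a proposal:

    import { createHash } from "crypto";

    // Hypothetical cap on how much of the key the ordering index can hold.
    const PREFIX_LIMIT = 1024;

    // Ordering index entry: only the bounded prefix, so range scans stay
    // within the limit but lose precision once two keys agree on the first
    // PREFIX_LIMIT characters.
    function orderingEntry(fullKey: string): string {
      return fullKey.slice(0, PREFIX_LIMIT);
    }

    // Uniqueness index entry: a fixed-size hash of the full key plus a
    // discriminator the store would have to hand out whenever two distinct
    // keys happen to hash to the same value.
    function uniquenessEntry(fullKey: string, discriminator: number): string {
      const hash = createHash("sha256").update(fullKey, "utf8").digest("hex");
      return hash + ":" + discriminator;
    }

The painful part is that keys differing only past the prefix tie in the 
ordering index, and the hash doesn't preserve order, so breaking those ties 
correctly means going back to the full key anyway.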

Clearly consistency in these things is important so people don't get caught 
off guard. I wonder if we should just pick a "reasonable" limit, say 1K 
characters (yeah, trying to do something weird to avoid details of how stuff 
is actually stored), and run with it. I looked around at a few databases 
(from a single vendor :)), and they all seem to be well over this, but not by 
orders of magnitude (2KB to 8KB seems to be the range of upper limits in 
practice).
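
If we did pick a limit, the enforcement side is at least trivial; roughly 
something like the following, where MAX_KEY_CHARS and the DataError-style 
rejection are both placeholders, not anything the spec says today:

    // Placeholder limit; nothing in the current draft defines this number.
    const MAX_KEY_CHARS = 1024;

    function assertKeyWithinLimit(key: string): void {
      if (key.length > MAX_KEY_CHARS) {
        // Reject oversized keys up front instead of truncating or hashing them.
        throw new DOMException(
          "Key exceeds " + MAX_KEY_CHARS + " characters",
          "DataError"
        );
      }
    }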

Thanks
-pablo

