On Sat, 05 Aug 2000, Henry Hemming wrote:
> In my opinion you don't need secondary hashing in case of collision: when
> you have a collision you already have a close enough distinction, and there
> is no point in having ..001 and ..002 in the hash table referring to the same
> direction. By controlling the size of the hash table you will also be able to
> control the accuracy of your routing. Besides, if an originally empty hash
> table produces problems, you can rehash the hash table until you reach the
> level of accuracy that you want (I believe the java.util.hash* classes used
> this method), and use chaining instead of secondary hashing.

What you are discussing is a different, lossy method for handling the DataStore.
Calling it a hashtable is a little misleading.

If I understand you correctly, this is basically equivalent to saying that we
divide the keyspace into 500 equal-sized partitions, and allot one slot in the
DataStore to each interval. When a new key in a given interval arrives, we
simply overwrite the old key in that slot.
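
In code, the scheme as I read it would be something like this (a hedged
sketch, not a proposed implementation; the names are mine, the 500-slot
figure is from your description, and I assume a key can be reduced to a
32-bit position in the keyspace):

    // Sketch of the lossy partitioned store described above.
    public class LossyStore {
        static final int SLOTS = 500;
        private final Object[] slots = new Object[SLOTS];

        // Map a key's 32-bit position in the keyspace to one of
        // SLOTS contiguous, equal-sized intervals:
        // floor(position * SLOTS / 2^32).
        private static int interval(long position) {
            return (int) (((position & 0xffffffffL) * SLOTS) >>> 32);
        }

        // A new key in an interval simply overwrites the old one.
        public void put(long position, Object data) {
            slots[interval(position)] = data;
        }

        public Object get(long position) {
            return slots[interval(position)];
        }
    }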

Without speculating as to whether this works or not (because nodes are _meant_
to have more entries in certain intervals than others, I don't think it does),
it is obviously very different from the current model of allowing entries to
die with time.

> >
> > What you are missing is that I am referring to the general problem of keys
> > which don't have an order at all, or at least not one that corresponds to
> > the closeness relation.
> >
> > To put it in programmer terms, we would want something that works for a Key
> > about which we know nothing of the value, but whose only public method is:
> >
> > boolean isCloser(Key A, Key B)
> By using key comparison you can never achieve anything better than O(log n),
> theoretically, for retrieval.

O(log n) is quite fine with me. Nobody has told me how to get it (most people
seem to agree it is not possible).
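
To make the constraint concrete, here is a hedged sketch (the isCloser method
is the one quoted above, read as "k.isCloser(a, b) is true when a is closer to
k than b is"; the surrounding search code is mine): with only pairwise
closeness comparisons and no ordering, the obvious general lookup is a linear
scan, which is O(n), not O(log n).

    interface Key {
        // true when a is closer to this key than b is
        boolean isCloser(Key a, Key b);
    }

    class ClosestMatch {
        // With nothing but closeness comparisons, finding the
        // nearest stored key to a target takes O(n) comparisons.
        static Key nearest(Key target, Key[] stored) {
            Key best = null;
            for (int i = 0; i < stored.length; i++) {
                if (best == null || target.isCloser(stored[i], best)) {
                    best = stored[i];
                }
            }
            return best;
        }
    }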

> > In order to produce maximum entropy, each 4-byte part of the key is XORed
> > into the hashcode. This could be changed, but there are much better methods
> > to produce ordered lists (binary trees) anyway.
> Well, I was after the hashtable's O(1) search, instead of the binary tree's
> O(log n). Maximum entropy is not achieved by XORing all the parts if the
> parts don't originally have equal entropy. And in my opinion they don't in
> this case, as I already explained.

a) The difference between O(log n) and O(1) is hardly enough to be of concern
here.
b) Using the entire key is not going to produce less entropy than using only
part of it.
c) See above.
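
For what it is worth, the XOR folding described above amounts to something
like this (a hedged sketch, not the actual DataStore code; the method name is
mine):

    // Every 4-byte part of the key is XORed into a 32-bit
    // hashcode, so the whole key -- not just part of it --
    // contributes to the result.
    static int foldKey(byte[] key) {
        int h = 0;
        for (int i = 0; i + 3 < key.length; i += 4) {
            h ^= ((key[i]     & 0xff) << 24)
               | ((key[i + 1] & 0xff) << 16)
               | ((key[i + 2] & 0xff) << 8)
               |  (key[i + 3] & 0xff);
        }
        return h;
    }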

-- 
\oskar

_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
