On 08.11.2013 18:08, Junio C Hamano wrote:
> Karsten Blees writes:
>
>> What about this:
>>
>> #define HASHMAP_GROW_AT 80
>> #define HASHMAP_SHRINK_AT 16
>
> I am not too enthused for three reasons. The fact that these are
> 100-based numbers is not written down anywhere other than the place
>
Karsten Blees writes:
> What about this:
>
> #define HASHMAP_GROW_AT 80
> #define HASHMAP_SHRINK_AT 16
I am not too enthused for three reasons. The fact that these are
100-based numbers is not written down anywhere other than the place
they are used, the places that use these need to consistently [...]
From: "Karsten Blees"
However, defining the constants inversely is a bit unintuitive (i.e.
1.25 instead of 0.8, 6 instead of 0.166). Perhaps the thresholds
should also be calculated once on resize, not on every add / remove.
What about this:
#define HASHMAP_GROW_AT 80
#define HASHMAP_SHRINK_AT 16
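The idea above can be sketched as follows: keep the 100-based thresholds as constants, and cache the absolute grow/shrink limits once per resize instead of recomputing them on every add/remove. This is a minimal illustration, not git's actual hashmap code; the struct layout and function names are assumptions.

```c
#include <stddef.h>

/* 100-based load factor thresholds, as proposed in the thread. */
#define HASHMAP_GROW_AT 80    /* grow when size exceeds 80% of tablesize */
#define HASHMAP_SHRINK_AT 16  /* shrink when size drops below 16% of tablesize */

struct hashmap {
	unsigned int tablesize;
	unsigned int size;
	unsigned int grow_at;    /* cached: tablesize * HASHMAP_GROW_AT / 100 */
	unsigned int shrink_at;  /* cached: tablesize * HASHMAP_SHRINK_AT / 100 */
};

/* Recompute the cached thresholds; called only when the table is resized. */
static void set_thresholds(struct hashmap *map, unsigned int tablesize)
{
	map->tablesize = tablesize;
	map->grow_at = (unsigned int)((unsigned long)tablesize * HASHMAP_GROW_AT / 100);
	map->shrink_at = (unsigned int)((unsigned long)tablesize * HASHMAP_SHRINK_AT / 100);
}
```

With the limits cached, the hot add/remove paths only compare `size` against `grow_at` / `shrink_at`, and the fact that the constants are percentages is confined to `set_thresholds`.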
On 07.11.2013 22:40, Junio C Hamano wrote:
> Karsten Blees writes:
>
>> +`void hashmap_add(struct hashmap *map, void *entry)`::
>> +
>> +Adds a hashmap entry. This allows adding duplicate entries (i.e.
>> +separate values with the same key according to hashmap_cmp_fn).
>> ++
>> +`map` is the hashmap structure.
Karsten Blees writes:
> +`void hashmap_add(struct hashmap *map, void *entry)`::
> +
> +	Adds a hashmap entry. This allows adding duplicate entries (i.e.
> +	separate values with the same key according to hashmap_cmp_fn).
> ++
> +`map` is the hashmap structure.
> ++
> +`entry` is the entry [...]
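The documented behavior — `hashmap_add` inserts unconditionally, so two entries with equal keys can coexist — follows naturally from chained buckets: the new entry is simply linked into its bucket without any equality check. A minimal sketch under that assumption (the struct, table, and function names here are hypothetical, not the proposed API):

```c
#include <stddef.h>

struct entry {
	unsigned int hash;
	struct entry *next;
	int value;
};

#define TABLE_SIZE 16
static struct entry *table[TABLE_SIZE];

/* Insert without checking for an existing equal key: duplicates allowed. */
static void hashmap_add_sketch(struct entry *e)
{
	unsigned int b = e->hash % TABLE_SIZE;
	e->next = table[b];
	table[b] = e;
}

/* Count all entries sharing a hash, i.e. all "duplicates" in one bucket chain. */
static int count_with_hash(unsigned int hash)
{
	int n = 0;
	struct entry *e;
	for (e = table[hash % TABLE_SIZE]; e; e = e->next)
		if (e->hash == hash)
			n++;
	return n;
}
```

A caller that wants key-uniqueness would instead look the key up first (or use a put-style operation that replaces); `hashmap_add` deliberately skips that cost.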
The existing hashtable implementation (in hash.[ch]) uses open addressing
(i.e. resolve hash collisions by distributing entries across the table).
Thus, removal is difficult to implement with less than O(n) complexity.
Resolving collisions of entries with identical hashes (e.g. via chaining)
is left [...]
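The removal argument can be made concrete: with chained buckets, removing an entry means walking one bucket's list and unlinking the match, which costs O(chain length) rather than scanning or re-shuffling the whole table as open addressing may require. A minimal self-contained sketch (names and layout are assumptions for illustration, not the actual implementation):

```c
#include <stddef.h>

struct entry {
	unsigned int hash;
	struct entry *next;
	int value;
};

#define TABLE_SIZE 16
static struct entry *table[TABLE_SIZE];

static void add_sketch(struct entry *e)
{
	unsigned int b = e->hash % TABLE_SIZE;
	e->next = table[b];
	table[b] = e;
}

/* Unlink and return the first matching entry, or NULL if not found.
 * Only the one bucket chain is traversed. */
static struct entry *remove_sketch(unsigned int hash, int value)
{
	struct entry **p = &table[hash % TABLE_SIZE];
	while (*p) {
		if ((*p)->hash == hash && (*p)->value == value) {
			struct entry *removed = *p;
			*p = removed->next;
			removed->next = NULL;
			return removed;
		}
		p = &(*p)->next;
	}
	return NULL;
}

static int bucket_len(unsigned int hash)
{
	int n = 0;
	struct entry *e;
	for (e = table[hash % TABLE_SIZE]; e; e = e->next)
		n++;
	return n;
}
```

The pointer-to-pointer walk (`struct entry **p`) avoids special-casing removal of the chain head.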