Am 07.12.2013 00:52, schrieb Junio C Hamano:

> * kb/doc-exclude-directory-semantics (2013-11-07) 1 commit
>  - gitignore.txt: clarify recursive nature of excluded directories
>  Originally merged to 'next' on 2013-11-13
>  Kicked back to 'pu' to replace with a newer reroll ($gmane/237814
>  looked OK but there seems to have some loose ends in the
>  discussion).

I'm unaware of any loose ends; could you clarify?

Btw., $gmane/237814 seems to be a different topic; the version in 'next' (and now
in 'pu') was $gmane/237429.

> * kb/fast-hashmap (2013-11-18) 14 commits
>   (merged to 'next' on 2013-12-06 at f90be3d) 

Damn, a day too late :-) I found these two glitches; is a fixup patch OK,
or should I do a reroll (or a separate patch on top)?


--- 8< ---
Subject: [PATCH] fixup! add a hashtable implementation that supports O(1) 

Use 'unsigned int' for hash-codes everywhere.

Extending 'struct hashmap_entry' with an int-sized member shouldn't waste
memory on 64-bit systems. This is already documented in api-hashmap.txt,
but needs '__attribute__((__packed__))' to work. Reduces e.g.

 struct name_entry {
     struct hashmap_entry ent;
     int namelen;
     char *name;
 };

from 32 to 24 bytes.

Signed-off-by: Karsten Blees <>
 hashmap.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hashmap.h b/hashmap.h
index f5b3b61..b64567b 100644
--- a/hashmap.h
+++ b/hashmap.h
@@ -15,7 +15,7 @@ extern unsigned int memihash(const void *buf, size_t len);
 /* data structures */
-struct hashmap_entry {
+struct __attribute__((__packed__)) hashmap_entry {
        struct hashmap_entry *next;
        unsigned int hash;
@@ -43,7 +43,7 @@ extern void hashmap_free(struct hashmap *map, int 
 /* hashmap_entry functions */
-static inline void hashmap_entry_init(void *entry, int hash)
+static inline void hashmap_entry_init(void *entry, unsigned int hash)
        struct hashmap_entry *e = entry;
        e->hash = hash;