Thanks for the review!

> > I did some testing by triggering an OOM, a no-OOM, and an OOM with
> > eviction afterwards, as a sanity check. All looks good.
> >
> > I've attached the tests I created with other basic testing, as dshash
> > lacks direct testing in general, if you're inclined to add them.
>
> I tried running a coverage report for this. It's pretty good. It
> doesn't cover these things:
>
> - dshash_create: OOM while allocating the bucket array.
> - dshash_find_or_insert_extended: OOM while inserting an item
>   (apparently, at least here, it instead hits OOM while resizing)
> - dshash_delete_key: the case where the entry is not found
> - dshash_dump: not called at all
> - resize: the case where we exit early due to a concurrent resize
> - delete_item_from_bucket: the case where there is more than one item
>   in the bucket

These are good to add. I included delete_entry and dshash_dump for more
coverage. Covering delete_item_from_bucket would require arranging for
two items to hash to the same bucket, and I'm not sure that's worth the
hassle. resize() will occur in the OOM test.
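For reference, the delete_entry coverage is roughly this shape (just a
sketch; ht, key, and TestDshashEntry follow the test module's naming):

    TestDshashEntry *entry;

    /* Cover dshash_delete_entry(): needs an exclusively-locked entry. */
    entry = dshash_find(ht, &key, true);    /* true = exclusive lock */
    if (entry == NULL)
        elog(ERROR, "key %d not found", key);
    dshash_delete_entry(ht, entry);         /* also releases the lock */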
> I think that adding a call to dshash_dump() somewhere would probably
> make sense, and I'd suggest also trying to delete a nonexistent key.

Done on both counts.
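Roughly, the new bits look like this (a sketch; the sentinel key is
made up here):

    int missing = -1;    /* a key the tests never insert (assumed) */

    /* Dump the table's internal state to stderr, for coverage. */
    dshash_dump(ht);

    /* Deleting a key that was never inserted should report not-found. */
    if (dshash_delete_key(ht, &missing))
        elog(ERROR, "deleted nonexistent key %d", missing);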
>
> On the code itself:
>
> + /* Verify all entries via find. */
> + for (int i = 0; i < count; i++)
> + {
> + entry = dshash_find(ht, &i, false);
> + if (entry == NULL)
> + elog(ERROR, "key %d not found", i);
> + dshash_release_lock(ht, entry);
> + }
>
> You could verify that entry->key == i. The current code wouldn't
> notice if the hash table returned a garbage pointer. You could
> possibly also include some kind of a payload in each record and verify
> that, e.g. set entry->value =
> some_homomorphism_over_0_to_9(entry->key).
I added key verification as you suggested.
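Concretely, the verification loop now looks something like this (a
sketch against the quoted code above):

    /* Verify all entries via find, and check the key round-trips. */
    for (int i = 0; i < count; i++)
    {
        entry = dshash_find(ht, &i, false);
        if (entry == NULL)
            elog(ERROR, "key %d not found", i);
        if (entry->key != i)
            elog(ERROR, "found key %d, expected %d", entry->key, i);
        dshash_release_lock(ht, entry);
    }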
> + dsa_set_size_limit(area, dsa_minimum_size());
>
> This is an abuse of the documented purpose of dsa_set_size_limit().
> Seems better to just pick a constant here.
I changed this to use a constant limit instead.
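i.e., something along these lines (the constant here is illustrative,
not necessarily what ended up in v2):

    /* Cap the DSA area at a small fixed size so inserts hit OOM quickly. */
    #define TEST_DSHASH_SIZE_LIMIT (1024 * 1024)    /* illustrative */

    dsa_set_size_limit(area, TEST_DSHASH_SIZE_LIMIT);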
> + /* Insert until OOM — without NO_OOM flag, this should raise ERROR. */
> + for (;;)
>
> I think it would be safer to code this as a fixed iteration count. For
> example, if you choose the size limit so that we should run out of
> memory after 10 entries, you could terminate this loop after 1000
> iterations. That way, if something goes wrong, we're more likely to
> see "expected out-of-memory, but completed all %d iterations" in the
> log rather than a core dump or whatever.
Done. I also removed the OOM retry test and kept just a simple test.
Adding entries after a delete is now exercised in the basic test.
> + {
> + TestDshashEntry *entry;
> +
> + entry = dshash_find_or_insert(ht, &key, &found);
> + dshash_release_lock(ht, entry);
> + key++;
> + }
>
> In other places, you check for entry == NULL, but not here.
Fixed.
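With both fixes (the bounded loop and the NULL check), the loop looks
roughly like this (a sketch; ht, key, and found as in the quoted code):

    /*
     * Insert until OOM; without the NO_OOM flag, this should raise
     * ERROR well before the loop bound is reached.
     */
    for (int i = 0; i < 1000; i++)
    {
        TestDshashEntry *entry;

        entry = dshash_find_or_insert(ht, &key, &found);
        if (entry == NULL)
            elog(ERROR, "insert of key %d unexpectedly failed", key);
        dshash_release_lock(ht, entry);
        key++;
    }
    elog(ERROR, "expected out-of-memory, but completed all %d iterations",
         1000);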
> I just got in trouble for letting a bare block sneak into my code, so
> now it's my turn to complain.
Ugggh. Fixed.
> I suggest git config user.email in whatever directory you use to
> generate patches like this, just to make life a bit easier for anyone
> who might eventually be committing one of them.
Sorry. I usually do, but I started using a new machine :(
--
Sami Imseih
Amazon Web Services (AWS)
v2-0001-Add-test-module-for-dshash.patch
