On 2017-05-05 02:23, Malcolm Greene wrote:
> I have a bunch of pickled dicts I would like to merge. I only want to
> merge unique keys, but I want to track the keys that are duplicated
> across dicts. Is there a newer dict-like data structure that is
> fine-tuned to that use case?
> Short of an optimized data structure, my plan is to convert dict keys
> to sets and compare these sets to determine which keys are unique and
> can be merged, and which keys are dupes and should be tracked in that
> manner. At a high level, does this sound like a reasonable approach?
> Thank you,
> Malcolm

(Assuming Python 3.)

Suppose you have a number of dicts:

    dict_1
    dict_2
    dict_3

Collect all of their keys into a list:

    all_keys = list(dict_1) + list(dict_2) + list(dict_3)

How many times does each of the keys occur?

    from collections import Counter
    key_count = Counter(all_keys)

Which keys are unique?

    unique_keys = [key for key in all_keys if key_count[key] == 1]
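Putting the pieces together, here is a minimal end-to-end sketch. The example dicts are assumed for illustration, standing in for the unpickled ones; the merge keeps only the keys that occur exactly once, and the duplicated keys are tracked separately.

```python
from collections import Counter

# Example dicts standing in for the unpickled ones.
dict_1 = {"a": 1, "b": 2}
dict_2 = {"b": 20, "c": 3}
dict_3 = {"c": 30, "d": 4}

dicts = [dict_1, dict_2, dict_3]

# Count how many dicts each key appears in.
key_count = Counter(key for d in dicts for key in d)

# Keys appearing exactly once can be merged safely.
merged = {key: value
          for d in dicts
          for key, value in d.items()
          if key_count[key] == 1}

# Keys appearing more than once are the dupes to track.
duplicate_keys = {key for key, count in key_count.items() if count > 1}

print(merged)          # {'a': 1, 'd': 4}
print(duplicate_keys)  # {'b', 'c'}
```

This avoids the explicit set-intersection bookkeeping: a single Counter pass gives you both the mergeable keys and the duplicates.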