So, can at least one person say uthash will be fine to use in OpenOCD code?

After the lack of response on this thread last time, I went ahead and did an
implementation using gnulib, which I now have to rewrite using something
else. I'd like to only do that once.

Tim

On Tue, May 18, 2021 at 11:42 AM Tim Newsome <[email protected]> wrote:

> It turns out gnulib is GPLv3, which is not compatible with our GPLv2.
>
> So... How about uthash <https://troydhanson.github.io/uthash/>? Antonio
> says the license is compatible.
>
> Here are his observations:
>
>> Uthash is already packaged as a library in Debian, Ubuntu, and Arch Linux.
>> Plus it is just a set of .h files! Everything is done through macros.
>> It is required only at build time, not at runtime.
>> I think it should be added as a git submodule, like jimtcl, with the
>> possibility to ignore the submodule and compile OpenOCD with the
>> version of uthash already installed on the PC.
>> Copying uthash into the OpenOCD code is not my preferred choice because
>> it would add extra work to track the upstream version and keep OpenOCD
>> in sync.
>
>
> That all sounds good to me.
>
> Tim
>
> On Wed, Feb 17, 2021 at 12:20 PM Tim Newsome <[email protected]> wrote:
>
>> On Tue, Feb 16, 2021 at 4:17 PM Steven Stallion via OpenOCD-devel <
>> [email protected]> wrote:
>>
>>> Out of curiosity, why even bother with a more advanced structure when a
>>> list will suffice? I agree it feels gross to iterate to find a given
>>> thread, however we're looking at a relatively low number of iterations. Is
>>> it really worth introducing an additional dependency to avoid a loop?
>>>
>>
>> A list is fine. But I don't want to have to implement that list. linux.c
>> implements its own list. list.h implements a slightly different list, but
>> doesn't implement look-up by a key. I could build something on top of that
>> list, or write my own from scratch. That's just a big waste of effort
>> though, given that there already are fast, well-tested implementations out
>> there. If I'm grabbing a library from somewhere, I'll get one that is
>> efficient even with lots of data, because it'll save somebody someday
>> and is no extra work for me.
>>
>> Tim
>>
>

