I have been using AI chatbots to write Nim code for a while. Some observations:
1. Anthropic's Claude 3 Opus produces Nim code that compiles without any adjustments most of the time. I have yet to try GPT-4o, but it's on my todo list.
2. It's great for writing tests, which is the most boring part of development and a much-needed one, as most Nim libraries could use more polish (the first sketch below shows what I mean).
3. I have tried to use it to come up with cases that break the code, as evangelized by this [post](https://verse.systems/blog/post/2024-03-09-using-llms-to-generate-fuzz-generators/). I would call that one busted: LLMs are not yet capable of finding serious bugs, but a structured fuzzer can. Even when I steered it towards a flaw, it would still miss it.
4. Nim has such a [fuzzer](https://github.com/status-im/nim-drchaos), but I don't see much adoption yet; please try it out, at least for open-source projects (see the fuzz-target sketch after this list).
5. Back on the topic of strengthening Nim's adoption, I have made some first steps in using LLMs to produce wrappers for well-known libraries like [vulkan](https://github.com/planetis-m/vulkan-tut/blob/master/vulkan_wrapper.nim) on top of the tool-generated bindings (the wrapper sketch below shows the style I mean).
6. My prompts looked like "Create an idiomatic wrapper for the nimgl/vulkan bindings in the style of Vulkan-Hpp", but I have since lost them because Anthropic did a hard reset in Europe. It would be nice to eventually have an automated script that sends all the prompts and regenerates the code (a rough sketch of such a script closes this post).
7. As far as social media coverage goes, it would be cool if more people used Twitter to promote Nim. I have been using AI to condense my thoughts so they fit Twitter's character limit.
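To give an idea of what I mean in point 2, here is a minimal sketch of the kind of test module I ask for, using std/unittest. The `parseHex` proc under test and its behaviour are made up for illustration; it just stands in for whatever library proc you want covered.

```nim
import std/unittest

func parseHex(s: string): int =
  ## Hypothetical library proc under test, only here so the example compiles.
  for c in s:
    case c
    of '0'..'9': result = result * 16 + (ord(c) - ord('0'))
    of 'a'..'f': result = result * 16 + (ord(c) - ord('a') + 10)
    of 'A'..'F': result = result * 16 + (ord(c) - ord('A') + 10)
    else: raise newException(ValueError, "invalid hex digit: " & c)

suite "parseHex":
  test "parses plain digits":
    check parseHex("10") == 16
  test "handles mixed case":
    check parseHex("fF") == 255
  test "rejects a 0x prefix":
    expect ValueError:
      discard parseHex("0x10")
```

The happy-path cases are trivial to write by hand, but the model is good at churning out the edge cases (mixed case, invalid input) that people tend to skip.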
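For comparison, this is roughly what a structured fuzz target looks like with drchaos (point 4). I am writing it from memory, so treat `defaultMutator` and the exact signature as assumptions and check the project's README before copying.

```nim
import drchaos

func fuzzMe(data: (seq[int32], string)) =
  ## drchaos mutates values of the parameter type directly,
  ## so the target works on structured data, not raw bytes.
  let (nums, s) = data
  if nums.len == 4 and nums[0] == 0x5eed and s == "nim":
    doAssert false, "counterexample found"

defaultMutator(fuzzMe)
```

A fuzzer will stumble into a condition like this in seconds; in my experiments the chatbots never proposed an input that reached it.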
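And this is the wrapper style I was asking for in point 5: take a C-level, two-call enumeration API and expose it as a proc that returns a seq and raises on failure. The types and the raw `vkEnumeratePhysicalDevices` stub below are simplified stand-ins so the sketch is self-contained; they are not the real nimgl/vulkan declarations.

```nim
type
  VkResult = enum
    vkSuccess, vkErrorOutOfDeviceMemory
  VkInstance = distinct pointer
  VkPhysicalDevice = distinct pointer
  VulkanError = object of CatchableError

proc vkEnumeratePhysicalDevices(instance: VkInstance, count: ptr uint32,
    devices: ptr VkPhysicalDevice): VkResult =
  ## Stand-in for the raw, tool-generated binding.
  count[] = 0
  result = vkSuccess

proc checkVk(res: VkResult) =
  ## Turn error codes into exceptions, Vulkan-Hpp style.
  if res != vkSuccess:
    raise newException(VulkanError, "Vulkan call failed: " & $res)

proc enumeratePhysicalDevices*(instance: VkInstance): seq[VkPhysicalDevice] =
  ## Idiomatic wrapper: hides the count/fill dance and the error codes.
  var count: uint32
  checkVk vkEnumeratePhysicalDevices(instance, addr count, nil)
  result.setLen(count.int)
  if count > 0:
    checkVk vkEnumeratePhysicalDevices(instance, addr count, addr result[0])
```

Writing this kind of boilerplate for a few hundred entry points is exactly the job I would rather hand to a model and then review.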
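Finally, the prompt-automation script from point 6 could be as small as this: read prompts from a directory, post each one to the LLM API and save the reply next to it. The endpoint, headers and JSON shape follow my recollection of Anthropic's messages API, so check them against the official docs before relying on this.

```nim
import std/[httpclient, json, os]

proc complete(client: HttpClient, prompt: string): string =
  ## Send one prompt and return the model's text reply.
  let body = %*{
    "model": "claude-3-opus-20240229",
    "max_tokens": 4096,
    "messages": [{"role": "user", "content": prompt}]
  }
  let resp = client.postContent("https://api.anthropic.com/v1/messages", $body)
  # In my experience the reply text lives under content[0].text.
  result = parseJson(resp)["content"][0]["text"].getStr()

when isMainModule:
  var client = newHttpClient()
  client.headers = newHttpHeaders({
    "x-api-key": getEnv("ANTHROPIC_API_KEY"),
    "anthropic-version": "2023-06-01",
    "content-type": "application/json"
  })
  # Keep the prompts in version control so a hard reset cannot lose them again.
  for path in walkFiles("prompts/*.txt"):
    writeFile(path.changeFileExt("nim"), client.complete(readFile(path)))
```

Keeping the prompts as plain files in the repo would also make it easy to regenerate the wrappers whenever the underlying bindings change.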