raiden00pl commented on PR #3397: URL: https://github.com/apache/nuttx-apps/pull/3397#issuecomment-3846000278
> Let's check the real code to say whether it is "garbage". testing/sched/rwsem_test/rwsem_comprehensive_test.c uses POSIX APIs and can pass on Linux, so this case can check the NuttX API and can become the official test case. Then why is it "garbage"?

@GUIDINGLI The generated file is not even close to the NuttX standard and should not be merged. Of course, it can be used to verify functionality; I have nothing against that. It can even be included in a PR as proof of testing. But at this stage it is not suitable for an upstream repo without fixes that most likely only a human can do.

"AI garbage" is just another term for "AI slop" (https://en.wikipedia.org/wiki/AI_slop). It may be more offensive, but I like it.

> I can't assure AI-generated code suit with Apache. But you can't say it not suit with Apache either.

I agree, but we have to be defensive when it comes to licensing. Copyleft licences are "viral" in nature and can completely ruin your product. Keep in mind that the approach to copyright varies in different parts of the world. While violating the GPL might not matter much in your country, in Europe it could destroy you and your company.

> The main difference between us is our stance on AI tools. I firmly believe the industry should be open and inclusive to AI as a new productive force. Despite its flaws, like imprecise outputs and potential compliance issues, AI's value in boosting R&D efficiency and cutting the cost of repetitive work is undeniable. Rather than writing it off entirely over individual problems, we should set up practical verification and optimization rules, making AI a helpful assistant for engineers and achieving better productivity via human-AI collaboration.

As I said earlier, AI is a tool like any other. It can be used in the right way or in the wrong way. You can boost your productivity, or you can destroy it (and introduce bugs and security holes).
Just remember that by boosting your team's productivity with AI, someone else must later review your changes. If you don't review your AI output, you're shifting that responsibility onto the community, which is not OK. You're boosting your team's productivity at the expense of the community.

I certainly won't advocate for a complete AI ban, but some rules are essential. Besides, a complete AI ban is practically impossible, because how could it be verified? However, this isn't a NuttX-specific problem, but a global one. It would be best to adapt AI rules from another large project, but I don't know if anyone has already implemented something like that.
